[Algorithms] Sampling vs fitting values over vertices
From: pete d. <pb...@po...> - 2009-11-06 23:01:13
Our engine (like many others, I presume) stores some lighting/misc data per mesh vertex that would really be best stored in a texture. This works reasonably well and in itself doesn't bother me terribly, but the fact that we sample the values at the vertex itself does. The artifacts, i.e. the occasional dark tri that shows up when the occlusion at the verts is much darker than the average over the face, are something our artists have gotten used to painting out, but I'd hope we could give much better baseline results.

So instead of simply sampling at the vertices, I was planning on generating samples on each face (roughly uniformly over the outside area of the mesh) and fitting the vertex values by least squares, i.e. minimizing the actual deviation of the interpolated values over the face from what they should be, instead of ignoring the fact that they'll be interpolated. In addition to accuracy, it should let us handle hard edges, double-sided tris and a few other situations more gracefully than we do now.

I presume people have done this before, so before trying it out I thought I'd ask if anyone has had positive results, rules of thumb about how densely the data needs to be sampled to get good results in most cases, whether applying a similar scheme to additionally generate gradients with respect to u and v gives compelling improvements, etc. Searching the web/archives didn't turn up much (besides the D3DX PRT methods, which don't say how they work and still require a texture, and thus an atlas/usable parameterization, which I don't really want to deal with).

Thanks for any input!

Pete
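P.S. In case it helps to be concrete about the fit I have in mind, here's a NumPy sketch: each surface sample contributes one row whose nonzero entries are the barycentric weights of the triangle's three verts, and the least-squares solve picks the vertex values whose interpolation best matches the samples. sample_fn, the sample pattern and the area weighting are placeholders/first guesses, not something I've tuned:

import numpy as np

def fit_vertex_values(vertices, triangles, sample_fn, bary=None):
    # vertices: (V, 3) float array, triangles: (T, 3) int array.
    # sample_fn(point) -> scalar is a stand-in for whatever bake you
    # run at a surface point (occlusion, irradiance, ...).
    vertices = np.asarray(vertices, dtype=float)
    if bary is None:
        # Interior barycentric sample pattern (each row sums to 1);
        # deliberately avoids the corners so a dark spot sitting
        # exactly on a vert doesn't dominate the fit.
        bary = np.array([[4.0, 1.0, 1.0],
                         [1.0, 4.0, 1.0],
                         [1.0, 1.0, 4.0],
                         [2.0, 2.0, 2.0]]) / 6.0
    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for tri in np.asarray(triangles):
        v0, v1, v2 = vertices[tri]
        # Weight each face's samples by sqrt(area / n) so the summed
        # squared residual approximates the integral of the error
        # over the surface, not a per-sample average.
        area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))
        w = np.sqrt(max(area, 1e-12) / len(bary))
        for b in bary:
            p = b[0] * v0 + b[1] * v1 + b[2] * v2
            rhs.append(w * sample_fn(p))
            for k in range(3):
                rows.append(r)
                cols.append(int(tri[k]))
                vals.append(w * b[k])
            r += 1
    # Dense solve keeps the sketch short; a real mesh wants
    # scipy.sparse plus lsqr (the matrix is extremely sparse).
    A = np.zeros((r, len(vertices)))
    A[rows, cols] = vals
    x, *_ = np.linalg.lstsq(A, np.asarray(rhs), rcond=None)
    return x  # one fitted value per vertex

Hard edges and double-sided tris would then just be vert splits, so faces that shouldn't share a value get their own columns.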