From: H. H. <hen...@gm...> - 2008-08-26 19:28:32
The current implementation of glmMatInverse2f is defined thus:

    GLMmat2f *
    glmMatInverse2f (GLMmat2f *out, const GLMmat2f *m)
    {
            GLfloat m00, m01, m10, m11;
            GLfloat inv_det;

            assert (out);
            assert (m);

            inv_det = 1.0f / glmMatDeterm2f (m);

            m00 = m->m00;
            m01 = m->m01;
            m10 = m->m10;
            m11 = m->m11;

            out->m00 =  m11 * inv_det;
            out->m01 = -m01 * inv_det;
            out->m10 = -m10 * inv_det;
            out->m11 =  m00 * inv_det;

            return out;
    }

(with similar implementations for GLMmat3f and GLMmat4f)

The problem is that the implementation is incomplete and contains a bug.
Consider the case where the determinant is 0: a matrix with det == 0 has
no inverse, yet the code above divides by the determinant unconditionally.
We therefore need a test for that case before computing inv_det:

    det = glmMatDeterm2f (m);
    if (fabs (det) < error_tolerance)
            ...

Two issues arise:

1) What should the value of error_tolerance be? Should we define it as a
   constant, or as something the caller provides? My thoughts: we should
   define it as a constant, since we want to keep things simple. I propose
   the value 0.0005.

2) What should the routine do if det == 0? Return NULL, or just fail an
   assertion? My thoughts: we should return NULL and leave the out matrix
   unaffected.

--
Henri 'henux' Häkkinen
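For reference, here is a minimal, self-contained sketch of the proposed fix
(check the determinant first, return NULL on a singular input, leave *out
untouched). The GLMmat2f field layout, the local glmMatDeterm2f helper, and
the GLM_MAT2_EPSILON constant are my assumptions for illustration; the real
library's definitions may differ.

    #include <assert.h>
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Stand-in for the library's 2x2 matrix type (field layout assumed). */
    typedef struct { float m00, m01, m10, m11; } GLMmat2f;

    /* Tolerance constant as proposed above (name is hypothetical). */
    #define GLM_MAT2_EPSILON 0.0005f

    /* Local helper standing in for the library's determinant routine. */
    static float
    glmMatDeterm2f (const GLMmat2f *m)
    {
            return m->m00 * m->m11 - m->m01 * m->m10;
    }

    /* Returns NULL, leaving *out unmodified, when m has no inverse. */
    GLMmat2f *
    glmMatInverse2f (GLMmat2f *out, const GLMmat2f *m)
    {
            float det, inv_det;
            float m00, m01, m10, m11;

            assert (out);
            assert (m);

            det = glmMatDeterm2f (m);
            if (fabsf (det) < GLM_MAT2_EPSILON)
                    return NULL;        /* singular: no inverse exists */
            inv_det = 1.0f / det;

            m00 = m->m00;
            m01 = m->m01;
            m10 = m->m10;
            m11 = m->m11;

            out->m00 =  m11 * inv_det;
            out->m01 = -m01 * inv_det;
            out->m10 = -m10 * inv_det;
            out->m11 =  m00 * inv_det;

            return out;
    }

    int
    main (void)
    {
            GLMmat2f a = { 4.0f, 7.0f, 2.0f, 6.0f };          /* det = 10 */
            GLMmat2f singular = { 1.0f, 2.0f, 2.0f, 4.0f };   /* det = 0  */
            GLMmat2f inv;

            assert (glmMatInverse2f (&inv, &a) == &inv);
            assert (fabsf (inv.m00 - 0.6f) < 1e-5f);
            assert (fabsf (inv.m01 + 0.7f) < 1e-5f);
            assert (fabsf (inv.m10 + 0.2f) < 1e-5f);
            assert (fabsf (inv.m11 - 0.4f) < 1e-5f);

            assert (glmMatInverse2f (&inv, &singular) == NULL);

            printf ("ok\n");
            return 0;
    }

Note that the check happens before the division, so the 1.0f / det is only
evaluated for matrices we already know are invertible.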