From: Richard R. <sf...@ol...> - 2005-01-17 21:17:36
> (9) In "freeglut_init.c" the definitions of the token lengths (down around
> lines 650-700--I'm not sure of the exact line numbers) are hard-coded. The
> OpenGLUT version has made the compiler calculate them automatically. Do we
> want to adopt their code?

Actually, when I rewrote that, I dispensed with token lengths. The lengths
were only used with strcmp(), which in turn operates (at least in part) on
library data. If the library string is corrupted, then why expect the
token length to be meaningful? In fact, if memory has been corrupted, the
behavior of any of the code becomes debatable. (^&

What I did have the compiler compute was the token IDs, using an {enum},
and then I built the table pairing token IDs to token strings using a
macro. This is good for adding tokens in the future, which is certainly on
OpenGLUT's table. For freeglut, again, this is just change for change's
sake. The freeglut code works, albeit it's a bit of a pain to maintain (as
you may have discovered when you added aliases for typos and British
spellings). I'd recommend that freeglut not bother with this, unless
freeglut's mission is changing.

--
"I probably don't know what I'm talking about."
http://www.olib.org/~rkr/