From: Nathan T. <nb...@nb...> - 2013-10-18 19:17:46
I recently went through a library I'm working on and removed all type declarations on formal parameters from the tops of DEFUNs, reasoning that such declarations were redundant in light of the FTYPE declarations that also declare the types of the arguments passed to those functions. And since function type declarations declare return types as well, it seemed to make more sense to keep the FTYPE declarations and drop the TYPE declarations from the DEFUNs.

However, when I reran my tests after removing the declarations, I noticed that performance had taken a huge hit: certain procedures took almost twice as long as before. (The optimize setting is (optimize (debug 0) (safety 0) speed).) Can anyone here explain why this might be? (Since optimization behavior isn't defined by the language itself, I figured this was the best place to ask.) Why don't FTYPE declarations on a function's arguments have the same effect as TYPE declarations on its formal parameters? Given that the effect evidently differs, when _should_ one use FTYPE declarations for potential performance improvements? Should I be using both if I want maximal performance?

On a related note, if I declare a function's return type in an FTYPE declaration and bind a variable to a call to that function, as in:

    (declaim (ftype (function (&rest t) <type>) foo))
    (let ((var (foo a b c)))
      ...)

will the compiler infer that VAR has type <type>, or should I also declare it at the top of the LET's body (assuming I want maximal performance)?

Thank you.

Nathan
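For concreteness, here is a minimal sketch of the two styles I'm comparing (SQUARE is a toy example, not code from my library):

    ;; Style 1: TYPE declarations on the formal parameters,
    ;; inside the DEFUN body.
    (defun square (x)
      (declare (type double-float x))
      (the double-float (* x x)))

    ;; Style 2: a global FTYPE declaration covering both the
    ;; argument types and the return type, with no declarations
    ;; left in the DEFUN body.
    (declaim (ftype (function (double-float) double-float) square))
    (defun square (x)
      (* x x))

It was Style 2 alone that produced the slowdown I described above.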