From: Richard M K. <kr...@pr...> - 2008-05-29 20:13:04
"Hans Hübner" writes:

(Note: I've excised your arguments against the stylistic and performance
aspects of signaling errors, to focus only on the point about debugging
and *BREAK-ON-SIGNALS*.)

> From a user perspective, it is preferable to have libraries use
> signals sparingly, as it makes debugging harder.

Well, the other way of looking at it is that setting *BREAK-ON-SIGNALS*
to a type too high in the hierarchy is the culprit. After all, the CLHS
entry for *BREAK-ON-SIGNALS* does say:

| When setting ‘*break-on-signals*’, the user is encouraged to choose
| the most restrictive specification that suffices. Setting
| ‘*break-on-signals*’ effectively violates the modular handling of
| condition signaling.

So from my point of view, setting *BREAK-ON-SIGNALS* to something like T
amounts to asking to be notified about everybody else's use of the
condition system, even if the program calling the signaling code is
prepared to handle the condition.

But I think there's a way of using *BREAK-ON-SIGNALS* without running
aground on incidental uses in library code:

* first, define some base condition class for your application,

* then set *BREAK-ON-SIGNALS* to that type (or a union including that
  type, if you're debugging more than one application at a time),

* finally, have all the errors that your code signals be generalized
  instances of that class (possibly inheriting also from other
  condition classes, e.g., FILE-ERROR).

This way, you get into a break loop when code you care about signals,
but not when anybody else's does. Wouldn't a practice like this give you
what you want, without you needing to care about what random library
code does?

-- Richard
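[Editor's note: the three-step practice described in the post can be sketched in Common Lisp as follows. The condition names MY-APP-ERROR and MY-APP-FILE-ERROR are hypothetical, chosen only for illustration; the operators used (DEFINE-CONDITION, *BREAK-ON-SIGNALS*) are standard Common Lisp.]

```lisp
;; Step 1: a base condition class for the application.
;; MY-APP-ERROR is a hypothetical name, not from the original post.
(define-condition my-app-error (error) ())

;; Step 3 (shown here for context): application conditions inherit from
;; the base class, possibly alongside standard classes such as FILE-ERROR.
(define-condition my-app-file-error (my-app-error file-error) ())

;; Step 2: break only on the application's own conditions; incidental
;; signals from library code are left to their handlers.
(setf *break-on-signals* 'my-app-error)

;; When debugging more than one application at a time, a type union
;; works in place of the single type:
;; (setf *break-on-signals* '(or my-app-error other-app-error))
```

With this in place, (error 'my-app-file-error ...) enters the break loop before any handler runs, while a condition signaled inside a library passes through *BREAK-ON-SIGNALS* unnoticed.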