- ;; FIXME: I can't see why we have to use
- ;; (MACROLET ((LOSE () (ERROR ..))) (LOSE))
- ;; instead of just (ERROR "..") here.
- (macrolet ((lose ()
-              (error "You can't use INST without an ~
-                      ASSEMBLE inside emitters.")))
Hmm, SYMBOL-MACROLET on a special is a little twisted, and I guess other
Lisps don't like it? And the macrolet/macroexpand of ..inherited-labels..
is a bit precious too, since we're defining a macro that we never have any
intention of evaluating just to squirrel some info away in the lexical
environment. In the old days the manly macro hackers had COMPILER-LET to
do this sort of thing, but then ANSI took it away...
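The "squirrel some info away in the lexical environment" trick mentioned
above can be sketched like so (hypothetical names; the real
..inherited-labels.. machinery is more involved). A local macro that is
never meant to be called at run time carries the data; a later macro
recovers it by expanding that macro in its &ENVIRONMENT:

```lisp
;; Hypothetical sketch, not the assembler's actual code.
;; NOTE-LABELS stashes a list of label names in a local macro.
(defmacro note-labels ((&rest labels) &body body)
  `(macrolet ((%stashed-labels () ',labels))
     ,@body))

;; CURRENT-LABELS retrieves the stash by macroexpanding %STASHED-LABELS
;; in the lexical environment it was captured from.
(defmacro current-labels (&environment env)
  `',(macroexpand '(%stashed-labels) env))

;; (note-labels (start done)
;;   (current-labels))   ; => (START DONE)
```

This is exactly the kind of job COMPILER-LET used to do more directly.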
Anyway, the reason for the rather peculiar LOSE local macro is to cause a
compile-time ERROR if any reference to **current-segment** appears in the
code without being shadowed by the correct definition. If you simply said
(error ...) then the error would only happen at runtime. I think that what
happened is that someone got burned by forgetting to use an ASSEMBLE form,
and then some code quietly went into *the wrong segment*.
This is not good, but if the symbol macrolet had had a different name than
the special, then you would at least have gotten an undefined variable
error on the symbol macro reference.
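The compile-time/runtime distinction at stake here can be seen in a toy
example (hypothetical names, not the assembler's code): a macro whose
expander signals an error fires as soon as the compiler macroexpands a use
of it, whereas a plain ERROR form compiles silently and only signals if
that code path is actually run.

```lisp
;; Using this macro anywhere is a compile-time error, because the ERROR
;; is signaled by the expander function itself during macroexpansion.
(defmacro lose ()
  (error "LOSE expanded: no ASSEMBLE in scope."))

;; By contrast, this compiles without complaint and only signals when
;; the function is actually called at run time.
(defun quietly-broken ()
  (error "only signals when QUIETLY-BROKEN is called"))

;; (compile nil '(lambda () (lose)))          ; error during compilation
;; (compile nil '(lambda () (quietly-broken))) ; compiles fine
```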
It appears that there is in fact no reason for the special binding of
**current-segment**, and this is what should be flushed, not the symbol
macrolet. If I am correct that INST, EMIT-LABEL, etc. are always called in
the lexical scope of an ASSEMBLE, then the special can be dispensed with
entirely. However, it is possible that somewhere someone is calling a
function from a vop generator that calls INST, making use of the special
binding. This is easily enough found by removing the special and looking
for undefined var warnings, and easily enough fixed by passing the segment
in and adding an ASSEMBLE.
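The proposed fix might look something like this before-and-after sketch
(EMIT-SOMETHING is an invented name, and the ASSEMBLE syntax shown is
simplified; this only illustrates the shape of the change):

```lisp
;; Before (hypothetical): relies on the special binding of
;; **current-segment** established somewhere up the call chain.
(defun emit-something ()
  (inst nop))

;; After: the segment is passed in explicitly, and ASSEMBLE establishes
;; the lexical context that INST expects.
(defun emit-something (segment)
  (assemble (segment)
    (inst nop)))
```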
Referencing a special is, needless to say, more costly than a local, and
the assembler is one of the more performance critical parts of the
compiler. Does it make a big difference? You won't know without testing,
and just timing an entire compile probably isn't going to give you enough
signal-to-noise to see this effect. You have to time something like
GENERATE-CODE. The difference may not be so big on an x86, as long as the
code is compiled unsafe: with only 8 registers, the value is probably on
the stack already, so you've got one memory reference, and adding another
to fetch the symbol's value only doubles it. Doing a boundp check also
hurts.