On 02 Dec 2007 08:05:42 +0200, Juho Snellman <jsnell@...> wrote:
> Is this a problem you've encountered in the wild? Was it an isolated
> incident, or a more widespread problem?
Yes and no. I ran into this while working out where heap corruption
could be coming from: basically the same thing that gave rise to
SB-EXT:RESTRICT-COMPILER-POLICY, namely looking for _potential_
causes of heap corruption in large bodies of code that use inlining.
If the policy is picked up from the call site, then to analyze the
safety issues of a site with a dangerous policy you need to examine
the whole call tree rooted at that site to know for sure that the
precondition checking being done is sufficient. That is simply a
recipe for disaster: the amount of work required grows exponentially.
[Insert dynamic scoping vs. lexical scoping analogue here.]
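To make the hazard concrete, here is a hypothetical sketch (the names
and declarations are mine, not from any real code base) of an inline
function whose safety depends entirely on the policy in effect where
it is expanded:

```lisp
;; Hypothetical sketch: FROB relies on type/bounds checks to keep
;; the AREF safe.
(declaim (inline frob))
(defun frob (v i)
  (declare (type simple-vector v) (type fixnum i))
  (aref v i))

;; The inlined body is compiled under the *caller's* policy, so at a
;; (SAFETY 0) site the checks may be elided.  Auditing this one call
;; therefore means auditing FROB -- and anything FROB inlines -- under
;; that policy, and so on down the call tree.
(defun caller (v i)
  (declare (optimize (speed 3) (safety 0)))
  (frob v i))
```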
If this is limited to SAFETY 0 sites (re the other post on that), and
that is not used indiscriminately, then things may just be feasible.
If SPEED > SAFETY is also potentially dangerous, then... ugh.
I understand the desire to have a safe generic version, and a fast
inlined version, but I'm not sure how to reconcile these two. From
my perspective the safe/fast needs have other alternatives[*], but
I don't see any good alternatives for those wanting to audit code
bases containing both inline functions and SAFETY 0 code without
losing _all_ the benefits of both.
[*] DEFINE-INLINE, %FOO vs FOO, DEFTRANSFORM, etc.
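For instance, the %FOO vs FOO split mentioned above: a safe exported
entry point validates its arguments once, then hands off to an
unchecked worker. A hypothetical sketch (not actual SBCL code):

```lisp
;; %FOO assumes its preconditions hold and is compiled unsafely.
(defun %foo (v i)
  (declare (type simple-vector v) (type fixnum i)
           (optimize (speed 3) (safety 0)))
  (aref v i))

;; FOO is the safe entry point: it checks everything, then calls the
;; fast worker.  Auditors only need to verify this one boundary.
(defun foo (v i)
  (declare (optimize (safety 3)))
  (check-type v simple-vector)
  (check-type i fixnum)
  (assert (< -1 i (length v)))
  (%foo v i))
```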
...but as you may guess from the above, I actually consider the
type-weakening for SPEED > SAFETY the more urgent question: a huge
number of transforms depend on it, so keeping it unsafe seems a bit
dodgy to me.