From: Paul K. <pk...@gm...> - 2009-02-15 22:42:55
On 15-Feb-09, at 1:12 PM, Paul Sexton wrote:

> On Sun, 2009-02-15 at 14:24 +0200, Juho Snellman wrote:
>> Paul Sexton <ps...@xn...> writes:
>>> I have attached a source file which brings SBCL to its knees (slows
>>> to a crawl, eats up 100s of MB of system memory, often crashes unless
>>> I jump through hoops to give SBCL lots of extra memory). Its contents
>>> are basically a single function containing a large "case" statement.
>>> Other CL implementations compile it without batting an eyelid. I
>>> therefore suspect a compiler bug.
>>
>> Not really a bug, more like a design issue.
>>
>> Basically SBCL does a lot more analysis on your program for type
>> inference. [...] Consider writing this as a data-driven function: just
>> define a table to contain all this data, and do a simple table lookup
>> at runtime. It's going to be more maintainable, faster to execute, and
>> take up less memory at runtime.
>
> You are probably right that I could get around it by refactoring, and I
> may end up doing that. However, I suspect it's more than a design issue
> -- as I said, other compilers (Clozure, Allegro) and even clisp have no
> problems with it, whereas SBCL's memory footprint goes from <100MB to
> >500MB when compiling this function.
>
> Changing SAFETY to 0 in the declaim statement seems to make no
> difference.

It is a design issue with the compiler and the way it uses static analyses. Neither safety nor your observations on radically different implementations have anything to do with it. CMUCL and SBCL are known for their compilers' sophistication, and sophisticated analyses unfortunately tend to come with additional computational cost at compile time.

As an alternative to the more data-driven approach, you could also go through the interpreter.

Paul Khuong
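[For readers landing on this thread: the table-driven rewrite Juho suggests might look roughly like the sketch below. The keys and values are invented placeholders -- the original attachment's data is not shown in this thread -- but the shape is the point: the data moves out of the function body into a table, and the function shrinks to a single lookup.]

```lisp
;; Hypothetical sketch of replacing a huge CASE form with a hash table.
;; The (key . value) pairs here are placeholders, not the real data.
(defparameter *dispatch-table*
  (let ((table (make-hash-table :test #'eql)))
    (dolist (pair '((1 . "one") (2 . "two") (3 . "three")) table)
      (setf (gethash (car pair) table) (cdr pair)))))

(defun lookup (key &optional default)
  "One hash-table lookup replaces the giant CASE, so the compiler
no longer has to analyze hundreds of branches at compile time."
  (gethash key *dispatch-table* default))
```

For the interpreter route, SBCL can evaluate definitions without compiling them by setting `(setf sb-ext:*evaluator-mode* :interpret)` before loading the offending file, which sidesteps the compile-time analysis entirely at the cost of slower execution.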