Brian Downing <bdowning@...> writes:
> Unfortunately, this means that with a per-thread *available-buffers* for
> fd-streams, you grow two pages of OS memory per request. The patch
> makes *available-buffers* shared again, and wraps it with a mutex.
That sounds like a good idea, in a "minimally invasive changes" sort of way.
> With this patch, I've been running a request per second to it all day
> now (doing (apropos-list "*") into HTMLGEN, which conses a ton), and
> memory usage is now steady.
Have you tried stress-testing it as well (apachebench with a big
number for -c, say)?  My concern is that the probable context
switching associated with locking around *available-buffers* costs a
lot - in fact, probably more than removing the buffer resourcing
completely and just calling malloc each time.  I think a syscall is
cheaper than all the signal-delivery mechanics associated with a
thread mutex, and malloc() might under some circumstances not even
need a syscall.
(Why aren't we using Lisp memory for these buffers? Because we
unix-read/-write to/from them and don't want them moving about, I
think. Conceivably we could instead try the new pinned-object stuff
here, notwithstanding that it may be broken; if so, at least it'd
probably be easier to track down than the current problem with ...)
> One question I have is will having per-thread
> sb!impl::*descriptor-handlers* cause problems with this usage of
> fd-streams and threads? It doesn't seem to, but I don't really
> understand serve-event yet, or if it is even used.
Oh, ack. Hmm. Without looking, I don't know. And yes, it is used.
Even if your application doesn't use it, it's used for flush-output.
My first application using threaded SBCL is about to go tentatively
live. It has approximately no state and can be restarted very easily
if it dies, but nevertheless I am breathing very carefully.
http://www.cliki.net/ - Link farm for free CL-on-Unix resources