From: Hans-Christian E. <hc...@hc...> - 2010-07-26 17:12:55
Hello,

I recently made a mistake in a .yaws script that led to a runaway "loop" under certain conditions. More and more worker processes piled up and consumed all available CPU time, and Yaws got slower and slower until I finally noticed what was wrong.

IMHO it would be nice if one could limit the number of reductions and/or the maximum time a request is allowed to take, per <server>. This could easily be done with a monitor process that regularly checks all worker processes and kills rogue ones if necessary. This would work as long as worker processes do not spawn further processes themselves. It would also work for websocket processes, since their PIDs are passed back to Yaws when they are created.

What do you think?

HC
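To make the idea concrete, here is a minimal sketch of such a monitor. Everything here is an assumption on my part (the module name, the `{watch, Pid}` registration message, the 1-second check interval); Yaws would have to hand each worker PID to the monitor. It samples each worker's reduction count via `erlang:process_info(Pid, reductions)` and kills any process that burned more than a configured budget between two checks:

```erlang
%% worker_monitor: sketch only -- NOT part of Yaws.
%% Assumes something (e.g. the Yaws acceptor) calls watch/2 for each
%% new worker process.
-module(worker_monitor).
-export([start/1, watch/2]).

%% Max: reductions a worker may consume between two checks (~1s apart).
start(Max) ->
    spawn(fun() -> loop([], Max) end).

watch(Monitor, Pid) ->
    Monitor ! {watch, Pid}.

loop(Workers, Max) ->
    receive
        {watch, Pid} ->
            loop([{Pid, reductions(Pid)} | Workers], Max)
    after 1000 ->
        loop(check(Workers, Max), Max)
    end.

check([], _Max) -> [];
check([{Pid, Last} | Rest], Max) ->
    case reductions(Pid) of
        dead ->
            check(Rest, Max);            %% worker already exited; drop it
        Now when Now - Last > Max ->
            exit(Pid, kill),             %% rogue worker: kill unconditionally
            check(Rest, Max);
        Now ->
            [{Pid, Now} | check(Rest, Max)]
    end.

reductions(Pid) ->
    case erlang:process_info(Pid, reductions) of
        {reductions, N} -> N;
        undefined      -> dead           %% process no longer exists
    end.
```

A wall-clock limit per request could be added the same way by storing a timestamp next to the reduction count. Note that `exit(Pid, kill)` is untrappable, so a killed worker gets no chance to clean up; whether that is acceptable probably depends on the handler.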