From: Paul S. <wog...@ya...> - 2004-05-29 19:24:32
Jim Starkey wrote:
> There are numerous places in the code that go to great lengths to
> continue operation after memory is exhausted. The server, for
> example, goes into a timeout and retry loop when a memory allocation
> fails.
>
> Does this really make sense, particularly when allocating a small
> block? When memory is truly exhausted almost everything stops working
> -- system services, library functions, other Firebird code, and,
> critically important, error reporting. What are the chances that
> essentially untested recovery code could ever recover from a bona fide
> memory exhaustion? Wouldn't it make more sense to try for an
> immediate, reasonably clean server shutdown than to risk something
> else failing catastrophically?
>
> It certainly is possible to write code that is tolerant of low memory
> conditions, but it requires careful analysis of all possible
> allocation failures, including library and system services, and special
> testing to simulate and test all conceived failure modes.
> Realistically, I think this is way beyond what can reasonably be
> expected of a large, complex database system, particularly one that
> suffered significant mid-life abuse.
>
> I'd like to hear some discussion as to what the internal policy should be
> -- attempt recovery or graceful shutdown. (This is really a server
> issue -- the engine itself must return an error, but it can certainly
> latch into an error state to block further processing.)
>
> Thoughts and comments?

Send a message to the console stating the reason, and then shut down as
gracefully as possible. This is the safest route to take, so that makes
the most sense when safety is paramount.

Paul