From: Saulius V. <sa...@gl...> - 2008-12-12 00:08:15
Hey guys,

I need some help understanding how memory is allocated per process on Linux with Firebird 1.5.5 Classic Server. I understand that there is some fixed amount of working memory needed to support an fb process, plus a per-connection amount, plus the page cache, whose size depends on the DB page size and buffer count. In our typical application instance, with an 8K page size and 5000 buffers, we end up with 65-70MB per process.

What makes me wonder is how the concurrent user connection count affects memory utilization per process. Somehow, the more users we have on the server, the bigger the overhead added per connection. Comparing memory utilization on the same DB with a single connection versus 55, for example, the per-process memory delta increases by ~33MB. On a server with 160 connections it's almost 2-3 times more.

Is this lock-manager-related data? 30MB just for that? Anything else? Is there any way to minimize this overhead?

If some developers could shed some light on this area, that would be great. It would give me some more ideas on how to address the performance issues this client is seeing due to memory over-utilization.

Thanks,
Saulius Vabalas
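P.S. For anyone following along, here is a back-of-envelope sketch of the arithmetic behind the 65-70MB figure. The page size and buffer count are the real numbers from our instance; the fixed base overhead is an assumption I back-solved from the observed total, not a documented Firebird constant:

```python
# Rough per-process memory estimate for a Firebird 1.5 Classic
# Server connection. Each connection is a separate process, so the
# page cache is counted once per process.

PAGE_SIZE = 8 * 1024   # 8K database page size (from our config)
BUFFER_COUNT = 5000    # page buffers per process (from our config)

def estimate_process_mb(base_overhead_mb=26.0):
    """Page cache plus a hypothetical fixed base overhead.

    base_overhead_mb is an assumed figure chosen so the total
    lands in the observed 65-70MB range; it is not from the
    Firebird sources.
    """
    cache_mb = PAGE_SIZE * BUFFER_COUNT / (1024 * 1024)  # ~39.06 MB
    return cache_mb + base_overhead_mb

print(f"{estimate_process_mb():.1f} MB per process")  # ~65.1 MB
```

The open question in the message is exactly what this sketch cannot capture: why the effective overhead per process grows with the number of concurrent connections rather than staying a fixed base amount.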