Re: [Speedycgi-users] Memory leak in speedy_backend
From: Sam H. <sa...@da...> - 2003-10-05 01:03:45
Speedycgi doesn't do any DNS operations or hostname lookups directly, so
it's probably not the source of those sockets. Are you using the Apache
module, or the executable? Which version of Apache? Please try upgrading
to the latest version of speedycgi to see if the problem is still there.
If you're still having problems, lower the MaxRuns value (-r) until you
can figure out what it is. Also, it doesn't look like your problem is
related to the memory leak that James reported.

> Hello list
>
> Sorry for my late reply. I'm still investigating the problems I
> mentioned. I run different Perl applications under Speedy (v2.11),
> but have problems with only one of them. The memory leakage affects
> only the child process(es) (quite severely), and the "Too many files
> opened" error happens occasionally.
>
> This application often forces Speedy to run separate processes under
> the same group id, due to its web user tracking activity. The code
> base (about 7000 lines excl. modules) is clean and verified (4 hashes:
> private vars (my) go out of scope, dynamic hashes are flushed on each
> invocation, the persistent user hash records only small data such as
> cookies, and the static config hash is never altered once loaded), and
> runs under the strict pragma. So leaking variables should really not
> be the issue here, unless I am missing something fundamental. Sockets
> and filehandles are always closed.
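For anyone trying Sam's suggestion: the MaxRuns limit can be set in the
script's shebang line, with speedy's own options placed after a `--`
separator (syntax as I recall it from the speedy docs; the value 10 is
just an example, not a recommendation):

```shell
# Shebang form: perl options before "--", speedy options after.
# -r10 restarts each backend after 10 runs (example value only).
#!/usr/bin/speedy -w -- -r10

# Roughly equivalent when invoking the executable directly:
#   speedy -- -r10 myscript.pl
```

Lowering -r makes backends recycle sooner, which bounds the damage from
any per-run leak while you hunt for the real cause.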
> What I am also wondering is why each Apache/Speedy child opens such a
> vast number of sockets (500 and more, which can lead to running out of
> file descriptors) on the DNS port 53:
>
> COMMAND   PID   USER   FD   TYPE DEVICE NODE NAME
> speedy_ba 14333 wwwrun 150u IPv4 301994 UDP  1.megapublic.net:25553->ns.magnet.ch:domain
> speedy_ba 14333 wwwrun 151u IPv4 301995 UDP  2.megapublic.net:25554->2.megapublic.net:domain
> speedy_ba 14333 wwwrun 152u IPv4 302020 UDP  1.megapublic.net:25569->ns.magnet.ch:domain
> speedy_ba 14333 wwwrun 153u IPv4 302023 UDP  2.megapublic.net:25572->2.megapublic.net:domain
> speedy_ba 14333 wwwrun 154u IPv4 302054 UDP  1.megapublic.net:25587->ns.magnet.ch:domain
> speedy_ba 14333 wwwrun 155u IPv4 302056 UDP  2.megapublic.net:25589->2.megapublic.net:domain
>
> and so on up to 500+ per process ...
>
> Any opinions? Thanks!
>
> Ph. Wiede
>
> James H. Thompson wrote:
>
> > The memory leak only becomes obvious if you have a small -r set and
> > do thousands of calls to the speedy_backend a day. The leak is in
> > the base speedy_backend controller process, not in the
> > speedy_backend child processes that run the user perl scripts.
> >
> > Since the problem seemed to be in the way the Perl API calls were
> > used from the speedy_backend controller process, it could easily
> > behave differently on Solaris, or with different versions of Perl.
> >
> > Anyway, with my patch in place, the memory growth went away.
> >
> > Jim
> >
> > James H. Thompson
> > jh...@la...
> >
> >> dear speedy users,
> >>
> >> here is another opinion on this one...
> >>
> >> my processes run for weeks at a time, and the resident memory size
> >> of the processes remains a steady 7632K
> >>
> >> I am running a 2500 line perl cgi script. it uses sockets to send
> >> IP packets to another machine (independently of the incoming HTTP
> >> CGI connections). I am using a custom OO module, and I am also
> >> using...
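To quantify the socket growth per process, something like the pipeline
below counts port-53 (":domain") sockets per PID. This is a sketch: in
practice you would feed it from live `lsof -n -i UDP` output; here a
two-line sample of the listing above stands in for live data.

```shell
# Count open DNS (":domain") sockets per process from lsof-style output.
# In practice, replace the sample with:  lsof -n -i UDP
sample='speedy_ba 14333 wwwrun 150u IPv4 301994 UDP 1.megapublic.net:25553->ns.magnet.ch:domain
speedy_ba 14333 wwwrun 151u IPv4 301995 UDP 2.megapublic.net:25554->2.megapublic.net:domain'

# $2 is the PID column, $NF the NAME column; tally rows ending in ":domain".
counts=$(printf '%s\n' "$sample" |
  awk '$NF ~ /:domain$/ { n[$2]++ } END { for (pid in n) print pid, n[pid] }')

echo "$counts"
```

Re-running this every few minutes against the live speedy_backend PIDs
would show whether the descriptor count climbs monotonically (a leak) or
plateaus (sockets being reused).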
> >> CGI, CGI::Persistent, CGI::Carp, Net::LDAP, Net::LDAP::Util. it
> >> does not write to any files but it does read html from eight text
> >> files.
> >>
> >> OS is Sun Solaris 7
> >>
> >> Speedy version unknown - it does not respond to the -v option, so
> >> it's probably v2.11
> >>
> >> is it possible that the leaks are in your code? you can get into
> >> some weird situations when your program runs under speedy, where
> >> you end up doing things many times when you only wanted to do them
> >> once. my program ran for about a year before I realised it was
> >> reading in the eight text files on every HTTP connection, when I
> >> should really only read them once, the first time through.
> >>
> >> alternatively, it might be worth trying speedy v2.11
> >>
> >> regards,
> >> mark.
> >>
> >> At 12:05 15/09/2003 +0200, PW wrote:
> >>
> >>> I have noticed the same behavior in a moderate-traffic Perl
> >>> application (clean code, using custom OO modules, 1 hit per 1-5
> >>> minutes, using the group function across several virtual domains,
> >>> process expiration set to 1 hour) on SuSE Linux, too. Processes
> >>> grow from an initial 7 MB to 60 MB+ over 1-3 days. Additionally,
> >>> and as already mentioned on this list, there is a problem with too
> >>> many socket connections being established, eventually yielding a
> >>> "Too many files opened" error from Apache.
> >>>
> >>> Ph. Wiede

_______________________________________________
Speedycgi-users mailing list
Spe...@li...
https://lists.sourceforge.net/lists/listinfo/speedycgi-users