Re: [Speedycgi-users] Memory leak in speedy_backend
Brought to you by: samh
From: PW <sub...@me...> - 2003-10-06 21:34:40
Thanks for taking the time to reply, Sam. I'm using the speedycgi apache
module v2.11 with Apache/1.3.23 (Unix). As to the socket connections,
netstat -a shows the UDP connections to be caused by the HTTPD process.
There are around 100 virtual hosts. As I said, the memory leaks are
caused by the speedy children, not the backend.

I have found the cause by now: it's not a leak, but seems to be my
misunderstanding of the group feature. Say I have 10 domains running the
same Perl application (named "Tracker" for this example, utilizing
identical modules), each out of its own CGI-BIN, all using the same
speedycgi group id. Tracker's persistent vars are initialized only if
undefined, as is a 3 MB database that is then loaded into memory. So the
domain that is accessed first launches Tracker and the speedycgi backend
via an SSI call. This works as documented: one backend, one child
process (or more, depending on traffic).

But my understanding was that the loaded speedycgi backend would treat
all Trackers with the same group id as one application across all
virtual hosts. Unfortunately, this is not the case: speedycgi seems to
treat each domain's Tracker as a unique application in terms of
namespace and variables. The supposed "memory leak" is just each
domain's Tracker loading its own static vars and database, instead of
this being done once, globally, by the first-launched Tracker.

Why is this so? How can one preload a set of modules to be reused by
applications sharing the same group id? Or is this feature not
implemented (yet :)?

Thanks

--
Ph. Wiede

Sam Horrocks wrote:
> Speedycgi doesn't do any dns operations or hostname lookups directly, so
> it's probably not the source of those sockets. Are you using the apache
> module, or the executable? Which version of apache? Please try upgrading
> to the latest version of speedycgi to see if the problem is still there.
>
> If you're still having problems you should lower the MaxRuns value (-r)
> until you can figure out what it is. Also, it doesn't look like your
> problem is related to the memory leak that James reported.
>
> > Hello list
> >
> > Sorry for my late reply. I'm still investigating my mentioned problems.
> > I run different Perl applications under Speedy (Speedy v2.11), but have
> > only problems with one of them. The memory leakage affects only the
> > child process(es) (quite severely), and the "Too many files opened"
> > error happens occasionally.
> >
> > Often, this application forces Speedy to run separate processes under
> > the same group id, due to its Web user tracking activity. The code base
> > (about 7000 lines excl. modules) is clean and verified (4 hashes:
> > private vars (my) go out of scope, dynamic hashes are flushed on each
> > invocation, the persistent user hash only records small data (such as
> > cookies etc.), and the static config hash is never altered once
> > loaded), and runs under the strict pragma. So vars leaking memory
> > should really not be the case here, unless I am missing something
> > fundamental. Sockets and filehandles are always closed.
> >
> > What I am also wondering is why each Apache/Speedy child opens such a
> > vast amount of sockets (500 and more, which can lead to running out of
> > file descriptors) on the DNS port 53:
> >
> > COMMAND   PID   USER   FD   TYPE DEVICE NODE NAME
> > speedy_ba 14333 wwwrun 150u IPv4 301994 UDP  1.megapublic.net:25553->ns.magnet.ch:domain
> > speedy_ba 14333 wwwrun 151u IPv4 301995 UDP  2.megapublic.net:25554->2.megapublic.net:domain
> > speedy_ba 14333 wwwrun 152u IPv4 302020 UDP  1.megapublic.net:25569->ns.magnet.ch:domain
> > speedy_ba 14333 wwwrun 153u IPv4 302023 UDP  2.megapublic.net:25572->2.megapublic.net:domain
> > speedy_ba 14333 wwwrun 154u IPv4 302054 UDP  1.megapublic.net:25587->ns.magnet.ch:domain
> > speedy_ba 14333 wwwrun 155u IPv4 302056 UDP  2.megapublic.net:25589->2.megapublic.net:domain
> >
> > and so on, up to 500+ per process ....
> >
> > Any opinions? Thanks!
> >
> > Ph. Wiede
> >
> > James H. Thompson wrote:
> > > The memory leak only becomes obvious if you have a small -r set
> > > and do thousands of calls to the speedy_backend a day.
> > > The leak is in the base speedy_backend controller process, not in the
> > > speedy_backend children processes that run the user perl scripts.
> > >
> > > Since the problem seemed to be in the way the Perl API calls were
> > > used from the speedy_backend controller process, it could easily
> > > behave differently on Solaris, or with different versions of Perl.
> > >
> > > Anyway, with my patch in place, the memory growth went away.
> > >
> > > Jim
> > >
> > > James H. Thompson
> > > jh...@la...
> > >
> > >> dear speedy users,
> > >>
> > >> here is another opinion on this one...
> > >>
> > >> my processes run for weeks at a time, and the resident memory size
> > >> of the processes remains a steady 7632K
> > >>
> > >> I am running a 2500 line perl cgi script.
> > >> It uses sockets to send IP
> > >> packets to another machine (independently of the incoming HTTP CGI
> > >> connections). I am using a custom OO module, and I am also using
> > >> CGI, CGI::Persistent, CGI::Carp, Net::LDAP, Net::LDAP::Util. It
> > >> does not write to any files, but it does read html from eight text
> > >> files.
> > >>
> > >> OS is Sun Solaris 7
> > >>
> > >> Speedy version unknown - it does not respond to the -v option, so
> > >> it's probably v2.11
> > >>
> > >> is it possible that the leaks are in your code? you can get into
> > >> some weird situations when your program runs under speedy, where
> > >> you end up doing things many times when you only wanted to do them
> > >> once. my program ran for about a year before I realised it was
> > >> reading in the eight text files on every HTTP connection, when I
> > >> should really only read them once, the first time through.
> > >>
> > >> alternatively, it might be worth you trying speedy v2.11
> > >>
> > >> regards,
> > >> mark.
> > >>
> > >> At 12:05 15/09/2003 +0200, PW wrote:
> > >>
> > >>> I have noticed the same behavior in a moderate traffic Perl
> > >>> application (clean code, using custom OO modules, 1 hit/1-5
> > >>> minutes, using the group function across several virtual domains,
> > >>> process expiration set to 1 hour) on SuSE Linux, too. Processes
> > >>> grow from an initial 7 MB to 60 MB+ over 1-3 days. Additionally,
> > >>> and as already mentioned on this list, there is a problem with too
> > >>> many socket connections established, eventually yielding a "Too
> > >>> many files opened" error from Apache.
> > >>>
> > >>> Ph. Wiede

--
Kind regards

Philippe Wiede
megapublic® Inc.
Senior Consultant for Global Corporate Branding and Integrated Communications
Base station: Gemsberg 11, CH-4051 Basel, Switzerland
Phone +41-61-263-33-36, Fax +41-61-263-33-37
www.megapublic.com, www.megapublic.ch, www.novience.com

Confidentiality Notice: The contents of this email from megapublic inc.
are confidential to the ordinary user of the email address to whom it
was addressed. No one else may copy or forward all or any of it in any
form. Please reply to the sender if you receive this email in error so
that we can arrange for proper delivery, and then please delete this
message.
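[The once-per-backend initialization pattern discussed in this thread -- persistent vars initialized only if undefined, and mark's point about reading data files once rather than on every request -- can be sketched as below. This is a minimal illustration, not SpeedyCGI's own code; load_database and the request loop are made up for the example.]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Under SpeedyCGI, file-scoped variables survive between requests
# handled by the same backend process, so an expensive load guarded
# by an "unless (initialized)" check runs only on the backend's first
# request. Note, however, that each backend keeps its own private
# copy: as observed above, state is not shared across backends or
# across virtual hosts, even within the same group id.
our %database;    # the large data set, loaded once per backend
my  $loads = 0;   # counts how many times the load actually ran

sub load_database {
    # Placeholder for the real multi-megabyte load from the thread.
    return (loaded_at => time());
}

sub handle_request {
    unless (%database) {        # initialize only if not yet loaded
        %database = load_database();
        $loads++;
    }
}

# Simulate three requests hitting the same backend:
handle_request() for 1 .. 3;
print "loads=$loads\n";         # prints loads=1: the load ran once
```

[Sharing the loaded data across all group members, as asked about above, would require some external store (a file, shared memory, a database), since each interpreter gets its own namespace.]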