speedycgi-users Mailing List for SpeedyCGI (Page 4)
Brought to you by: samh
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2002 |     | 4   | 12  | 3   | 1   | 10  | 12  | 2   | 8   | 10  | 4   | 4   |
| 2003 |     |     | 12  | 1   | 18  | 1   | 3   | 3   | 9   | 21  | 11  | 2   |
| 2004 | 6   | 1   | 2   | 1   | 10  | 3   | 4   |     |     | 1   |     |     |
| 2005 |     |     | 4   | 1   |     | 1   | 6   | 4   | 1   | 3   | 2   | 1   |
| 2006 |     |     |     | 4   | 1   |     |     |     | 1   | 1   |     |     |
| 2008 |     |     |     |     | 1   |     | 2   |     |     |     |     |     |
| 2009 |     | 2   |     |     |     |     |     |     |     |     |     |     |
From: Sam H. <sa...@da...> - 2003-11-01 19:19:37
|
> The second shows up when running a simple script like this in a shell:
> --------------------------------
> #!/usr/bin/speedy
> print STDERR "STDERR print\n";
> print "normal print\n";
> --------------------------------
> I would expect the STDERR line to always appear first in the output, but
> it does not.

I don't think there's a way to preserve the order as long as speedycgi is
using sockets for output from the backend. The backend puts both strings
into separate unix sockets, which the kernel buffers. When the frontend
goes to get the data, it sees two sockets with data available and no way
to tell which one was written first.
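The two-socket situation Sam describes can be sketched with a small, self-contained demonstration. This is an illustrative Python analogue, not SpeedyCGI's actual C code: after a writer fills two AF_UNIX sockets in a known order, the reader's select() only reports that both are readable; nothing in the API recovers which one was written first.

```python
import select
import socket

# Two socketpairs stand in for the backend's separate stdout and
# stderr streams to the frontend (illustrative analogue only).
out_backend, out_frontend = socket.socketpair()
err_backend, err_frontend = socket.socketpair()

# The "backend" writes STDERR first, then normal output.
err_backend.sendall(b"STDERR print\n")
out_backend.sendall(b"normal print\n")

# The "frontend" polls: the kernel buffers each stream independently,
# and select() reports both as readable with no ordering information.
readable, _, _ = select.select([out_frontend, err_frontend], [], [], 1.0)
print(len(readable))  # 2 -- the original write order is lost
```

Because each stream is an independent kernel buffer, the frontend can only drain them in some fixed order of its own choosing, which is why interleaving between STDOUT and STDERR is not preserved.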
|
From: Sam H. <sa...@da...> - 2003-10-31 18:02:47
|
I suspect this is more an openwebmail issue than a speedycgi issue.
You might want to try posting to one of the openwebmail lists.
Maybe this error message relates to code in the config file that
require is pulling in, rather than the require statement itself.
> I am attempting to run SpeedyCGI + Openwebmail on some mailservers. I
> have installed the latest
> version of both SpeedyCGI and Openwebmail. Everything works
> wonderfully for about 5-10 minutes
> and then these error messages start popping up in the log files.
>
>
> [Thu Oct 30 13:58:06 2003] [error] [client 204.29.75.195] Insecure
> dependency in require while running setuid at
> /var/www/cgi-bin/openwebmail/ow-shared.pl line 1196.
> [Thu Oct 30 13:58:35 2003] [error] [client 204.29.75.195] Insecure
> dependency in require while running setuid at
> /var/www/cgi-bin/openwebmail/ow-shared.pl line 1196.
>
> I checked the ow-shared.pl file; the error pops up as a taint check
> error when trying to include config files. I hardwired a couple of
> them, but then the error would just jump to a different line and a
> different require.
>
> $file="$config{'ow_cgidir'}/$file";
> ($file=~ /^(.*)$/) && ($file = $1);
> require $file; # done only once because of %INC
>
> From what I have read, doing a regex and referring to the file as $1
> *should* make it pass the taint check. So I am confused, and I was
> wondering if you guys had seen this before? I am desperately needing
> to get openwebmail running a tad faster.
>
> Thanks!
>
> Rich
>
|
|
From: Richard T. <tr...@as...> - 2003-10-31 15:11:28
|
I am attempting to run SpeedyCGI + Openwebmail on some mailservers. I
have installed the latest
version of both SpeedyCGI and Openwebmail. Everything works
wonderfully for about 5-10 minutes
and then these error messages start popping up in the log files.
[Thu Oct 30 13:58:06 2003] [error] [client 204.29.75.195] Insecure
dependency in require while running setuid at
/var/www/cgi-bin/openwebmail/ow-shared.pl line 1196.
[Thu Oct 30 13:58:35 2003] [error] [client 204.29.75.195] Insecure
dependency in require while running setuid at
/var/www/cgi-bin/openwebmail/ow-shared.pl line 1196.
I checked the ow-shared.pl file; the error pops up as a taint check
error when trying to include config files. I hardwired a couple of
them, but then the error would just jump to a different line and a
different require.
$file="$config{'ow_cgidir'}/$file";
($file=~ /^(.*)$/) && ($file = $1);
require $file; # done only once because of %INC
From what I have read, doing a regex and referring to the file as $1
*should* make it pass the taint check. So I am confused, and I was
wondering if you guys had seen this before? I am desperately needing
to get openwebmail running a tad faster.
Thanks!
Rich
|
|
From: Grant D. <gr...@wo...> - 2003-10-28 22:17:45
|
I have seen a few quirks with STDERR output from speedy.

The first problem (and the one that most concerns me) shows up using
speedy with CGI under Apache. If the first request to Apache after it
starts up is to a speedy CGI script that prints a message to STDERR, the
STDERR output shows up at the _top_ of the Apache error log, overwriting
whatever was there. After that any STDERR output will overwrite existing
lines in the error log from the top down. Note that this only happens if
the speedy script is the very first request.

The second shows up when running a simple script like this in a shell:

--------------------------------
#!/usr/bin/speedy
print STDERR "STDERR print\n";
print "normal print\n";
--------------------------------

I would expect the STDERR line to always appear first in the output, but
it does not.

Last, as in
http://sourceforge.net/mailarchive/forum.php?thread_id=2413603&forum_id=12206
STDERR output using mod_speedycgi doesn't show up in the Apache error log
at all.

I have seen these on two different setups:

speedy 2.21, RedHat 7.3, perl 5.6.1 (RedHat RPM install), and Apache 1.3.27 (source build)
speedy 2.22, RedHat 9, perl 5.8.1 (source build), and Apache 1.3.28 (source build)

-Grant
|
From: Sam H. <sa...@da...> - 2003-10-24 09:48:41
|
The compiler on Darwin (OS/X) doesn't like the multiply defined my_perl.

diff -c -r1.12 speedy_perl.h
*** speedy_perl.h	7 Oct 2003 04:03:48 -0000	1.12
--- speedy_perl.h	23 Oct 2003 22:57:24 -0000
***************
*** 21,24 ****
  void speedy_perl_run(slotnum_t _gslotnum, slotnum_t _bslotnum);
  int speedy_perl_fork(void);
! PerlInterpreter *my_perl;
--- 21,24 ----
  void speedy_perl_run(slotnum_t _gslotnum, slotnum_t _bslotnum);
  int speedy_perl_fork(void);
! extern PerlInterpreter *my_perl;
|
From: Sam H. <sa...@da...> - 2003-10-12 06:31:51
|
SpeedyCGI release 2.22 is available at:
http://daemoninc.com/SpeedyCGI/download.html
The changes since 2.21 are:
- Redhat9 fixes
- Better support for setuid under Solaris
- Fixes for HP-UX 11.22
- Fix for memory leak in speedy_backend reported by James H. Thompson
- speedy_file.c fixes for bugs reported by Dmitri Tikhonov
- Fix from Lars Thegler for buffer overrun in speedy_opt.c
- Add efence malloc debugging
|
|
From: Alex M. <al...@nd...> - 2003-10-09 20:18:26
|
Funny -- I emailed st...@so... at 11:50 this morning about the comments
on their status page concerning CVS... Lo and behold, there is now a new
comment about CVS posted at 12:00, that there are still problems with the
servers. But no mail from them... Too bad there aren't any mirrors for
this stuff.

Alex

-----Original Message-----
From: spe...@li... [mailto:spe...@li...] On Behalf Of Sam Horrocks
Sent: Thursday, October 09, 2003 1:03 PM
To: Alex Moen
Cc: spe...@li...
Subject: Re: [Speedycgi-users] CVS???

I've seen problems with the pserver service for the last few days -
sourceforge must be having problems. You have to try it several times
before it eventually works.

> What's up with the CVS system??? Is it having problems, or is it just
> my net connection?
>
> Thanks!
>
> Alex Moen
> NDTC
|
From: Alex M. <al...@nd...> - 2003-10-09 19:07:17
|
Weird. phpMyAdmin's cvs browse is nice and fast, awstat's seems to be
too; it seems to be localized to speedycgi's portion... I've been trying
all day to get the updated files, and it's getting pretty frustrating.
Too bad they can't just roll the changes into a new tarball, I hate
CVS... :)

Thanks,
Alex

> From: Sam Horrocks [mailto:sa...@da...]
> Sent: Thursday, October 09, 2003 1:03 PM
>
> I've seen problems with the pserver service for the last few days -
> sourceforge must be having problems. You have to try it several times
> before it eventually works.
>
> > What's up with the CVS system??? Is it having problems, or is it
> > just my net connection?
> >
> > Thanks!
> >
> > Alex Moen
> > NDTC
|
From: Sam H. <sa...@da...> - 2003-10-09 19:03:12
|
I've seen problems with the pserver service for the last few days -
sourceforge must be having problems. You have to try it several times
before it eventually works.

> What's up with the CVS system??? Is it having problems, or is it just
> my net connection?
>
> Thanks!
>
> Alex Moen
> NDTC
|
From: Alex M. <al...@nd...> - 2003-10-09 18:56:30
|
What's up with the CVS system??? Is it having problems, or is it just my
net connection?

Thanks!

Alex Moen
NDTC
|
From: Sam H. <sa...@da...> - 2003-10-09 17:37:44
|
Using links shouldn't affect the environment variables.

> Thank you for your feedback, Sam. Actually, I just discovered that I am
> in fact already using SpeedyCGI v2.21, not v2.11, since last October
> 2002. And yes, all domains run under the same Apache user id. I applied
> your hint to "use" instead of "require" modules, as well as to preload
> the entire utilized module set in the main application. However, the
> described behavior in terms of memory growth and excessive port
> connections remains the same.
>
> The idea to hard/soft link the application is interesting; however, will
> this strategy preserve the ENV vars of each virtual host (which the app
> depends on to distinguish between the hosts)? If yes, this might be the
> route to take.
>
> Thanks -- Ph. Wiede
|
From: PW <sub...@me...> - 2003-10-09 11:45:45
|
Thank you for your feedback, Sam. Actually, I just discovered that I am
in fact already using SpeedyCGI v2.21, not v2.11, since last October
2002. And yes, all domains run under the same Apache user id. I applied
your hint to "use" instead of "require" modules, as well as to preload
the entire utilized module set in the main application. However, the
described behavior in terms of memory growth and excessive port
connections remains the same.

The idea to hard/soft link the application is interesting; however, will
this strategy preserve the ENV vars of each virtual host (which the app
depends on to distinguish between the hosts)? If yes, this might be the
route to take.

Thanks -- Ph. Wiede
|
From: Sam H. <sa...@da...> - 2003-10-08 01:38:21
|
Groups help when you have many different scripts that you want to run
within a single interpreter. If your only concern is a single cgi-bin
script, groups won't make a difference. But using groups won't hurt,
as long as your code is OK running that way.
In addition:
- Make sure all the domains run under the same user id.
- Make sure all domains use the same exact file. If you have multiple
copies of the script (for example one copy per virtual host), it
won't work right - each copy will have a different inode in the
filesystem, and therefore its own package (with groups),
or its own interpreter (without groups). If you need to put the
same script in multiple locations, make sure to use links (hard or
soft) instead of copying the file.
- As much as possible use "use" instead of "require" to load modules.
- If your application involves multiple scripts in a single group,
make sure that all scripts "use" all the modules that the scripts
in that group will need. That way no matter which script is started
first, the interpreter will compile into the parent all the
modules that group will need.
- Upgrade to the latest version - there is more sharing of compiled code
in 2.20 and later, and there is better control on the number of
backends spawned.
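The inode point above is easy to verify directly. A minimal sketch (Python here for brevity, with made-up file names; the same check works with `ls -i` or Perl's stat): a hard link shares the original script's inode, while a copied script gets a new one, so the copy is treated as a different script.

```python
import os
import shutil
import tempfile

# Create a stand-in "script", then a hard link and a byte-for-byte copy.
d = tempfile.mkdtemp()
script = os.path.join(d, "tracker.pl")
with open(script, "w") as f:
    f.write("#!/usr/bin/speedy\n")

os.link(script, os.path.join(d, "tracker-link.pl"))      # hard link
shutil.copy(script, os.path.join(d, "tracker-copy.pl"))  # separate copy

def ino(name):
    return os.stat(os.path.join(d, name)).st_ino

link_shares_inode = ino("tracker-link.pl") == ino("tracker.pl")
copy_shares_inode = ino("tracker-copy.pl") == ino("tracker.pl")
print(link_shares_inode, copy_shares_inode)  # True False
```

Since identity is keyed on the file's inode rather than its path, only the hard (or soft) link makes two virtual hosts share one package and interpreter.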
> Thanks for taking the time to reply, Sam. I'm using the speedycgi apache
> module v2.11 with Apache/1.3.23 (Unix).
>
> As to the socket connections, with netstat -a the UDP connections are
> shown to be caused by the HTTPD process. There are around 100 virtual
> hosts.
>
> As I said the memory leaks are caused by the speedy childs not the
> backend. I have found the cause by now; it's not a leak but seems to be
> my misunderstanding of the group feature. Say I have 10 domains running
> the same Perl application (named for this example "Tracker", utilizing
> identical modules), each out of their CGI-BIN, using the same speedycgi
> group id. Tracker's persistent vars are initiated only if undefined, as
> is a database sizing 3 MB that is then loaded into memory. So the domain
> that is first accessed launches Tracker and the speedycgi backend via an
> SSI call. This works as documented - one backend, one child process
> (or more depending on traffic).
>
> But my understanding was that the loaded speedycgi backend would treat
> all Trackers with the same group id as one application across all
> virtual hosts. Unfortunately, this is not the case, as speedycgi seems
> to treat each domain's Tracker as a unique application in terms of
> namespace and variables. The supposed "memory leak" is just caused by
> each domain's Tracker loading its own static vars and database instead
> of this being done once and globally by the first launched Tracker.
>
> Why is this so? How can one preload a set of modules to be reused by
> applications sharing the same group id? Or is this feature not
> implemented (yet :)?
>
> Thanks -- Ph. Wiede
>
|
|
From: PW <sub...@me...> - 2003-10-06 21:34:40
|
Thanks for taking the time to reply, Sam. I'm using the speedycgi apache=20 module v2.11 with Apache/1.3.23 (Unix). As to the socket connections, with netstat -a the UDP connections are=20 shown to be caused by the HTTPD process. There are around 100 virtual=20 hosts. As I said the memory leaks are caused by the speedy childs not the=20 backend. I have found the cause by now, it's not a leak but seems to be=20 my misunderstanding of the group feature. Say, I have 10 domains running=20 the same Perl application (named for this example "Tracker", utilizing=20 identical modules) each out of their CGI-BIN, using the same speedycgi=20 group id. Tracker's persistent vars are initiated only if undefined, as=20 is a database sizing 3 MB that is then loaded into memory. So the domain=20 that is first accessed launches Tracker and the speedycgi backend via a=20 SSI call. This works as documented - one backend, one child process=20 (ore more depending on traffic). But my understanding was that the loaded speedycgi backend would treat=20 all Trackers with the same group id as one application across all=20 virtual hosts. Unfortunately, this is not the case as speedycgi seems to=20 treat each domain's Tracker as an unique application in terms of=20 Namespace and variables. The supposed "memory leak" is just caused by=20 each domain's Tracker loading its own static vars and database instead=20 of this being done once and globally by the first launched Tracker. Why is this so? How can one preload a set of modules to be reused by=20 applications sharing the same group id? Or is this feature not=20 implemented (yet :)? Thanks -- Ph. Wiede Sam Horrocks wrote: > Speedycgi doesn't do any dns operations or hostname lookups directly, s= o > it's probably not the source of those sockets. Are you using the apach= e > module, or the executable? Which version of apache? Please try upgrad= ing > to the latest version of speedycgi to see if the problem is still there. 
>=20 > If you're still having problems you should lower the MaxRuns value (-r) > until you can figure what it is. Also, it doesn't look like your probl= em > is related to the memory leak that James reported. >=20 > > Hello list > > > Sorry for my late reply. I'm still investigating my mentioned pro= blems.=20 > > I run different Perl applications under Speedy (Speedy v2.11), but h= ave=20 > > only problems with one of them. The memory leakage affects only the=20 > > child process(es) (quite severely), and the "Too many files opened"=20 > > error happens occasionally. > > > Often, this application forces Speedy to run separate processes u= nder=20 > > the same group id, due to its Web user tracking activity. The code b= ase=20 > > (about 7000 lines excl. modules) is clean and verified (4 hashes:=20 > > private vars (my) go out of scope, dynamic hashes are flushed on eac= h=20 > > evocation, persistent user hash only records small data (such as coo= kies=20 > > etc.), and the static config hash is never altered once loaded ), an= d=20 > > runs using the strict pragma. So, vars leaking memory should really = not=20 > > be the case here, except I am missing something fundamental. Sockets= and=20 > > filehandles are always closed. 
> > > What I am also wondering is why each Apache/Speedy child opens su= ch a=20 > > vast amount of sockets (500 and more, which can lead to running out = of=20 > > file descriptors) on the DNS port 53: > > > COMMAND PID USER FD TYPE DEVICE NODE NAME > > speedy_ba 14333 wwwrun 150u IPv4 301994 UDP=20 > > 1.megapublic.net:25553 ->ns.magnet.ch:domain > > > speedy_ba 14333 wwwrun 151u IPv4 301995 UDP=20 > > 2.megapublic.net:25554->2.megapublic.net:domain > > > speedy_ba 14333 wwwrun 152u IPv4 302020 UDP=20 > > 1.megapublic.net:25569->ns.magnet.ch:domain > > > speedy_ba 14333 wwwrun 153u IPv4 302023 UDP=20 > > 2.megapublic.net:25572->2.megapublic.net:domain > > > speedy_ba 14333 wwwrun 154u IPv4 302054 UDP=20 > > 1.megapublic.net:25587->ns.magnet.ch:domain > > > speedy_ba 14333 wwwrun 155u IPv4 302056 UDP=20 > > 2.megapublic.net:25589->2.megapublic.net:domain > > > and so on up to 500+ per process .... > > > Any opinions? Thanks! > > > > Ph. Wiede > > > > James H. Thompson wrote: > > > > The memory leak only becomes obvious if have a small -r set > > > and do thousands of calls to the speedy_backend a day. > > > The leak is in the base speedy_backend controller process not in= the > > > speedy_backend children processes that run the user perl scripts. > > > > > > Since the problem seemed to be in the way the Perl API calls were= used > > > from the speedy_backend > > > controller process, it could easily behave differently on Solari= s,=20 > > or > with different versions of Perl. > > > > > > Anyway, with my patch in place, the memory growth went away. > > > > > > Jim > > > > > > James H. Thompson > > > jh...@la... > > > > > > > > >> dear speedy users, > > >> > > >> here is another opinion on this one... > > >> > > >> my processes run for weeks at a time, and the resident memory si= ze of > > >> the processes remains a steady 7632K > > >> > > >> I am running a 2500 line perl cgi script. 
it uses sockets to send IP > > >> packets to another machine (independently of the incoming HTTP CGI > > >> connections). I am using a custom OO module, and I am also using... > > >> CGI, CGI::peristent, CGI::Carp, Net::LDAP, Net::LDAP::Util. it does > > >> not write to any files but it does read html from eight text files. > > >> > > >> OS is Sun Solaris 7 > > >> > > >> Speedy version unknown - it does not respond to the -v option, so it's > > >> probably v2.11 > > >> > > >> is it possible that the leaks are in your code? you can get into > > >> some weird situations when your program runs under speedy, where you > > >> end up doing things many times when you only wanted to do them once. > > >> my program ran for about a year before I realised it was reading in > > >> the eight text files on every HTTP connection, when I should really > > >> only read them once the first time thru. > > >> > > >> alternatively, it might be worth you trying speedy v2.11 > > >> > > >> regards, > > >> mark. > > >> > > >> At 12:05 15/09/2003 +0200, PW wrote: > > >> > > >>> I have noticed the same behavior in a moderate traffic Perl > > >>> application (clean code, using custom OO modules, 1 hit/1-5 > > >>> minutes, using the group function across several virtual domains, process > > >>> expiration set to 1 hour) on SuSE Linux, too. Processes grow from an > > >>> initial 7 MB to 60 MB+ over 1-3 days. Additionally, and as already > > >>> mentioned on this list, there is a problem with too many socket > > >>> connections established, eventually yielding a "Too many files > > >>> opened" error by Apache. > > >>> > > >>> Ph. Wiede -- Kind regards, Philippe Wiede megapublic® Inc. 
Senior Consultant for Global Corporate Branding and Integrated Communications Base station: Gemsberg 11, CH-4051 Basel, Switzerland Phone +41-61-263-33-36, Fax +41-61-263-33-37 www.megapublic.com, www.megapublic.ch, www.novience.com |
|
From: Sam H. <sa...@da...> - 2003-10-06 19:49:21
|
Try checking out the latest copy of the entire tree and see if that fixes it. > My first attempt to compile ended with the setdefout() problem. I > replaced speedy_backend_main.c with REV 1.22 and speedy_inc.h with REV > 1.15 from CVS. > > Now, my current problem is below: > > > [root@nnm CGI-SpeedyCGI-2.21]# make > cd src && make > make[1]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' > make[1]: Nothing to be done for `makemakerdflt'. > make[1]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' > make[1]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' > make[1]: Nothing to be done for `all'. > make[1]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' > make[1]: Entering directory > `/usr/local/src/CGI-SpeedyCGI-2.21/speedy_backend' > cd ../src && make > make[2]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' > make[2]: Nothing to be done for `makemakerdflt'. > make[2]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include > -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O > -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC > "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" > -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" > -DSPEEDY_BACKEND speedy_backend_main.c > speedy_backend_main.c: In function `main': > speedy_backend_main.c:175: `my_perl' undeclared (first use in this > function) > speedy_backend_main.c:175: (Each undeclared identifier is reported only > once > speedy_backend_main.c:175: for each function it appears in.) 
> make[1]: *** [speedy_backend_main.o] Error 1 > make[1]: Leaving directory > `/usr/local/src/CGI-SpeedyCGI-2.21/speedy_backend' > make: *** [subdirs] Error 2 > [root@nnm CGI-SpeedyCGI-2.21]# > > > ------------------------------------------------------- > This sf.net email is sponsored by:ThinkGeek > Welcome to geek heaven. > http://thinkgeek.com/sf > _______________________________________________ > Speedycgi-users mailing list > Spe...@li... > https://lists.sourceforge.net/lists/listinfo/speedycgi-users > |
|
From: Gary B. <gbo...@ae...> - 2003-10-06 18:40:56
|
My first attempt to compile ended with the setdefout() problem. I replaced speedy_backend_main.c with REV 1.22 and speedy_inc.h with REV 1.15 from CVS. Now, my current problem is below: [root@nnm CGI-SpeedyCGI-2.21]# make cd src && make make[1]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' make[1]: Nothing to be done for `makemakerdflt'. make[1]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' make[1]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' make[1]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/speedy_backend' cd ../src && make make[2]: Entering directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' make[2]: Nothing to be done for `makemakerdflt'. make[2]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/src' gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_backend_main.c speedy_backend_main.c: In function `main': speedy_backend_main.c:175: `my_perl' undeclared (first use in this function) speedy_backend_main.c:175: (Each undeclared identifier is reported only once speedy_backend_main.c:175: for each function it appears in.) make[1]: *** [speedy_backend_main.o] Error 1 make[1]: Leaving directory `/usr/local/src/CGI-SpeedyCGI-2.21/speedy_backend' make: *** [subdirs] Error 2 [root@nnm CGI-SpeedyCGI-2.21]# |
|
From: James H. T. <jh...@la...> - 2003-10-06 17:24:05
|
That's great news! Thanks much for creating and maintaining this program. Jim James H. Thompson jh...@la... ----- Original Message ----- From: "Sam Horrocks" <sa...@da...> To: "James H. Thompson" <jh...@la...> Cc: <spe...@li...> Sent: Sunday, October 05, 2003 8:47 PM Subject: Re: [Speedycgi-users] Memory leak in speedy_backend > Thanks for the bug report and the patch. There's new code on the cvs > server to fix this. > > The patch you submitted reduced the leak quite a bit, but it didn't > go away altogether. I couldn't get eval_sv to work without leaking, so > I replaced it with a call_sv which seems to work. > > > > I'm using SpeedyCGI in a high volume application on Redhat 8, and > > noticed that the memory usage of speedy_backend continually grew over > > time, even if the -r option was set to a low number like 3. > > > > The memory consumption seemed to be tied to speedy_backend creating a > > new child. > > > > I created an experimental patch to fix this. > > > > The patch is here: > > www.voip-info.org/speedycgi > > > > > > Jim > > > > James H. Thompson > > jh...@la... 
|
|
From: Sam H. <sa...@da...> - 2003-10-06 06:47:55
|
Thanks for the bug report and the patch. There's new code on the cvs server to fix this. The patch you submitted reduced the leak quite a bit, but it didn't go away altogether. I couldn't get eval_sv to work without leaking, so I replaced it with a call_sv which seems to work. > I'm using SpeedyCGI in a high volume application on Redhat 8, and > noticed that the memory usage of speedy_backend continually grew over > time, even if the -r option was set to a low number like 3. > > The memory consumption seemed to be tied to speedy_backend creating a > new child. > > I created an experimental patch to fix this. > > The patch is here: > www.voip-info.org/speedycgi > > > Jim > > James H. Thompson > jh...@la... 
|
|
From: Sam H. <sa...@da...> - 2003-10-05 05:00:23
|
Thanks for the bug reports and the patch. The new code is on the cvs server.
For Issue 2, I changed the code to set the file_removed flag only after
a successful unlink(), which should fix the problem.
> Issue 1 (patch at the end of the message):
>
> I've found a logical error in file_lock() (from speedy_file.c). Note how
> the test inside the for loop decrements the variable after the check is
> performed. Right after the loop, there's if (!tries). If file could not
> be locked, 'tries' is -1 at that point, not zero. The easiest fix is of
> course to change when the decrement takes place.
>
>
> Issue 2:
>
> I found this bug on a box that, after a power failure, segfaulted
> apache each time a CGI handled by SpeedyCGI was requested. Turns out, a
> stale state file (/tmp/speedy.6.f.F) had file_removed byte = 1.
> Apparently, the power went out right after the flag was set but before unlink
> could do its evil thing.
>
> Now comes the question: is there a way to modify SpeedyCGI's algorithm so
> that it could recover from these conditions itself, without having to
> manually delete the offending file? Right now, if this flag is set
> (manually or after a power failure, as it was in my case), SpeedyCGI is
> taken out of commission. Maybe, besides being a boolean flag,
> file_removed could specify a PID of the process that removed it (but that
> raises several more questions)?
>
>
> My setup:
> Linux 2.4.21
> Apache 2.0.46
> SpeedyCGI 2.21
>
> Thanks,
>
> - Dmitri.
>
> *** speedy_file.c.orig Mon Sep 8 18:54:37 2003
> --- speedy_file.c.fixed Mon Sep 8 18:54:33 2003
> ***************
> *** 183,189 ****
> file_close2();
> }
>
> ! for (tries = 5; tries--;) {
> /* If file is not open, open it */
> if (file_fd == -1) {
> str_replace(&saved_tmpbase,
> speedy_util_strdup(OPTVAL_TMPBASE));
> --- 183,189 ----
> file_close2();
> }
>
> ! for (tries = 5; tries; --tries) {
> /* If file is not open, open it */
> if (file_fd == -1) {
> str_replace(&saved_tmpbase,
> speedy_util_strdup(OPTVAL_TMPBASE));
>
|
|
From: Sam H. <sa...@da...> - 2003-10-05 01:19:24
|
There is no win32 version. > Hi: > > I found SpeedyCGI very interesting, but I did not find a PPM for win32, even > on ActiveState. > Does somebody know if one exists? > > Thank you for your response > > Carlos Kassab |
|
From: Sam H. <sa...@da...> - 2003-10-05 01:03:45
|
Speedycgi doesn't do any dns operations or hostname lookups directly, so it's probably not the source of those sockets. Are you using the apache module, or the executable? Which version of apache? Please try upgrading to the latest version of speedycgi to see if the problem is still there. If you're still having problems you should lower the MaxRuns value (-r) until you can figure out what it is. Also, it doesn't look like your problem is related to the memory leak that James reported. > Hello list > > Sorry for my late reply. I'm still investigating my mentioned problems. > I run different Perl applications under Speedy (Speedy v2.11), but have > only problems with one of them. The memory leakage affects only the > child process(es) (quite severely), and the "Too many files opened" > error happens occasionally. > > Often, this application forces Speedy to run separate processes under > the same group id, due to its Web user tracking activity. The code base > (about 7000 lines excl. modules) is clean and verified (4 hashes: > private vars (my) go out of scope, dynamic hashes are flushed on each > invocation, the persistent user hash only records small data (such as cookies > etc.), and the static config hash is never altered once loaded), and > runs using the strict pragma. So, vars leaking memory should really not > be the case here, unless I am missing something fundamental. Sockets and > filehandles are always closed. 
> > What I am also wondering is why each Apache/Speedy child opens such a > vast amount of sockets (500 and more, which can lead to running out of > file descriptors) on the DNS port 53: > > COMMAND PID USER FD TYPE DEVICE NODE NAME > speedy_ba 14333 wwwrun 150u IPv4 301994 UDP > 1.megapublic.net:25553->ns.magnet.ch:domain > > speedy_ba 14333 wwwrun 151u IPv4 301995 UDP > 2.megapublic.net:25554->2.megapublic.net:domain > > speedy_ba 14333 wwwrun 152u IPv4 302020 UDP > 1.megapublic.net:25569->ns.magnet.ch:domain > > speedy_ba 14333 wwwrun 153u IPv4 302023 UDP > 2.megapublic.net:25572->2.megapublic.net:domain > > speedy_ba 14333 wwwrun 154u IPv4 302054 UDP > 1.megapublic.net:25587->ns.magnet.ch:domain > > speedy_ba 14333 wwwrun 155u IPv4 302056 UDP > 2.megapublic.net:25589->2.megapublic.net:domain > > and so on up to 500+ per process .... > > Any opinions? Thanks! > > > Ph. Wiede > > > James H. Thompson wrote: > > > The memory leak only becomes obvious if you have a small -r set > > and do thousands of calls to the speedy_backend a day. > > The leak is in the base speedy_backend controller process, not in the > > speedy_backend children processes that run the user perl scripts. > > > > Since the problem seemed to be in the way the Perl API calls were used > > from the speedy_backend > > controller process, it could easily behave differently on Solaris, > > or with different versions of Perl. > > > > Anyway, with my patch in place, the memory growth went away. > > > > Jim > > > > James H. Thompson > > jh...@la... > > > > > >> dear speedy users, > >> > >> here is another opinion on this one... > >> > >> my processes run for weeks at a time, and the resident memory size of > >> the processes remains a steady 7632K > >> > >> I am running a 2500 line perl cgi script. it uses sockets to send IP > >> packets to another machine (independently of the incoming HTTP CGI > >> connections). I am using a custom OO module, and I am also using... 
> >> CGI, CGI::peristent, CGI::Carp, Net::LDAP, Net::LDAP::Util. it does > >> not write to any files but it does read html from eight text files. > >> > >> OS is Sun Solaris 7 > >> > >> Speedy version unknown - it does not respond to the -v option, so it's > >> probably v2.11 > >> > >> is it possible that the leaks are in your code? you can get into > >> some weird situations when your program runs under speedy, where you > >> end up doing things many times when you only wanted to do them once. > >> my program ran for about a year before I realised it was reading in > >> the eight text files on every HTTP connection, when I should really > >> only read them once the first time thru. > >> > >> alternatively, it might be worth you trying speedy v2.11 > >> > >> regards, > >> mark. > >> > >> At 12:05 15/09/2003 +0200, PW wrote: > >> > >>> I have noticed the same behavior in a moderate traffic Perl > >>> application (clean code, using custom OO modules, 1 hit/1-5 > >>> minutes, using the group function across several virtual domains, process > >>> expiration set to 1 hour) on SuSE Linux, too. Processes grow from an > >>> initial 7 MB to 60 MB+ over 1-3 days. Additionally, and as already > >>> mentioned on this list, there is a problem with too many socket > >>> connections established, eventually yielding a "Too many files > >>> opened" error by Apache. > >>> > >>> Ph. Wiede |
|
From: Sam H. <sa...@da...> - 2003-10-01 20:18:40
|
The latest version on the cvs server should fix this. > Hi, > > I've been running SpeedyCGI (for my Openwebmail environment) for over a year, > running under Red Hat 8 and perl-5.8.0-55. > > Red Hat recently released perl-5.8.0-88.3, which fixes some security issues > with the previous release perl. > > Unfortunately these perl RPM's caused issues with my Openwebmail setup, so I > decided to recompile speedy under the new perl and it no longer compiles. > > When I move back to the older perl is compiles fine. > > Below is the output: > > # make > cd src && make > make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/src' > /usr/bin/perl -w optdefs.pl /usr/bin > Writing speedy_optdefs.c > Writing speedy_optdefs.h > Writing mod_speedycgi_cmds.c > Writing mod_speedycgi2_cmds.c > Writing SpeedyCGI.pm > make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/src' > cp src/SpeedyCGI.pm blib/lib/CGI/SpeedyCGI.pm > make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/src' > make[1]: Nothing to be done for `all'. > make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/src' > make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/speedy_backend' > rm -f speedy_backend_main.c > cp ../src/speedy_backend_main.c speedy_backend_main.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_backend_main.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_perl.c > cp ../src/speedy_perl.c speedy_perl.c > gcc -c -I../src -I. 
-D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_perl.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > speedy_perl.c: In function `onerun': > speedy_perl.c:794: warning: comparison between pointer and integer > speedy_perl.c:795: warning: comparison between pointer and integer > speedy_perl.c:796: warning: comparison between pointer and integer > rm -f speedy_util.c > cp ../src/speedy_util.c speedy_util.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_util.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_sig.c > cp ../src/speedy_sig.c speedy_sig.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. 
> 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_sig.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_frontend.c > cp ../src/speedy_frontend.c speedy_frontend.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_frontend.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_backend.c > cp ../src/speedy_backend.c speedy_backend.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_backend.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_file.c > cp ../src/speedy_file.c speedy_file.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. 
> 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_file.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_slot.c > cp ../src/speedy_slot.c speedy_slot.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_slot.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_poll.c > cp ../src/speedy_poll.c speedy_poll.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_poll.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_ipc.c > cp ../src/speedy_ipc.c speedy_ipc.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. 
> 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_ipc.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_group.c > cp ../src/speedy_group.c speedy_group.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_group.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_script.c > cp ../src/speedy_script.c speedy_script.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_script.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_opt.c > cp ../src/speedy_opt.c speedy_opt.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. 
> 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_opt.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > speedy_opt.c: In function `speedy_opt_init': > speedy_opt.c:292: warning: assignment discards qualifiers from pointer target > type > rm -f speedy_optdefs.c > cp ../src/speedy_optdefs.c speedy_optdefs.c > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_optdefs.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS > -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE > -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" > -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" > -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND xsinit.c > cc1: warning: changing search order for system directory "/usr/local/include" > cc1: warning: as it has already been specified as a non-system directory > rm -f speedy_backend > gcc -o speedy_backend speedy_backend_main.o speedy_perl.o speedy_util.o > speedy_sig.o speedy_frontend.o speedy_backend.o speedy_file.o speedy_slot.o > speedy_poll.o speedy_ipc.o speedy_group.o speedy_script.o speedy_opt.o > speedy_optdefs.o xsinit.o -rdynamic -Wl,-rpath,/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE -L/usr/local/lib /usr/lib/perl5/5.8. 
> 0/i386-linux-thread-multi/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE -lperl -lnsl -ldl -lm -lpthread -lc -lcrypt > -lutil > speedy_perl.o: In function `onerun': > speedy_perl.o(.text+0xfa3): undefined reference to `setdefout' > collect2: ld returned 1 exit status > make[1]: *** [speedy_backend] Error 1 > make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/speedy_backend' > make: *** [subdirs] Error 2 > > > [root@gazelle CGI-SpeedyCGI-2.21]# make test > make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/src' > make[1]: Nothing to be done for `all'. > make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/src' > make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/speedy_backend' > rm -f speedy_backend > gcc -o speedy_backend speedy_backend_main.o speedy_perl.o speedy_util.o > speedy_sig.o speedy_frontend.o speedy_backend.o speedy_file.o speedy_slot.o > speedy_poll.o speedy_ipc.o speedy_group.o speedy_script.o speedy_opt.o > speedy_optdefs.o xsinit.o -rdynamic -Wl,-rpath,/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE -L/usr/local/lib /usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl5/5.8. > 0/i386-linux-thread-multi/CORE -lperl -lnsl -ldl -lm -lpthread -lc -lcrypt > -lutil > speedy_perl.o: In function `onerun': > speedy_perl.o(.text+0xfa3): undefined reference to `setdefout' > collect2: ld returned 1 exit status > make[1]: *** [speedy_backend] Error 1 > make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/speedy_backend' > make: *** [subdirs] Error 2 > > > Any ideas as to what the problem could be? > > Thanks. > > Michael. > > > > ------------------------------------------------------- > This sf.net email is sponsored by:ThinkGeek > Welcome to geek heaven. > http://thinkgeek.com/sf > _______________________________________________ > Speedycgi-users mailing list > Spe...@li... > https://lists.sourceforge.net/lists/listinfo/speedycgi-users > |
|
From: Michael M. <mi...@np...> - 2003-09-30 05:11:36
|
Hi,

I've been running SpeedyCGI (for my Openwebmail environment) for over a year, under Red Hat 8 and perl-5.8.0-55. Red Hat recently released perl-5.8.0-88.3, which fixes some security issues with the previous perl release. Unfortunately these perl RPMs caused issues with my Openwebmail setup, so I decided to recompile speedy under the new perl, and it no longer compiles. When I move back to the older perl it compiles fine. Below is the output:

# make
cd src && make
make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/src'
/usr/bin/perl -w optdefs.pl /usr/bin
Writing speedy_optdefs.c
Writing speedy_optdefs.h
Writing mod_speedycgi_cmds.c
Writing mod_speedycgi2_cmds.c
Writing SpeedyCGI.pm
make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/src'
cp src/SpeedyCGI.pm blib/lib/CGI/SpeedyCGI.pm
make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/src'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/src'
make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/speedy_backend'
rm -f speedy_backend_main.c
cp ../src/speedy_backend_main.c speedy_backend_main.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_backend_main.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_perl.c
cp ../src/speedy_perl.c speedy_perl.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_perl.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
speedy_perl.c: In function `onerun':
speedy_perl.c:794: warning: comparison between pointer and integer
speedy_perl.c:795: warning: comparison between pointer and integer
speedy_perl.c:796: warning: comparison between pointer and integer
rm -f speedy_util.c
cp ../src/speedy_util.c speedy_util.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_util.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_sig.c
cp ../src/speedy_sig.c speedy_sig.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_sig.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_frontend.c
cp ../src/speedy_frontend.c speedy_frontend.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_frontend.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_backend.c
cp ../src/speedy_backend.c speedy_backend.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_backend.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_file.c
cp ../src/speedy_file.c speedy_file.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_file.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_slot.c
cp ../src/speedy_slot.c speedy_slot.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_slot.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_poll.c
cp ../src/speedy_poll.c speedy_poll.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_poll.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_ipc.c
cp ../src/speedy_ipc.c speedy_ipc.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_ipc.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_group.c
cp ../src/speedy_group.c speedy_group.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_group.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_script.c
cp ../src/speedy_script.c speedy_script.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_script.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_opt.c
cp ../src/speedy_opt.c speedy_opt.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_opt.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
speedy_opt.c: In function `speedy_opt_init':
speedy_opt.c:292: warning: assignment discards qualifiers from pointer target type
rm -f speedy_optdefs.c
cp ../src/speedy_optdefs.c speedy_optdefs.c
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND speedy_optdefs.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
gcc -c -I../src -I. -D_REENTRANT -D_GNU_SOURCE -DTHREADS_HAVE_PIDS -DDEBUGGING -fno-strict-aliasing -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O -DVERSION=\"2.21\" -DXS_VERSION=\"2.21\" -fPIC "-I/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE" -DSPEEDY_PROGNAME=\"speedy_backend\" -DSPEEDY_VERSION=\"2.21\" -DSPEEDY_BACKEND xsinit.c
cc1: warning: changing search order for system directory "/usr/local/include"
cc1: warning: as it has already been specified as a non-system directory
rm -f speedy_backend
gcc -o speedy_backend speedy_backend_main.o speedy_perl.o speedy_util.o speedy_sig.o speedy_frontend.o speedy_backend.o speedy_file.o speedy_slot.o speedy_poll.o speedy_ipc.o speedy_group.o speedy_script.o speedy_opt.o speedy_optdefs.o xsinit.o -rdynamic -Wl,-rpath,/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE -L/usr/local/lib /usr/lib/perl5/5.8.0/i386-linux-thread-multi/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE -lperl -lnsl -ldl -lm -lpthread -lc -lcrypt -lutil
speedy_perl.o: In function `onerun':
speedy_perl.o(.text+0xfa3): undefined reference to `setdefout'
collect2: ld returned 1 exit status
make[1]: *** [speedy_backend] Error 1
make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/speedy_backend'
make: *** [subdirs] Error 2

[root@gazelle CGI-SpeedyCGI-2.21]# make test
make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/src'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/src'
make[1]: Entering directory `/root/CGI-SpeedyCGI-2.21/speedy_backend'
rm -f speedy_backend
gcc -o speedy_backend speedy_backend_main.o speedy_perl.o speedy_util.o speedy_sig.o speedy_frontend.o speedy_backend.o speedy_file.o speedy_slot.o speedy_poll.o speedy_ipc.o speedy_group.o speedy_script.o speedy_opt.o speedy_optdefs.o xsinit.o -rdynamic -Wl,-rpath,/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE -L/usr/local/lib /usr/lib/perl5/5.8.0/i386-linux-thread-multi/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl5/5.8.0/i386-linux-thread-multi/CORE -lperl -lnsl -ldl -lm -lpthread -lc -lcrypt -lutil
speedy_perl.o: In function `onerun':
speedy_perl.o(.text+0xfa3): undefined reference to `setdefout'
collect2: ld returned 1 exit status
make[1]: *** [speedy_backend] Error 1
make[1]: Leaving directory `/root/CGI-SpeedyCGI-2.21/speedy_backend'
make: *** [subdirs] Error 2

Any ideas as to what the problem could be?

Thanks.

Michael.
|
|
From: PW <sub...@me...> - 2003-09-28 14:19:20
|
Hello list,

Sorry for my late reply. I'm still investigating the problems I mentioned. I run several Perl applications under Speedy (v2.11), but have problems with only one of them. The memory leakage affects only the child process(es) (quite severely), and the "Too many files opened" error happens occasionally. Often, this application forces Speedy to run separate processes under the same group id, due to its Web user tracking activity.

The code base (about 7000 lines excl. modules) is clean and verified (4 hashes: private vars (my) go out of scope, dynamic hashes are flushed on each invocation, the persistent user hash only records small data such as cookies, and the static config hash is never altered once loaded), and runs under the strict pragma. So variables leaking memory should really not be the case here, unless I am missing something fundamental. Sockets and filehandles are always closed.

What I am also wondering is why each Apache/Speedy child opens such a vast number of sockets (500 and more, which can lead to running out of file descriptors) on the DNS port 53:

COMMAND   PID   USER   FD   TYPE DEVICE NODE NAME
speedy_ba 14333 wwwrun 150u IPv4 301994 UDP  1.megapublic.net:25553->ns.magnet.ch:domain
speedy_ba 14333 wwwrun 151u IPv4 301995 UDP  2.megapublic.net:25554->2.megapublic.net:domain
speedy_ba 14333 wwwrun 152u IPv4 302020 UDP  1.megapublic.net:25569->ns.magnet.ch:domain
speedy_ba 14333 wwwrun 153u IPv4 302023 UDP  2.megapublic.net:25572->2.megapublic.net:domain
speedy_ba 14333 wwwrun 154u IPv4 302054 UDP  1.megapublic.net:25587->ns.magnet.ch:domain
speedy_ba 14333 wwwrun 155u IPv4 302056 UDP  2.megapublic.net:25589->2.megapublic.net:domain

and so on, up to 500+ per process.

Any opinions? Thanks!

Ph. Wiede

James H. Thompson wrote:
> The memory leak only becomes obvious if you have a small -r set
> and do thousands of calls to the speedy_backend a day.
> The leak is in the base speedy_backend controller process, not in the
> speedy_backend children processes that run the user perl scripts.
>
> Since the problem seemed to be in the way the Perl API calls were used
> from the speedy_backend controller process, it could easily behave
> differently on Solaris, or with different versions of Perl.
>
> Anyway, with my patch in place, the memory growth went away.
>
> Jim
>
> James H. Thompson
> jh...@la...
>
>> dear speedy users,
>>
>> here is another opinion on this one...
>>
>> my processes run for weeks at a time, and the resident memory size of
>> the processes remains a steady 7632K
>>
>> I am running a 2500 line perl cgi script. it uses sockets to send IP
>> packets to another machine (independently of the incoming HTTP CGI
>> connections). I am using a custom OO module, and I am also using...
>> CGI, CGI::peristent, CGI::Carp, Net::LDAP, Net::LDAP::Util. it does
>> not write to any files but it does read html from eight text files.
>>
>> OS is Sun Solaris 7
>>
>> Speedy version unknown - it does not respond to the -v option, so it's
>> probably v2.11
>>
>> is it possible that the leaks are in your code? you can get into
>> some weird situations when your program runs under speedy, where you
>> end up doing things many times when you only wanted to do them once.
>> my program ran for about a year before I realised it was reading in
>> the eight text files on every HTTP connection, when I should really
>> only read them once the first time thru.
>>
>> alternatively, it might be worth you trying speedy v2.11
>>
>> regards,
>> mark.
>>
>> At 12:05 15/09/2003 +0200, PW wrote:
>>
>>> I have noticed the same behavior in a moderate traffic Perl
>>> application (clean code, using custom OO modules, 1 hit/1-5 minutes,
>>> using the group function across several virtual domains, process
>>> expiration set to 1 hour) on SuSE Linux, too. Processes grow from an
>>> initial 7 MB to 60 MB+ over 1-3 days. Additionally, and as already
>>> mentioned on this list, there is also a problem with too many socket
>>> connections established, eventually yielding a "Too many files
>>> opened" error from Apache.
>>>
>>> Ph. Wiede
|
|
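[Editorial note: the descriptor exhaustion described above (one leaked UDP socket per DNS lookup, climbing toward the "Too many files opened" limit) can be watched directly from /proc on Linux. This is a diagnostic sketch, not from the thread; the `count_fds` helper name is hypothetical, and it is demonstrated on the current shell since the speedy_backend PIDs vary per system.]

```shell
# Count the open file descriptors of a given PID via /proc (Linux only).
# Against a leaking backend you would run: count_fds <speedy_backend-pid>
# repeatedly and watch the count climb toward the per-process limit.
count_fds() {
    ls /proc/"$1"/fd 2>/dev/null | wc -l
}

count_fds $$   # demonstrated on this shell, which holds a few descriptors
ulimit -n      # the ceiling the "Too many files opened" error runs into
```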
From: James H. T. <jh...@la...> - 2003-09-15 19:10:16
|
The memory leak only becomes obvious if you have a small -r set and do thousands of calls to the speedy_backend a day.

The leak is in the base speedy_backend controller process, not in the speedy_backend children processes that run the user perl scripts.

Since the problem seemed to be in the way the Perl API calls were used from the speedy_backend controller process, it could easily behave differently on Solaris, or with different versions of Perl.

Anyway, with my patch in place, the memory growth went away.

Jim

James H. Thompson
jh...@la...

----- Original Message -----
From: "Mark Taylor" <M.T...@sh...>
To: <spe...@li...>
Sent: Monday, September 15, 2003 1:36 AM
Subject: Re: [Speedycgi-users] Memory leak in speedy_backend

> dear speedy users,
>
> here is another opinion on this one...
>
> my processes run for weeks at a time, and the resident memory size of the
> processes remains a steady 7632K
>
> I am running a 2500 line perl cgi script. it uses sockets to send IP
> packets to another machine (independently of the incoming HTTP CGI
> connections). I am using a custom OO module, and I am also using... CGI,
> CGI::peristent, CGI::Carp, Net::LDAP, Net::LDAP::Util. it does not write
> to any files but it does read html from eight text files.
>
> OS is Sun Solaris 7
>
> Speedy version unknown - it does not respond to the -v option, so it's
> probably v2.11
>
> is it possible that the leaks are in your code? you can get into some
> weird situations when your program runs under speedy, where you end up
> doing things many times when you only wanted to do them once. my program
> ran for about a year before I realised it was reading in the eight text
> files on every HTTP connection, when I should really only read them once
> the first time thru.
>
> alternatively, it might be worth you trying speedy v2.11
>
> regards,
> mark.
>
> At 12:05 15/09/2003 +0200, PW wrote:
> >I have noticed the same behavior in a moderate traffic Perl application
> >(clean code, using custom OO modules, 1 hit/1-5 minutes, using the group
> >function across several virtual domains, process expiration set to 1 hour)
> >on SuSE Linux, too. Processes grow from an initial 7 MB to 60 MB+ over 1-3
> >days. Additionally, and as already mentioned on this list, there is also a
> >problem with too many socket connections established, eventually yielding
> >a "Too many files opened" error from Apache.
> >
> >Ph. Wiede
> >
> >James H. Thompson wrote:
> >>I'm using SpeedyCGI in a high volume application on Redhat 8, and noticed
> >>that the memory usage of speedy_backend continually grew over time, even
> >>if the -r option was set to a low number like 3.
> >>The memory consumption seemed to be tied to speedy_backend creating a new
> >>child.
> >>I created an experimental patch to fix this.
> >>The patch is here:
> >> www.voip-info.org/speedycgi
> >>
> >>Jim
> >>James H. Thompson
> >>jh...@la...
>
> --
> Mark Taylor,
> Department of Corporate Information & Computing Services,
> Extension 21145.
> (0114) 222 1145
> http://www.shef.ac.uk/~ad1mt
> The opinions expressed in this email are mine and not those of the
> University of Sheffield.
|
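[Editorial note: to tell whether memory growth like the 7 MB to 60 MB+ reported above is in the controller or the children, one can sample each process's resident set size over time from /proc. A minimal Linux-only sketch, not from the thread; the `rss_kb` helper name and the sampling loop are illustrative, and it is demonstrated on the current shell so the command runs as-is.]

```shell
# Report the resident set size (VmRSS, in kB) of a PID from /proc (Linux).
rss_kb() {
    awk '/^VmRSS:/ { print $2 }' /proc/"$1"/status
}

# On a live server one would sample every backend periodically, e.g.:
#   while sleep 60; do
#       for pid in $(pgrep speedy_backend); do
#           echo "$(date +%s) $pid $(rss_kb "$pid")"
#       done
#   done
# A steadily climbing value for one PID points at the leaking process.
rss_kb $$   # demonstrated on this shell
```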