speedycgi-users Mailing List for SpeedyCGI (Page 7)
From: Eichert, D. <de...@sa...> - 2002-11-11 14:41:56
|
Hello. I hadn't seen traffic on the SpeedyCGI list in a long time; then I saw that someone had added a SpeedyCGI port to the OpenBSD ports system. Wandering over to the SpeedyCGI page, I see you have a new project called PersistentPerl, which is exactly what I've always used SpeedyCGI for. You mentioned on the SpeedyCGI list that you were going to create a mail gateway between the two lists. Has that happened yet? Thanks, diana eichert |
|
From: swati <swa...@ya...> - 2002-11-05 08:18:31
|
Hi all, I would like to install SpeedyCGI on Cobalt RaQ4 servers. Any suggestions or comments on this installation would be most helpful. Thanks in advance, swati. |
|
From: Sam H. <sa...@da...> - 2002-10-18 17:04:09
|
> 2. Feature wish:
> In SpeedyCGI there is the add_shutdown_handler for post-processing when
> the backend terminates and there is the register_cleanup for doing the
> same after each request. However, in cases such as handling user
> sessions in a CGI environment using the SpeedyCGI Group directive, a
> unique process may serve a couple of users, holding their ongoing
> session data in memory. Other processes come in, even launched through
> the same Web visitor (user reloading page or requesting next page within
> short time, etc.), which is a problem in itself (a special global memory
> space, shared between active Group processes via the backend would be
> great indeed!)

I think other CPAN modules address the shared session problem. It looks
like CGI::Session is one. Maybe you could extend that module to use
memory instead of files, though if you use a ramdisk (like tmpfs in
Solaris), it's about the same.

> yet the shutdown_handler will not come into action when
> a process terminates, only when the backend itself shuts down.

No, the shutdown_handler is called before the unix process terminates.
There is one unix process for each backend.

> So my urging feature wish would be a "pid_shutdown_handler" that in my
> current case would post-process any in-memory, per-process session data
> to disk. Would this be doable?

Since shutdown_handler is already per process, it should already do this. |
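Sam's point - that the shutdown handler already runs once per backend unix process - can be sketched roughly as follows. This is a hedged example: only add_shutdown_handler and register_cleanup come from the CGI::SpeedyCGI API discussed above; the session hash and the save_sessions_to_disk helper are hypothetical.

```perl
use CGI::SpeedyCGI;

my $sp = CGI::SpeedyCGI->new;
my %session;    # hypothetical per-process, in-memory session data

# Runs after every request handled by this interpreter.
$sp->register_cleanup(sub {
    # e.g. update timestamps on %session entries
});

# Runs once, just before this backend process exits - the natural
# place to flush per-process state to disk.
$sp->add_shutdown_handler(sub {
    save_sessions_to_disk(\%session);    # hypothetical helper
});
```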
|
From: PW <sub...@me...> - 2002-10-18 09:27:17
|
Regarding my problem report (item 1):
After auditing again over 10000 lines of code, I finally found a well
hidden bug in the module that caused the symptoms which I wrongly
interpreted as an ALARM bug:
local( undef $/ );
...eval alarm routine ...
...trigger error if alarm and return;
...$/ = "\n";
which of course messed up further processing in the main application
sharing the same namespace whenever a timeout error was triggered, because I
forgot to set back the input record separator first, and because the
localizing was bogus; it should have been: local { undef $/ }; The hard
part in tracing this error was that it also suppressed the
debugging/error output.
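For reference, the usual way to slurp with an undefined input record separator, and have the old value restored automatically, is to localize $/ inside its own block. This is generic Perl, nothing SpeedyCGI-specific; $fh stands for any open filehandle:

```perl
my $data;
{
    local $/;          # $/ is undef only inside this block
    $data = <$fh>;     # slurp the whole filehandle at once
}
# here $/ has its previous value again - no manual reset needed
```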
In this sense, sorry for a false alarm ;)
Philippe Wiede
megapublic inc.
PW wrote:
> 1. Problem:
> ALARM eval kills SpeedyCGI process each time, causing a full reload of
> my application on the next request. This happens both in a module (if a
> WHOIS timeout error is raised) and in an internal subroutine (were the
> eval is most ever successful). I know that the change file of SpeedyCGI
> lists this issue as fixed but it does not seem so. Maybe it has to do
> with the Group feature?
>
> Example code:
> .........
>
> Result: a timeout signal causes the process to die after the HTTP request
> .........
|
|
From: Sam H. <sa...@da...> - 2002-10-17 20:33:01
|
> I was wondering if there is a good rule of thumb to use
> when running our scripts via speedy in regards to timeout and
> number of times run.

The MaxRuns setting is mainly for resource (usually memory) leaks. The more leaky your program, the smaller the number should be; otherwise you won't reclaim those resources. If your program doesn't leak at all, then you could safely unlimit it. The downside to a small MaxRuns value is that more time is spent starting new perl interpreters rather than using ones already running. BTW, starting in 2.20, the cost of using a small MaxRuns value decreased, especially if a lot of your overhead involves the static compilation of code. The statically compiled code is now shared, so a re-spawn (after the MaxRuns limit is reached) no longer involves a re-compile of that code.

The timeout is there to reclaim resources when a perl script is no longer being used. Setting the timeout low frees memory (and any other resources that script is using - like database handles) sooner, when your script is no longer needed. The downside of a low timeout is that, on average, the time spent starting new perl interpreters increases.

A good timeout value can be based on how often you think the script will be used. If you have a script that's used only once an hour, you wouldn't want to set the timeout to 30 minutes, because the script would just time out, exit and then get re-spawned every hour. There wouldn't be any persistence at all. So maybe a good rule of thumb would be to choose a timeout value that is double the average interval at which the script is used (i.e., if the average is once per hour, choose a two-hour timeout - something like that). Also, if you have plenty of memory, you can afford to keep your timeouts high. If you're short on memory, start lowering your timeouts, so you free up memory as soon as possible.

It's a time/space tradeoff - the time cost of starting a new interpreter versus the space cost of keeping an interpreter around in memory.

Sam |
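Putting the two rules of thumb together in the form already used on this list (speedy's -r flag for MaxRuns, -t for the timeout in seconds): a mildly leaky script requested about once an hour might combine a modest MaxRuns with a two-hour timeout. The exact numbers here are illustrative assumptions, not recommendations:

```perl
#!/usr/local/bin/speedy -w -- -r30 -t7200
# -r30   : respawn the interpreter after 30 runs (reclaims leaked memory)
# -t7200 : exit after 2 hours idle (double the assumed ~1/hour usage interval)
```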
|
From: Ronald D. W. <rdw...@ti...> - 2002-10-17 16:33:50
|
We are using speedy as a backend process to keep our perl scripts loaded in memory. The CGI::SpeedyCGI version is 2.11. I was wondering if there is a good rule of thumb to use when running our scripts via speedy in regards to timeout and number of times run. Here are the options we are using now to call the perl script:

/usr/local/bin/speedy -w -- -r30 -t60

Is there a good document out there that covers this? I looked in the archive contained at http://sourceforge.net/mailarchive/forum.php?forum=speedycgi-users to no avail. I also scanned over the doc contained at http://daemoninc.com/speedycgi/ and didn't see anything there. I am new to speedy so any help will be appreciated.

Thanks and Regards,
Ron |
|
From: PW <sub...@me...> - 2002-10-16 21:28:15
|
Hi Sam,
First off, thanks a lot for the newest SpeedyCGI release.
While beta-testing a new session tracking Web site application running
persistently in the background via SpeedyCGI (using the Group directive)
across a couple of virtual hosts for some time, I noticed one particular
problem and one particular issue resulting in a feature request:
1. Problem:
ALARM eval kills SpeedyCGI process each time, causing a full reload of
my application on the next request. This happens both in a module (if a
WHOIS timeout error is raised) and in an internal subroutine (where the
eval is almost always successful). I know that the Changes file of SpeedyCGI
lists this issue as fixed but it does not seem so. Maybe it has to do
with the Group feature?
Example code:
sub xxx {
my ($timeout,$missed,$currtime,$sema) = (2,0,time,"${db}.LCK");
eval {
local $SIG{ALRM} = sub { die 'lock busy!'; };
alarm $timeout;
while (1) {
last if sysopen(LF, $sema, O_RDWR|O_CREAT, 0644) and flock LF, LOCK_EX;
do { print STDERR "$$ lock busy\n"; alarm 0; last; } if
$missed++ > 5 && !-f $sema;
sleep 1; do { print STDERR "$$ lock timeout\n"; alarm 0; last; }
if time > $currtime + 15;
}
};
alarm 0;
if ( $@ =~ /lock busy!/ ) { print STDERR "timeout: $@\n"; close LF; }
... further code accesses a Berkeley DBM database ...
}
Result: a timeout signal causes the process to die after the HTTP request
Server details:
- SpeedyCGI 2.21
- Apache 1.36 Linux with mod_speedycgi enabled
- Perl 6.1
2. Feature wish:
In SpeedyCGI there is the add_shutdown_handler for post-processing when
the backend terminates and there is the register_cleanup for doing the
same after each request. However, in cases such as handling user
sessions in a CGI environment using the SpeedyCGI Group directive, a
unique process may serve a couple of users, holding their ongoing
session data in memory. Other processes come in, even launched through
the same Web visitor (user reloading page or requesting next page within
short time, etc.), which is a problem in itself (a special global memory
space, shared between active Group processes via the backend would be
great indeed!) - yet the shutdown_handler will not come into action when
a process terminates, only when the backend itself shuts down. So my
urgent feature wish would be a "pid_shutdown_handler" that in my
current case would post-process any in-memory, per-process session data
to disk. Would this be doable?
Thanks!
Philippe Wiede
megapublic inc.
|
|
From: Haviland, M. <mar...@nw...> - 2002-10-02 21:38:25
|
> -----Original Message-----
> From: Sam Horrocks [mailto:sa...@da...]
> Sent: Wednesday, October 02, 2002 12:46 PM
> To: Haviland, Mark
> Cc: spe...@li...
> Subject: Re: [Speedycgi-users] Compilation problems...
>
> I think other people have reported AIX problems like this. I don't
> remember anyone reporting a fix though.
>
> The remove_libs command is optional - it reduces the startup time for
> the speedy binary by removing extraneous shared libs. It's possible
> it's part of the problem, in which case you can just remove it and run
> the rest of the command (from "ld" on).

OK - I'll give this a try.

> Have you tried gcc?

No - I didn't think that it was installed on the box I have to compile on, but I just found it (version 2.95.2 19991024) so I'll try it as well.

> Or if you can give me an account on an AIX box I can try to fix it.

I wish I could :) Thanks for your response. I'll let you know what I find out.

-Mark |
|
From: Sam H. <sa...@da...> - 2002-10-02 17:45:57
|
I think other people have reported AIX problems like this. I don't remember anyone reporting a fix though.

The remove_libs command is optional - it reduces the startup time for the speedy binary by removing extraneous shared libs. It's possible it's part of the problem, in which case you can just remove it and run the rest of the command (from "ld" on).

Have you tried gcc?

Or if you can give me an account on an AIX box I can try to fix it.

Sam

> Hi all,
>
> I'm trying to compile PersistentPerl-2.21 (aka speedyCGI) on AIX 4.3
> (using the native compiler) but seem to be having some problems and am
> wondering if anyone has successfully compiled under AIX...or could help
> me out.
>
> Here's what's happening:
>
> When I do the standard 'perl Makefile.PL; make'
>
> I get the following link error:
>
> ../util/remove_libs ld -o perperl perperl_main.o perperl_cb.o perperl_circ.o perperl_util.o perperl_sig.o perperl_frontend.o perperl_backend.o perperl_file.o perperl_slot.o perperl_poll.o perperl_ipc.o perperl_group.o perperl_script.o perperl_opt.o perperl_optdefs.o -bE:/usr/local/lib/perl56/5.6.1/aix-ld/CORE/perl.exp -L/usr/local/lib -b32 /usr/local/lib/perl56/5.6.1/aix-ld/auto/DynaLoader/DynaLoader.a -L/usr/local/lib/perl56/5.6.1/aix-ld/CORE -lperl -lbind -lnsl -ldl -lld -lm -lC -lc -lcrypt -lbsd -lPW -liconv
> ld: 0711-327 WARNING: Entry point not found: __start
>
> However if I edit the perperl/Makefile and add '-bnoentry' to the following line:
>
> ../util/remove_libs $(LD) -o perperl $(OBJECT) -bE:/usr/local/lib/perl56/5.6.1/aix-ld/CORE/perl.exp -L/usr/local/lib -b32 /usr/local/lib/perl56/5.6.1/aix-ld/auto/DynaLoader/DynaLoader.a -L/usr/local/lib/perl56/5.6.1/aix-ld/CORE -lperl -lbind -lnsl -ldl -lld -lm -lC -lc -lcrypt -lbsd -lPW -liconv -bnoentry
>
> I no longer get the link warning, but when I try to run the 'make test' I get this:
>
> No tests defined for perperl_backend extension.
> PERL_DL_NONLAZY=1 PERPERL=/weblogs/markh/src/PersistentPerl-2.21/perperl/perperl PERPERL_BACKENDPROG=/weblogs/markh/src/PersistentPerl-2.21/perperl_backend/perperl_backend PERPERL_MODULE=/weblogs/markh/src/PersistentPerl-2.21// PERPERL_TIMEOUT=300 /usr/local/bin/perl -I../blib/arch -I../blib/lib -I/usr/local/lib/perl56/5.6.1/aix-ld -I/usr/local/lib/perl56/5.6.1 -e 'use Test::Harness qw(&runtests $verbose); $verbose=0; runtests @ARGV;' t/*.t
> t/alarm.............exec(): 0509-036 Cannot load program /weblogs/markh/src/PersistentPerl-2.21/perperl/perperl because of the following errors:
> 0509-151 The program does not have an entry point or the o_snentry field in the auxiliary header is invalid.
> 0509-194 Examine file headers with the 'dump -ohv' command.
> exec(): 0509-036 Cannot load program /weblogs/markh/src/PersistentPerl-2.21/perperl/perperl because of the following errors:
> 0509-151 The program does not have an entry point or the o_snentry field in the auxiliary header is invalid.
>
> Are there some symbols that still need to be retained (ie. via a -u flag)?
>
> Any ideas?
>
> -Mark Haviland |
|
From: Haviland, M. <mar...@nw...> - 2002-10-02 15:21:44
|
Hi all,

I'm trying to compile PersistentPerl-2.21 (aka speedyCGI) on AIX 4.3 (using the native compiler) but seem to be having some problems and am wondering if anyone has successfully compiled under AIX...or could help me out.

Here's what's happening:

When I do the standard 'perl Makefile.PL; make'

I get the following link error:

../util/remove_libs ld -o perperl perperl_main.o perperl_cb.o perperl_circ.o perperl_util.o perperl_sig.o perperl_frontend.o perperl_backend.o perperl_file.o perperl_slot.o perperl_poll.o perperl_ipc.o perperl_group.o perperl_script.o perperl_opt.o perperl_optdefs.o -bE:/usr/local/lib/perl56/5.6.1/aix-ld/CORE/perl.exp -L/usr/local/lib -b32 /usr/local/lib/perl56/5.6.1/aix-ld/auto/DynaLoader/DynaLoader.a -L/usr/local/lib/perl56/5.6.1/aix-ld/CORE -lperl -lbind -lnsl -ldl -lld -lm -lC -lc -lcrypt -lbsd -lPW -liconv
ld: 0711-327 WARNING: Entry point not found: __start

However if I edit the perperl/Makefile and add '-bnoentry' to the following line:

../util/remove_libs $(LD) -o perperl $(OBJECT) -bE:/usr/local/lib/perl56/5.6.1/aix-ld/CORE/perl.exp -L/usr/local/lib -b32 /usr/local/lib/perl56/5.6.1/aix-ld/auto/DynaLoader/DynaLoader.a -L/usr/local/lib/perl56/5.6.1/aix-ld/CORE -lperl -lbind -lnsl -ldl -lld -lm -lC -lc -lcrypt -lbsd -lPW -liconv -bnoentry

I no longer get the link warning, but when I try to run the 'make test' I get this:

No tests defined for perperl_backend extension.
PERL_DL_NONLAZY=1 PERPERL=/weblogs/markh/src/PersistentPerl-2.21/perperl/perperl PERPERL_BACKENDPROG=/weblogs/markh/src/PersistentPerl-2.21/perperl_backend/perperl_backend PERPERL_MODULE=/weblogs/markh/src/PersistentPerl-2.21// PERPERL_TIMEOUT=300 /usr/local/bin/perl -I../blib/arch -I../blib/lib -I/usr/local/lib/perl56/5.6.1/aix-ld -I/usr/local/lib/perl56/5.6.1 -e 'use Test::Harness qw(&runtests $verbose); $verbose=0; runtests @ARGV;' t/*.t
t/alarm.............exec(): 0509-036 Cannot load program /weblogs/markh/src/PersistentPerl-2.21/perperl/perperl because of the following errors:
0509-151 The program does not have an entry point or the o_snentry field in the auxiliary header is invalid.
0509-194 Examine file headers with the 'dump -ohv' command.
exec(): 0509-036 Cannot load program /weblogs/markh/src/PersistentPerl-2.21/perperl/perperl because of the following errors:
0509-151 The program does not have an entry point or the o_snentry field in the auxiliary header is invalid.

Are there some symbols that still need to be retained (ie. via a -u flag)?

Any ideas?

-Mark Haviland
|
|
From: Sam H. <sa...@da...> - 2002-10-01 22:05:27
|
Looks like $^X is not defined - I'll look into it. In the meantime
here's a workaround:
BEGIN {
$^X = 'perl' if ($CGI::SpeedyCGI::i_am_speedy);
$0 = $^X unless ($^X =~ m%(^|[/\\])(perl)|(perl.exe)$%i );
}
use FindBin;
> Hi Sam,
>
> The good news is the problems I reported some time back with
>
> use CGI::LogCarp qw(:STDLOG :STDBUG :STDERR);
>
> have been fixed with 2.20. Thanks.
>
> There is still an outstanding issue, the following code :
>
> #!/usr/local/bin/speedy
> BEGIN {
> $0 = $^X unless ($^X =~ m%(^|[/\\])(perl)|(perl.exe)$%i );
> }
> # Problem for speedy CGI
> use FindBin;
>
> Gives the following error:
>
> Cannot find current script '' at /usr/local/lib/perl5/5.6.1/FindBin.pm line
> 166
> BEGIN failed--compilation aborted at /usr/local/lib/perl5/5.6.1/FindBin.pm
> line 166.
> Compilation failed in require at ./test_speedy.cgi line 7.
> BEGIN failed--compilation aborted at ./test_speedy.cgi line 7.
> speedy_backend[1363]: perl_parse error
> speedy[1361]: Cannot spawn backend process
>
> The BEGIN block is a recommended fix to allow the use of FindBin with
> perl2exe. Perhaps you can see why this causes problems under speedyCGI ?
>
> Thanks for your help,
> Mark
>
> _______________________________________________
> Speedycgi-users mailing list
> Spe...@li...
> https://lists.sourceforge.net/lists/listinfo/speedycgi-users
|
|
From: Mark H. <mar...@wa...> - 2002-10-01 17:45:44
|
Hi Sam,
The good news is the problems I reported some time back with
use CGI::LogCarp qw(:STDLOG :STDBUG :STDERR);
have been fixed with 2.20. Thanks.
There is still an outstanding issue, the following code :
#!/usr/local/bin/speedy
BEGIN {
$0 = $^X unless ($^X =~ m%(^|[/\\])(perl)|(perl.exe)$%i );
}
# Problem for speedy CGI
use FindBin;
Gives the following error:
Cannot find current script '' at /usr/local/lib/perl5/5.6.1/FindBin.pm line
166
BEGIN failed--compilation aborted at /usr/local/lib/perl5/5.6.1/FindBin.pm
line 166.
Compilation failed in require at ./test_speedy.cgi line 7.
BEGIN failed--compilation aborted at ./test_speedy.cgi line 7.
speedy_backend[1363]: perl_parse error
speedy[1361]: Cannot spawn backend process
The BEGIN block is a recommended fix to allow the use of FindBin with
perl2exe. Perhaps you can see why this causes problems under speedyCGI ?
Thanks for your help,
Mark
|
|
From: Sam H. <sa...@da...> - 2002-09-30 08:10:43
|
SpeedyCGI release 2.21 is available at:
http://daemoninc.com/SpeedyCGI/download.html
The changes since 2.20 are:
- Fix too many backends problem reported by Theo Petersen. The
problem is due to temp-file corruption that occurs when the
web-server sends a TERM signal to the frontend while it is working
on the temp file. It also results in some backends failing due
to the corruption. Added a fix so that signals are always blocked
while working on the temp file.
- Shutdown handler should be called after script is touched.
- Fixes for Mac OS X 10.1. Workaround the sigpending() bug,
and add msync() which appears to fix a shared-memory flushing
problem when temp-file is expanded.
|
|
From: PW <sub...@me...> - 2002-09-23 15:16:53
|
Thanks for the "Group" explanation and revised SpeedyCGI update, Sam. The Group feature is indeed very useful, particularly when running the same application on various virtual hosts: much more efficient in terms of memory consumption, memory (code) sharing (especially of modules), and recompiling/start-ups.

By "Configuration examples to pass in mod_speedycgi environment variables" I actually meant examples of how to pass in SpeedyCGI parameters (Setting Option Values) via Apache/mod_speedycgi, a script, or the command line. This question is now clearly documented and plausible to me.

Best regards
Philippe Wiede
megapublic inc. [r] advertising and branding agency
Gemsberg 11, CH-4051 Basel, Switzerland
www.megapublic.com, www.megapublic.ch

Sam Horrocks wrote:
> > A slightly more extensive documentation would really be great, if your
> > time permits. My wish list:
> > ...
> > - Configuration examples to pass in mod_speedycgi environment variables
>
> I documented more about the groups, but I don't understand what you mean
> by this one - SetEnv can be used pretty much anywhere in Apache if you
> have mod_env, so it doesn't seem specific to SpeedyCGI.
>
> Can you explain more what you want documented? |
|
From: Sam H. <sa...@da...> - 2002-09-22 21:42:48
|
Looks like the problem is a bug in the Mac OS X sigpending() system call. It looks like it's fixed in OS X 10.2. The 10.2 release notes (http://developer.apple.com/technotes/tn2002/tn2053.html) say:

A problem where the routine sigpending() was not returning the pending signals in the sigset_t pointed to by its parameter has been corrected. (r. 2831405).

I've put a workaround into the SpeedyCGI code, and with that I was able to do a successful "make test" on OS X 10.1. You can get the workaround from the SpeedyCGI cvs tree, or manually patch the 2.20 code as follows:

- Edit src/speedy_sig.c
- Find the line: if (sigpending(&set) == -1)
- Right *before* this line, insert: sigemptyset(&set);

Since sigpending on Mac OS X is essentially a no-op, if we clear out the signals first, then we get no pending signals, and we don't get stuck in sigsuspend().

> Hi
> Does anybody know of a successful port of SpeedyCGI to MacOSX ?
> I could compile the source on OSX, but it stops functioning after a few
> invocations.
> (locking or threading problem?)
> tnx
> ~henq |
|
From: Sam H. <sa...@da...> - 2002-09-20 05:29:38
|
I've released the SpeedyCGI code under the name "PersistentPerl" to try
to clear up some confusion about what SpeedyCGI does. The PersistentPerl
code is currently generated automatically from the SpeedyCGI source,
so both releases will behave the same - one doesn't work better than
the other. At some point, PersistentPerl may replace SpeedyCGI, but
for now they'll both be kept in sync. PersistentPerl has a separate
namespace so it can co-exist with SpeedyCGI on the same system.
The home page for PersistentPerl is:
http://daemoninc.com/PersistentPerl
PersistentPerl has separate mailing lists from SpeedyCGI. I'll try
to get an email gateway going between the two projects.
Sam
|
|
From: Sam H. <sa...@da...> - 2002-09-20 01:02:39
|
SpeedyCGI release 2.20 is available at:
http://daemoninc.com/SpeedyCGI/download.html
There have been a couple of small bug fixes since release 2.20pre2.
The changes since the last major release (version 2.11) are:
- Added a mod_speedycgi module that works under Apache 2.0.39 or
later. Works with the default prefork mpm, but not with threaded
mpm's.
- Exit status is now passed from the backend to the frontend.
The frontend now exits soon after the backend exits instead of
when all sockets close.
- Fixed bug where alarms were unusable from within perl code.
- Signal handling in the backend has been cleaned up. Signal
settings will no longer be altered between perl runs.
- Find scripts by dev/ino/group-name instead of by dev/ino
- In the "speedy" executable buffering has been changed so
that BufsizGet and BufsizPut are now maximum values instead of
absolute values. The buffers will start small and grow
to this value if needed. The default values for these options
have been increased.
- Backend allocation is now controlled better. Another backend isn't
allocated until the previous backend has had a chance to start.
This should reduce the tendency to use too many backends when the
load fluctuates.
- Initially compiled perl-code is now shared among interpreters
within the same group (or same script if not using groups).
- To implement the new shared perl code and exit status features,
an extra parent process is created for each group (or for each
script, if not using groups). This process should use very little
cpu or un-shared memory.
- New code provides doubly linked lists for all slots in the
temp file. This eliminates some of the more obscure linked list
code in various places, and enables some minor performance
improvements.
|
|
From: Sam H. <sa...@da...> - 2002-09-19 19:25:17
|
From the upcoming release....
USING GROUPS
The group feature in SpeedyCGI can be used to help reduce the amount of
memory used by the perl interpreters. When groups are not used (ie when
group name is "none"), each perl script is given its own set of perl
interpreters, separate from the perl interpreters used for other
scripts. In SpeedyCGI each perl interpreter is also a separate unix
process.
When grouping is used, perl interpreters are put into a group. All perl
interpreters in that group can run perl scripts in that same group. What
this means is that by putting all your scripts into one group, there
could be one perl interpreter running all the perl scripts on your
system. This can greatly reduce your memory needs when running lots of
different perl scripts.
SpeedyCGI group names are entities unto themselves. They are not
associated with Unix groups, or with the Group directive in Apache.
Group names are created by the person running SpeedyCGI based on their
needs. There are two special group names "none" and "default". All other
group names are created by the user of SpeedyCGI using the Group option
described in the section on "OPTIONS".
If you want to use the maximum amount of grouping possible (ie all
scripts in the same interpreter), then you should always use the group
name "default". When you do this, you will get the fewest number of perl
interpreters possible. Each perl interpreter will be able to run any of
your perl scripts.
Although using group "default" for all scripts results in the most
efficient use of resources, it's not always possible or desirable to do
this. You may want to use other group names for the following reasons:
* To isolate misbehaving scripts, or scripts that don't work in groups.
Some scripts cannot work in groups. When perl scripts are grouped
together they are each given their own unique package name - they
are not run out of the "main" package as they normally would be. So,
for example, a script that explicitly uses "main" somewhere in its
code to find its subroutines or variables probably won't work in
groups. In this case, it's probably best to run such a script with
group "none", so it is compiled and run out of package main, and
always given its own interpreter.
Other scripts may make changes to included packages, etc, that may
break other scripts running in the same interpreter. In this case
such scripts can be given their own group name (like group name
"pariah") to keep them away from other scripts that they are
incompatible with. The rest of your scripts can then run out of
group "default". This will ensure that the "pariah" scripts won't
run within the same interpreter as your other scripts.
* To pass different perl or SpeedyCGI parameters to different scripts.
The first script to start up in a group sets the perl and SpeedyCGI
parameters used from then on for all scripts in that group. You may
want to use separate groups to create separate policies for
different scripts.
Say you have an email application that contains ten perl scripts,
and since the common perl code used in this application has a bad
memory leak, you want to use a MaxRuns setting of 5 for all of
these scripts. You then want all your other scripts to run in a
separate group with a normal MaxRuns policy. What you can do is edit
the ten email scripts, and at the top, put in the line:
#!/usr/bin/speedy -- -gmail -r5
In the rest of your perl scripts you can use:
#!/usr/bin/speedy -- -g
What this will do is put the ten email scripts into a group of their
own (named "mail") and give them all the default MaxRuns value of 5.
All other scripts will be put into the group named "default", and
this group will have the normal MaxRuns setting.
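On the mod_speedycgi side, the same per-group policy can also be expressed with environment variables under Apache. This is a hedged sketch: it assumes each option from the OPTIONS section can be set through a SPEEDY_<OPTION> variable (SPEEDY_GROUP, SPEEDY_MAXRUNS), that mod_env is available for SetEnv, and the /mail-app path is hypothetical - check the SpeedyCGI documentation for the exact variable names.

```apache
# Assumption: SpeedyCGI reads options from SPEEDY_* environment variables.
<Location /mail-app>
    SetEnv SPEEDY_GROUP   mail    # same effect as "-gmail" on the #! line
    SetEnv SPEEDY_MAXRUNS 5       # same effect as "-r5"
</Location>
```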
|
|
From: Sam H. <sa...@da...> - 2002-09-17 05:32:13
|
If you have an OSX box with ktrace kernel support, I'll give it a try. I compiled speedy on the Mac that's in the sourceforge compile farm, but "make test" failed. Ktrace would be a good way to debug it, but sourceforge can't install support for that in their kernel without losing some other Mac features.

Sam

> Hi
> Does anybody know of a successful port of SpeedyCGI to MacOSX ?
> I could compile the source on OSX, but it stops functioning after a few
> invocations.
> (locking or threading problem?)
> tnx
> ~henq |
|
From: Sam H. <sa...@da...> - 2002-09-07 01:05:34
|
Have you tried gcc instead of the HP compiler? I've seen problems like
this come up when perl and speedy are compiled with two different C
compilers. The flags that worked when compiling perl with the first
compiler don't work when compiling speedy with the other compiler.

> Hi all,
>
> I'm new to the list and to SpeedyCGI, so in case this question was asked
> before, forgive my ignorance. (I browsed the archives, but didn't find
> relevant previous postings on the subject.)
>
> I'm trying to build SpeedyCGI on an HP-UX 10.20 system. Configuration data:
>
> - HP-UX 10.20
> - HP cc compiler
> - Perl 5.8.0
> - Apache 1.3.20
> - SpeedyCGI 2.11
>
> Makefile generation works fine using "perl Makefile.PL". When I run
> 'make', I first run into a linker error in the speedy_backend
> directory. The Makefile there uses options such as "-Wl,-E" in the
> command line for ld which the linker doesn't grok. After fixing that (by
> changing "-Wl,-E" to "-E"), the build proceeds a little further, but
> then bails out when trying to link the speedy_backend executable. The
> message which I get is "Unsatisfied symbols: $global$ (data)".
>
> Some documentation I found on this error message leads me to think that
> it might mean that this symbol is not properly exported from some file
> and therefore cannot be found. The recommendations I found mentioned
> using the linker option "-E" to export all symbols, but this is already
> done for the SpeedyCGI file, as it seems, so I'm scratching my head now
> and could use a little help 8-)
>
> I've tried this on two different HP-UX systems, with various versions of
> Perl. Same problem everywhere. Anybody here who managed to build
> SpeedyCGI on HP-UX before?
>
> Thanks!
>
> Claus
>
> --cla...@co...-----------------------------------------------
> Claus Brod, CoCreate R&D      Have you hugged your manager today?
> CoCreate Software GmbH
> http://www.cocreate.com       Phone: +49 7031 951 2152
> --http://clausbrod.com --------------------------#include <disclaimer>-- |
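[Editorial note: Sam's diagnosis can be checked directly, since perl records the compiler it was built with in Config.pm, and speedy's Makefile.PL inherits those settings. A minimal sketch — the `compilers_match` helper is hypothetical, added for illustration:]

```shell
#!/bin/sh
# Which C compiler was this perl built with? Makefile.PL inherits this
# value (plus ccflags/ccdlflags) via Config.pm:
#   perl -MConfig -e 'print "$Config{cc}\n$Config{ccdlflags}\n"'
#
# Hypothetical helper: compare that value against the compiler you plan
# to build speedy with, ignoring any leading directory path.
compilers_match() {
    [ "$(basename "$1")" = "$(basename "$2")" ]
}

if compilers_match "cc" "gcc"; then
    echo "same toolchain"
else
    echo "mismatch: build speedy with the same compiler perl was built with"
fi
```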
|
From: Claus B. <cla...@co...> - 2002-08-29 20:11:54
|
Hi all,

I'm new to the list and to SpeedyCGI, so in case this question was asked
before, forgive my ignorance. (I browsed the archives, but didn't find
relevant previous postings on the subject.)

I'm trying to build SpeedyCGI on an HP-UX 10.20 system. Configuration data:

- HP-UX 10.20
- HP cc compiler
- Perl 5.8.0
- Apache 1.3.20
- SpeedyCGI 2.11

Makefile generation works fine using "perl Makefile.PL". When I run
'make', I first run into a linker error in the speedy_backend directory.
The Makefile there uses options such as "-Wl,-E" in the command line for
ld which the linker doesn't grok. After fixing that (by changing
"-Wl,-E" to "-E"), the build proceeds a little further, but then bails
out when trying to link the speedy_backend executable. The message which
I get is "Unsatisfied symbols: $global$ (data)".

Some documentation I found on this error message leads me to think that
it might mean that this symbol is not properly exported from some file
and therefore cannot be found. The recommendations I found mentioned
using the linker option "-E" to export all symbols, but this is already
done for the SpeedyCGI file, as it seems, so I'm scratching my head now
and could use a little help 8-)

I've tried this on two different HP-UX systems, with various versions of
Perl. Same problem everywhere. Anybody here who managed to build
SpeedyCGI on HP-UX before?

Thanks!

Claus

--cla...@co...-----------------------------------------------
Claus Brod, CoCreate R&D      Have you hugged your manager today?
CoCreate Software GmbH
http://www.cocreate.com       Phone: +49 7031 951 2152
--http://clausbrod.com --------------------------#include <disclaimer>-- |
|
From: Sam H. <sa...@da...> - 2002-08-01 05:47:00
|
Problems like this seem to come up when your perl was compiled/linked
with a different compiler/linker than what you're using now. The options
that worked on the other compiler/linker don't work on this one.
Probably the most straightforward workaround is to compile and install
your own perl, then use that perl to build speedy.

> Dear all,
>
> I am having a problem installing SpeedyCGI on HPUX
> 10.20. I don't have root access, so I am currently
> trying to install it under my home directory, e.g.
>
> perl Makefile.PL prefix=$HOME
>
> which works fine. However, when I try to do a 'make',
> the compilation continues until the object files are
> about to be linked together. I get the following
> error:
>
> ld -o speedy_backend speedy_backend_main.o
> speedy_perl.o speedy_util.o speedy_sig.o
> speedy_frontend.o speedy_backend.o speedy_file.o
> speedy_slot.o speedy_poll.o speedy_ipc.o
> speedy_group.o speedy_script.o speedy_opt.o
> speedy_optdefs.o xsinit.o -Wl,-E -Wl,-B,deferred
> -L/usr/local/lib
> /opt/perl5/lib/5.6.0/PA-RISC1.1/auto/DynaLoader/DynaLoader.a
> -L/opt/perl5/lib/5.6.0/PA-RISC1.1/CORE -lperl -lnsl_s
> -lndbm -ldld -lm -lc -lndir -lcrypt -lsec
> ld: Unrecognized argument: -Wl,-E
> ld: Usage: ld flags... files...
> *** Error exit code 1
>
> Stop.
> *** Error exit code 1
>
> Stop.
>
> It looks like the ld command does not like the -Wl,-E
> option (and presumably the -Wl,-B,deferred option
> too). The output of perl -V is:
>
> Summary of my perl5 (revision 5.0 version 6 subversion
> 0) configuration:
> Platform:
> osname=hpux, osvers=10.20, archname=PA-RISC1.1
> uname='hp-ux fdhp b.10.20 c 9000831 2002997242
> 32-user license '
> config_args='-de'
> hint=recommended, useposix=true,
> d_sigaction=define
> usethreads=undef use5005threads=undef
> useithreads=undef usemultiplicity=undef
> useperlio=undef d_sfio=undef uselargefiles=define
> use64bitint=undef use64bitall=undef
> uselongdouble=undef usesocks=undef
> Compiler:
> cc='cc', optimize='-O', gccversion=
> cppflags='-D_HPUX_SOURCE -Aa'
> ccflags =' -D_HPUX_SOURCE -D_LARGEFILE_SOURCE
> -D_FILE_OFFSET_BITS=64 -Ae'
> stdchar='unsigned char', d_stdstdio=define,
> usevfork=false
> intsize=4, longsize=4, ptrsize=4, doublesize=8
> d_longlong=define, longlongsize=8,
> d_longdbl=define, longdblsize=16
> ivtype='long', ivsize=4, nvtype='double',
> nvsize=8, Off_t='off_t', lseeksize=8
> alignbytes=8, usemymalloc=y, prototype=define
> Linker and Libraries:
> ld='ld', ldflags =' -L/usr/local/lib'
> libpth=/usr/local/lib /lib /usr/lib /usr/ccs/lib
> libs=-lnsl_s -lndbm -ldld -lm -lc -lndir -lcrypt
> -lsec
> libc=/lib/libc.sl, so=sl, useshrplib=false,
> libperl=libperl.a
> Dynamic Linking:
> dlsrc=dl_hpux.xs, dlext=sl, d_dlsymun=undef,
> ccdlflags='-Wl,-E -Wl,-B,deferred '
> cccdlflags='+z', lddlflags='-b +vnocompatwarnings
> -L/usr/local/lib'
>
> Characteristics of this binary (from libperl):
> Compile-time options: USE_LARGE_FILES
> Built under hpux
> Compiled at Apr 26 2000 16:12:40
> @INC:
> /opt/perl5/lib/5.6.0/PA-RISC1.1
> /opt/perl5/lib/5.6.0
> /opt/perl5/lib/site_perl/5.6.0/PA-RISC1.1
> /opt/perl5/lib/site_perl/5.6.0
> /opt/perl5/lib/site_perl
> .
>
> The documentation doesn't seem to suggest that it
> won't compile on HPUX - though the OSs listed don't
> include HPUX.
> Has something gone wrong when perl was installed that
> somehow mangled the 'ccdlflags' setting of Config.pm?
> Or am I barking up the wrong tree?
> Any clues how I might fix this?
>
> Thanks,
>
> Rajesh Parmar |
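[Editorial note: Sam's workaround above — a private perl built with one consistent toolchain — can be sketched roughly as follows. The prefix path and Configure flags are illustrative, not from the thread:]

```shell
#!/bin/sh
# Rough sketch of building a private perl under $HOME (no root needed),
# then using it to build speedy so compiler and linker flags match.
# The prefix and the -Dcc value below are examples only.
PREFIX="$HOME/myperl"

# 1. In an unpacked perl source tree:
#      ./Configure -des -Dprefix="$PREFIX" -Dcc=gcc
#      make && make test && make install
#
# 2. Then build SpeedyCGI with that perl, so Makefile.PL picks up the
#    matching Config.pm settings:
#      "$PREFIX/bin/perl" Makefile.PL
#      make && make test && make install
echo "private perl would live under: $PREFIX"
```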
|
From: <rcp...@ya...> - 2002-07-30 15:57:20
|
Dear all,
I am having a problem installing SpeedyCGI on HPUX
10.20. I don't have root access, so I am currently
trying to install it under my home directory, e.g.
perl Makefile.PL prefix=$HOME
which works fine. However, when I try to do a 'make',
the compilation continues until the object files are
about to be linked together. I get the following
error:
ld -o speedy_backend speedy_backend_main.o
speedy_perl.o speedy_util.o speedy_sig.o
speedy_frontend.o speedy_backend.o speedy_file.o
speedy_slot.o speedy_poll.o speedy_ipc.o
speedy_group.o speedy_script.o speedy_opt.o
speedy_optdefs.o xsinit.o -Wl,-E -Wl,-B,deferred
-L/usr/local/lib
/opt/perl5/lib/5.6.0/PA-RISC1.1/auto/DynaLoader/DynaLoader.a
-L/opt/perl5/lib/5.6.0/PA-RISC1.1/CORE -lperl -lnsl_s
-lndbm -ldld -lm -lc -lndir -lcrypt -lsec
ld: Unrecognized argument: -Wl,-E
ld: Usage: ld flags... files...
*** Error exit code 1
Stop.
*** Error exit code 1
Stop.
It looks like the ld command does not like the -Wl,-E
option (and presumably the -Wl,-B,deferred option
too). The output of perl -V is:
Summary of my perl5 (revision 5.0 version 6 subversion
0) configuration:
Platform:
osname=hpux, osvers=10.20, archname=PA-RISC1.1
uname='hp-ux fdhp b.10.20 c 9000831 2002997242
32-user license '
config_args='-de'
hint=recommended, useposix=true,
d_sigaction=define
usethreads=undef use5005threads=undef
useithreads=undef usemultiplicity=undef
useperlio=undef d_sfio=undef uselargefiles=define
use64bitint=undef use64bitall=undef
uselongdouble=undef usesocks=undef
Compiler:
cc='cc', optimize='-O', gccversion=
cppflags='-D_HPUX_SOURCE -Aa'
ccflags =' -D_HPUX_SOURCE -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64 -Ae'
stdchar='unsigned char', d_stdstdio=define,
usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8,
d_longdbl=define, longdblsize=16
ivtype='long', ivsize=4, nvtype='double',
nvsize=8, Off_t='off_t', lseeksize=8
alignbytes=8, usemymalloc=y, prototype=define
Linker and Libraries:
ld='ld', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib /usr/ccs/lib
libs=-lnsl_s -lndbm -ldld -lm -lc -lndir -lcrypt
-lsec
libc=/lib/libc.sl, so=sl, useshrplib=false,
libperl=libperl.a
Dynamic Linking:
dlsrc=dl_hpux.xs, dlext=sl, d_dlsymun=undef,
ccdlflags='-Wl,-E -Wl,-B,deferred '
cccdlflags='+z', lddlflags='-b +vnocompatwarnings
-L/usr/local/lib'
Characteristics of this binary (from libperl):
Compile-time options: USE_LARGE_FILES
Built under hpux
Compiled at Apr 26 2000 16:12:40
@INC:
/opt/perl5/lib/5.6.0/PA-RISC1.1
/opt/perl5/lib/5.6.0
/opt/perl5/lib/site_perl/5.6.0/PA-RISC1.1
/opt/perl5/lib/site_perl/5.6.0
/opt/perl5/lib/site_perl
.
The documentation doesn't seem to suggest that it
won't compile on HPUX - though the OSs listed don't
include HPUX.
Has something gone wrong when perl was installed that
somehow mangled the 'ccdlflags' setting of Config.pm?
Or am I barking up the wrong tree?
Any clues how I might fix this?
Thanks,
Rajesh Parmar
__________________________________________________
Do You Yahoo!?
Everything you'll ever need on one web page
from News and Sport to Email and Music Charts
http://uk.my.yahoo.com
|
|
From: henq <he...@wa...> - 2002-07-25 20:55:53
|
Hi

Does anybody know of a successful port of SpeedyCGI to MacOSX ?

I could compile the source on OSX, but it stops functioning after a few
invocations.

(locking or threading problem?)

tnx

~henq |
|
From: Sam H. <sa...@da...> - 2002-07-22 02:48:27
|
I can use both modules in a test script. I tried this script:
#!/usr/bin/speedy
use FindBin qw($Bin);
use CGI::LogCarp;
print "bin=$Bin\n";
carp "testing carp";
The output from the script was:
[Sun Jul 21 21:40:37 2002] 2036 t.pl ERR: testing carp at ./t.pl line 5
bin=/share/src
I'm using the following software:
SpeedyCGI speedy version 2.20pre2 built for perl version 5.006_00 on i386-linux
LogCarp-1.12
I also tried SpeedyCGI 2.11 and it worked there too.
Sam
> Hi,
>
> I was looking at the SpeedyCGI package - it seems like an excellent solution
> to speeding up perl CGI applications.
>
> I immediately ran into the following :
>
> Could not create STDBUG stream: Invalid argument at
> /usr/lib/perl5/site_perl/5.6.0/CGI/LogCarp.pm line 1463.
>
> FindBin.pm: Cannot find current script '' at /usr/lib/perl5/5.6.0/FindBin.pm
> line 166
>
> Can I use CGI::LogCarp and FindBin modules with CGI::Speedy ?
>
> Thanks,
> Mark
>
|