speedycgi-users Mailing List for SpeedyCGI (Page 8)
|
From: Mark H. <mar...@wa...> - 2002-07-15 17:58:46
|
Hi, I was looking at the SpeedyCGI package - it seems like an excellent solution to speeding up perl CGI applications. I immediately ran into the following:
Could not create STDBUG stream: Invalid argument at /usr/lib/perl5/site_perl/5.6.0/CGI/LogCarp.pm line 1463.
FindBin.pm: Cannot find current script '' at /usr/lib/perl5/5.6.0/FindBin.pm line 166
Can I use the CGI::LogCarp and FindBin modules with CGI::SpeedyCGI?
Thanks, Mark
|
|
From: Sam H. <sa...@da...> - 2002-07-09 20:08:05
|
> On Tue, Jul 09, 2002 at 11:53:52AM +0200, Lars Uffmann wrote:
> > Thanx Sam, i got the suid working. Unhappily the qmail-queue wrapper
> > won't work out of the box. I will try to debug this further. Could it be
> > a problem that the qmail-queue wrapper dup's STDOUT (fd 1) and open it
> > for reading?
If it's trying to read from stdout that won't work. Speedycgi currently only
supports output from stdout, not input to stdout.
> Ok, here are some more details. All tests were made on linux 2.4. I first
> tried 2.20pre2, compiled with -DIAMSUID. Apparently, pre2 doesn't work at
> all, suid or not. Are there any known problems with pre2 on linux? The
> calling process blocks on writing to the speedy_frontend.
Pre2 passes "make test" on my linux system, and I'm using it for some of
my cgi's. Also, there wasn't much changed in the frontend between pre1 and
pre2. But it is a pre-release, meaning it needs testing before the real
release.
Does it pass "make test" for you? Is there something I could run to
reproduce the problem? If there is a problem with pre2 I need
to fix it before the 2.20 release.
> I then tried 2.20pre1 which does work. Setuid works too. I then came to
> the problem I suspected in my first posting: the following code does not
> work, the while(<SOUT>) loop is never reached, but neither the open() nor
> the close() fail.
>
> >>>
> sub grab_envelope_hdrs {
> select(STDOUT); $|=1;
> open(SOUT,"<&STDOUT")||&tempfail("cannot dup fd 0 - $!");
> while (<SOUT>) {
> [...]
> }
> close(SOUT)||&tempfail("cannot close fd 1 - $!");
> <<<
I'm guessing the <SOUT> statement is coming back with an error immediately,
since the STDOUT socket has been shut down for writes.
If you want to be able to read from stdout, you'll probably need
file-descriptor passing. That is not in the current speedycgi code - it'll
probably be coming in the next release after 2.20.
PPerl supposedly does file-descriptor passing in the current release.
I've never used it so I have no idea how well it would work. Or if
you want to try grafting file-descriptor passing into the speedycgi code,
I'll incorporate your changes as best I can.
> regards,
> --
> Lars Uffmann, <lar...@me...>, fon: +49 5246 80 1330
>
>
> -------------------------------------------------------
> This sf.net email is sponsored by:ThinkGeek
> Stuff, things, and much much more.
> http://thinkgeek.com/sf
> _______________________________________________
> Speedycgi-users mailing list
> Spe...@li...
> https://lists.sourceforge.net/lists/listinfo/speedycgi-users
|
|
From: Lars U. <lu...@me...> - 2002-07-09 11:42:57
|
On Tue, Jul 09, 2002 at 11:53:52AM +0200, Lars Uffmann wrote:
> Thanks Sam, I got the suid working. Unhappily the qmail-queue wrapper
> won't work out of the box. I will try to debug this further. Could it be
> a problem that the qmail-queue wrapper dup's STDOUT (fd 1) and opens it
> for reading?
Ok, here are some more details. All tests were made on linux 2.4. I first
tried 2.20pre2, compiled with -DIAMSUID. Apparently, pre2 doesn't work at
all, suid or not. Are there any known problems with pre2 on linux? The
calling process blocks on writing to the speedy_frontend.
I then tried 2.20pre1 which does work. Setuid works too. I then came to
the problem I suspected in my first posting: the following code does not
work, the while(<SOUT>) loop is never reached, but neither the open() nor
the close() fail.
>>>
sub grab_envelope_hdrs {
select(STDOUT); $|=1;
open(SOUT,"<&STDOUT")||&tempfail("cannot dup fd 0 - $!");
while (<SOUT>) {
[...]
}
close(SOUT)||&tempfail("cannot close fd 1 - $!");
<<<
regards,
--
Lars Uffmann, <lar...@me...>, fon: +49 5246 80 1330
|
|
From: Lars U. <lu...@me...> - 2002-07-09 09:53:44
|
Sam Horrocks wrote:
[ ... ]
>
> - Take the resulting "speedy" binary and install it suid-root as
> /usr/bin/speedy_suid
>
> - Change your setuid scripts to use /usr/bin/speedy_suid as the
> interpreter.
>
>
>
Thanks Sam, I got the suid working. Unhappily the qmail-queue wrapper
won't work out of the box. I will try to debug this further. Could it be
a problem that the qmail-queue wrapper dup's STDOUT (fd 1) and opens it
for reading?
regards,
Lars
>
>
|
|
From: PW <sub...@me...> - 2002-07-09 08:24:44
|
Hi Sam
Thanks, it works perfectly when used as shown below! I was not aware that
you could mix the two modules together.
(defined $s{OS} && $s{OS}) || init_constants();

# ... your main routines ...

sub init_constants {
    $s{OS} = $^O;

    sub shutdown_process {
        # ... your cleanup routine ...
    }

    if ( eval { require CGI::SpeedyCGI } && CGI::SpeedyCGI->i_am_speedy ) {
        CGI::SpeedyCGI->add_shutdown_handler( \&shutdown_process );
    }
}
Philippe Wiede
megapublic inc.
Sam Horrocks wrote:
>
> You should be able to do this with a shutdown handler. The perl
> interpreter backend is the same whether you use "mod_speedycgi" or
> "speedy" as the frontend.
>
> Here's a test script. Put this in your mod_speedycgi directory and
> run it by doing an http request. Five seconds later, a file named
> "/tmp/data" should show up when the backend times out and shuts down.
>
> #!/usr/bin/speedy -- -t5
> use CGI::SpeedyCGI;
> sub save_data {
> open(F, ">>/tmp/data"); print F "data\n"; close(F);
> }
> BEGIN {
> CGI::SpeedyCGI->add_shutdown_handler(\&save_data);
> }
> print "Content-type: text/plain\n\nhello world\n";
>
> > Folks
> >
> > Is there a way to catch the end of life of a SpeedyCGI front-end
> > process? What I am looking for is sort of a shutdown handler like the
> > SpeedyCGI CGI module provides, however I am utilizing the SpeedyCGI
> > Apache module which does not provide this functionality. Obviously, END
> > blocks are ignored in a persistent environment and trying to do it via
> > signal handling seems a little hairy if not unreliable.
> >
> > As a specific example: a CGI app uses session tracking, keeping the
> > session data as a per process hash persistently in memory. If the
> > process dies due to inactivity, max runs or whatever, it should perform
> > one last clean-up job, i.e. flushing the remains of the session data to
> > disk. Of course, one could store or tie the session data on each
> > invocation to disk, losing some efficiency, but I want to reap the
> > benefits of persistency!
|
|
From: Sam H. <sa...@da...> - 2002-07-08 14:53:54
|
There is some poorly documented support for setuid scripts in the existing
release. It only works on some OSes though (freebsd/linux - not solaris).
To use it you first install a setuid-root "/usr/bin/speedy_suid",
change your #! line to use that, and then make your perl script
setuid to qmailq.
Here are the instructions for installing it (this should be documented - I'll
put that on the todo list for the 2.20 release).
To install the setuid speedy do the following:
- Run "perl Makefile.PL"
- Edit speedy/Makefile and add "-DIAMSUID" to the end of the "DEFINE = "
line.
- Run make
- Take the resulting "speedy" binary and install it suid-root as
/usr/bin/speedy_suid
- Change your setuid scripts to use /usr/bin/speedy_suid as the
interpreter.
> Hi,
>
> I recently tried to use speedy to speed up qmail-scanner, a server-side
> email scanner for qmail. It works as a drop-in replacement for
> qmail-queue, which is a setuid (qmailq/qmail) binary.
> How could speedy handle setuid? A possible solution would be to arrange
> for a suid-root speedy_backend like suidperl. We could then support
> something like a run_as($user) method in CGI::SpeedyCGI, or another
> command-line switch to the speedy frontend.
>
> regards,
> Lars
>
>
>
|
|
From: Sam H. <sa...@da...> - 2002-07-08 14:31:42
|
You should be able to do this with a shutdown handler. The perl
interpreter backend is the same whether you use "mod_speedycgi" or
"speedy" as the frontend.
Here's a test script. Put this in your mod_speedycgi directory and
run it by doing an http request. Five seconds later, a file named
"/tmp/data" should show up when the backend times out and shuts down.
#!/usr/bin/speedy -- -t5
use CGI::SpeedyCGI;

sub save_data {
    open(F, ">>/tmp/data"); print F "data\n"; close(F);
}

BEGIN {
    CGI::SpeedyCGI->add_shutdown_handler(\&save_data);
}

print "Content-type: text/plain\n\nhello world\n";
> Folks
>
> Is there a way to catch the end of life of a SpeedyCGI front-end
> process? What I am looking for is sort of a shutdown handler like the
> SpeedyCGI CGI module provides, however I am utilizing the SpeedyCGI
> Apache module which does not provide this functionality. Obviously, END
> blocks are ignored in a persistent environment and trying to do it via
> signal handling seems a little hairy if not unreliable.
>
> As a specific example: a CGI app uses session tracking, keeping the
> session data as a per process hash persistently in memory. If the
> process dies due to inactivity, max runs or whatever, it should perform
> one last clean-up job, i.e. flushing the remains of the session data to
> disk. Of course, one could store or tie the session data on each
> invocation to disk, losing some efficiency, but I want to reap the
> benefits of persistency!
>
>
> Thanks!
>
> Philippe Wiede
> megapublic inc.
>
> Attention, please ensure for your reply not to use one
> of the following mail server addresses as they are not
> allowed to access the megapublic network:
>
> altavista. - bigfoot.com - bol.com.br - caramail.com - desertmail.com
> email.com - emediatemail.com - eudoramail.com - excite.
> flashmail.com - hotbot.com - hotmail. - iwon. - juno.com
> katamail.com - lycos.com - msn.com - mail.com - mail4you.de
> mailcity.com - n2mail.com - netscape. - newmail.com
> sprintmail.com - uol. - usa.net - visitmail.com - webtv.net - yahoo.
>
>
|
|
From: PW <sub...@me...> - 2002-07-08 09:02:29
|
Folks,
Is there a way to catch the end of life of a SpeedyCGI front-end process? What I am looking for is sort of a shutdown handler like the one the CGI::SpeedyCGI module provides; however, I am utilizing the SpeedyCGI Apache module, which does not provide this functionality. Obviously, END blocks are ignored in a persistent environment, and trying to do it via signal handling seems a little hairy if not unreliable.
As a specific example: a CGI app uses session tracking, keeping the session data as a per-process hash persistently in memory. If the process dies due to inactivity, max runs or whatever, it should perform one last clean-up job, i.e. flushing the remains of the session data to disk. Of course, one could store or tie the session data to disk on each invocation, losing some efficiency, but I want to reap the benefits of persistency!
Thanks!
Philippe Wiede
megapublic inc.
|
|
From: Lars U. <lar...@me...> - 2002-07-04 13:59:32
|
Hi,
I recently tried to use speedy to speed up qmail-scanner, a server-side
email scanner for qmail. It works as a drop-in replacement for
qmail-queue, which is a setuid (qmailq/qmail) binary.
How could speedy handle setuid? A possible solution would be to arrange
for a suid-root speedy_backend like suidperl. We could then support
something like a run_as($user) method in CGI::SpeedyCGI, or another
command-line switch to the speedy frontend.
regards,
Lars
|
|
From: Vladimir T. <vla...@fo...> - 2002-06-30 19:28:54
|
Hello,
I answer a little too late, but I was too busy.
I just want to write that I use SpeedyCGI on some occasions, especially when
I want a speedy cgi and cannot use mod_perl.
However, I do know that SpeedyCGI can be used outside of the web environment
and would be perfect for mail filters and other types of systems that can be
extended with perl scripts (e.g. netsaint). I used it with htdig once.
I have wondered why SpeedyCGI seems to be unknown in the community,
because I think it's very useful.
As for the name: when I saw pperl for the first time, I thought -
what a perfect name for SpeedyCGI. It is exactly what the name says - the perl
[interpreter] becomes persistent, not "speedy".
So I like SpeedyCGI for its functionality, but I think the name pperl is the
more proper one. Besides, the acronym sperl, which could be used for speedy
perl, is misleading.
However, English is not my primary language, so maybe you could find another
proper name.
Anyway, thanks for the excellent program!
Bye,
Vladimir Tomasovic
|
|
From: Sam H. <sa...@da...> - 2002-06-29 19:02:19
|
SpeedyCGI 2.20 pre-release 2 is available at:
http://daemoninc.com/SpeedyCGI/download.html
The changes since 2.20pre1 are:
- Added a mod_speedycgi module that works under Apache 2.0.39 or
later. Works with the default prefork mpm, but not with threaded
mpm's.
There were also a couple of changes in the core code that were needed
due to problems found during apache-2 module testing on Solaris.
They may fix other problems too, so if you had trouble with 2.20pre1,
this release might help.
|
|
From: PW <sub...@me...> - 2002-06-24 09:55:36
|
Sam,
Please forget my assumed bug report. After taking a closer look at the log files this morning, I discovered that at the same time an identical request was also performed for the domain running the SpeedyCGI application. %ENV was therefore set correctly.
Sorry,
Philippe Wiede
Megapublic Inc.
www.megapublic.com
Sam Horrocks wrote:
> ......... Could be a bug in the module. Is it reproducible? .........
|
|
From: Sam H. <sa...@da...> - 2002-06-24 02:18:03
|
> Neither was I aware that SpeedyCGI supports this feature. A slightly
> more extensive documentation would really be great, if your time
> permits. My wish list:
>
> - Additional information regarding the Group feature
>   (i.e. what defines a Group name or user? Apache's "nogroup" for example?)
> - How would you declare a main CGI application and its associated
>   modules as a Group?
> - Examples of Group configuration strategies and associated benefits
> - Configuration examples to pass in mod_speedycgi environment variables
OK, I'll try to do this.
> Another question, yesterday I observed a weird behaviour with SpeedyCGI:
> an environment variable from a static CGI program not running under
> SpeedyPerl (also hosted under a different domain/IP) managed to slip
> into the environment of a persistent SpeedyCGI application. Is this
> possible? Here is some info:
>
> Access log entry for a confused remote robot sending a request to host
> "megapublic.de" for the page "http://www.ebay.com/" (each access
> triggers a local static SSI-CGI program (the non-SpeedyPerl one, named
> clicktracer.cgi)):
>
> p5082fb38.dip.t-dialin.net - - [22/Jun/2002:23:39:49 +0200] "GET
> http://www.ebay.com/ HTTP/1.1" 200 16575 "-" "Mozilla/4.0 (compatible;
> MSIE 4.01; Windows 95)" "p5082fb38.dip.t-dialin.net.219521024781986427" 3
Is this the log entry for the first access?
> Somehow the variable ENV{REQUEST_URI} containing "http://www.ebay.com/"
> managed to push itself into the %ENV of a SpeedyPerl application named
> clicktracer.scgi, running persistently on a different domain,
> "megapublic.com".
>
> Pretty weird, no?
Could be a bug in the module. Is it reproducible?
> As for the second name, SpeedyPerl sounds ok.
>
> Thanks!
>
> Philippe Wiede
> Megapublic Inc.
> www.megapublic.com
> ClickTracer [tm]
>
> Sam Horrocks wrote:
> >
> > Speedy has always supported non-CGI persistent perl programs as a sort
> > of secondary development goal. It's never been at the forefront until
> > recently. But, for example, the exit-status code that's coming in 2.20
> > was written purely for non-CGI programs - only non-CGI programs would
> > care about that feature.
> >
> > I don't think I've communicated the non-CGI capabilities of SpeedyCGI
> > very well. For example, Matt Sergeant released a program called "pperl"
> > because he couldn't get speedy to compile on his box, and wasn't sure if
> > it did non-CGI in the first place. If pperl works for people, that's
> > great, but I'm guessing that people would like to know that SpeedyCGI
> > is also an option in case pperl can't do everything they want.
> >
> > I'm planning to change the documentation to make it a little more clear
> > that Speedy works with non-CGI programs. Also, I think it might be a good
> > idea to petition the CPAN maintainers for a second name for SpeedyCGI -
> > SpeedyPerl seems logical. Any opinions?
|
|
From: PW <sub...@me...> - 2002-06-23 11:41:56
|
Neither was I aware that SpeedyCGI supports this feature. A slightly
more extensive documentation would really be great, if your time
permits. My wish list:

- Additional information regarding the Group feature
  (i.e. what defines a Group name or user? Apache's "nogroup" for example?)
- How would you declare a main CGI application and its associated
  modules as a Group?
- Examples of Group configuration strategies and associated benefits
- Configuration examples to pass in mod_speedycgi environment variables

Another question: yesterday I observed a weird behaviour with SpeedyCGI: an environment variable from a static CGI program not running under SpeedyPerl (also hosted under a different domain/IP) managed to slip into the environment of a persistent SpeedyCGI application. Is this possible? Here is some info:

Access log entry for a confused remote robot sending a request to host "megapublic.de" for the page "http://www.ebay.com/" (each access triggers a local static SSI-CGI program (the non-SpeedyPerl one, named clicktracer.cgi)):

p5082fb38.dip.t-dialin.net - - [22/Jun/2002:23:39:49 +0200] "GET http://www.ebay.com/ HTTP/1.1" 200 16575 "-" "Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)" "p5082fb38.dip.t-dialin.net.219521024781986427" 3

Somehow the variable ENV{REQUEST_URI} containing "http://www.ebay.com/" managed to push itself into the %ENV of a SpeedyPerl application named clicktracer.scgi, running persistently on a different domain, "megapublic.com".

Pretty weird, no?

As for the second name, SpeedyPerl sounds ok.

Thanks!

Philippe Wiede
Megapublic Inc.
www.megapublic.com
ClickTracer [tm]

Sam Horrocks wrote:
> Speedy has always supported non-CGI persistent perl programs as a sort
> of secondary development goal. It's never been at the forefront until
> recently. But, for example, the exit-status code that's coming in 2.20
> was written purely for non-CGI programs - only non-CGI programs would
> care about that feature.
>
> I don't think I've communicated the non-CGI capabilities of SpeedyCGI
> very well. For example, Matt Sergeant released a program called "pperl"
> because he couldn't get speedy to compile on his box, and wasn't sure if
> it did non-CGI in the first place. If pperl works for people, that's
> great, but I'm guessing that people would like to know that SpeedyCGI
> is also an option in case pperl can't do everything they want.
>
> I'm planning to change the documentation to make it a little more clear
> that Speedy works with non-CGI programs. Also, I think it might be a good
> idea to petition the CPAN maintainers for a second name for SpeedyCGI -
> SpeedyPerl seems logical. Any opinions?
|
|
From: Sam H. <sa...@da...> - 2002-06-21 15:44:28
|
Speedy has always supported non-CGI persistent perl programs as a sort of secondary development goal. It's never been at the forefront until recently. But, for example, the exit-status code that's coming in 2.20 was written purely for non-CGI programs - only non-CGI programs would care about that feature.

I don't think I've communicated the non-CGI capabilities of SpeedyCGI very well. For example, Matt Sergeant released a program called "pperl" because he couldn't get speedy to compile on his box, and wasn't sure if it did non-CGI in the first place. If pperl works for people, that's great, but I'm guessing that people would like to know that SpeedyCGI is also an option in case pperl can't do everything they want.

I'm planning to change the documentation to make it a little more clear that Speedy works with non-CGI programs. Also, I think it might be a good idea to petition the CPAN maintainers for a second name for SpeedyCGI - SpeedyPerl seems logical. Any opinions?
|
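A non-CGI use of speedy, as described in this message, is just an ordinary perl script whose #! line points at the speedy frontend. The sketch below is a minimal illustration, not from the original thread: the #! path and the -t timeout option follow the examples used elsewhere in this archive, and the lookup-table setup is a hypothetical stand-in for expensive initialization.

```perl
#!/usr/bin/speedy -- -t300
# Hypothetical non-CGI filter: under speedy, the costly setup below is
# paid once per backend process, not once per invocation.
use strict;
use warnings;

# Package variable so the table survives between runs of the same backend.
our %lookup;
# Stand-in for expensive initialization (config parsing, DB connect, ...).
%lookup = map { $_ => 1 } ( 1 .. 100_000 ) unless %lookup;

# Per-invocation work: read lines on stdin, write results to stdout.
while ( my $line = <STDIN> ) {
    chomp $line;
    print exists $lookup{$line} ? "known\n" : "unknown\n";
}
```

Run under plain perl the script behaves identically; under speedy, repeated invocations skip the setup because %lookup persists in the backend between runs.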
|
From: PW <sub...@me...> - 2002-06-15 11:45:24
|
Great news! Thanks!
Philippe Wiede
Megapublic [r], Inc.
www.megapublic.com
Sam Horrocks wrote:
>
> SpeedyCGI 2.20 pre-release 1 is available for testing at:
>
> http://daemoninc.com/SpeedyCGI/download.html
>
> The official 2.20 release should be coming soon, assuming there are no
> major problems with the code.
>
> The changes since version 2.11 are:
>
> ..............
|
|
From: Sam H. <sa...@da...> - 2002-06-15 05:29:11
|
Why not just use suexec/cgiwrap with regular speedy?

> I think the principle of this idea is a good one.
>
> However, for the implementation, I would urge that lessons learned from
> cgiwrap and suexec on Apache be integrated in terms of what setup options
> there are to make them orthogonal to existing packages that seek to deal
> with suid issues and script security.
>
> At 07:56 PM 7/17/2001 +0100, an...@ic... wrote:
> > Hi Sam
> >
> > I am using speedy on my web service provider's system and would like
> > to tighten up the security of my web scripts.
> >
> > I have my own installation of speedy that runs setuid to my user id.
> > This allows my scripts to access my files without making them world
> > writable, but it also means that other users on the system could also
> > access my files by running their scripts from my copy of speedy.
> >
> > I wondered whether it might be possible to change the behaviour of
> > speedy such that if the speedy executable is setuid to a user other
> > than root then the frontend will refuse to execute any script that is
> > (i) not owned by the owner of the speedy executable, or (ii) has the
> > same owner but is world- or group-writable. This behaviour could of
> > course be a compile-time option.
> >
> > What do you think of the idea?
> >
> > Regards
> > Andrew
> > --
> > Andrew Ford, Director Ford & Mason Ltd Tel: +44 1531 829900
> > A....@fo... South Wing, Compton House Fax: +44 1531 829901
> > http://ford-mason.co.uk Compton Green, Redmarley Mobile: +44 7785 258278
> > http://pauntley-press.co.uk Gloucester, GL19 3JB
> > http://refcards.com Great Britain
>
> __________________________________________________
> Gunther Birznieks (gun...@eX...)
> eXtropia - The Open Web Technology Company
> http://www.eXtropia.com/
|
|
From: Sam H. <sa...@da...> - 2002-06-15 01:53:45
|
Most code can be written without using those methods or any other methods in the CGI::SpeedyCGI module.

In SpeedyCGI your perl code is run several times before finally being removed from memory. In normal perl, you just run once, and then the code is removed from memory.

Both of these methods are similar to using "END" statements. If you register a "cleanup" function, it's executed at the end of each run of your perl code. A shutdown handler is run right before your perl code is removed altogether (i.e. after all the runs of your perl code are done, right before your perl code is removed from memory).

> I want to implement CGI::SpeedyCGI, but have some questions on your
> methods, i.e.,
>
> 1. register_cleanup($function_ref)
> 2. add_shutdown_handler($function_ref)
> 3. set_shutdown_handler($function_ref)
>
> Where can I go to learn about how and why I need to cleanup/shutdown, etc.,
> in addition to all of the other techniques I should be using to keep things
> clean? I want to make sure that I am coding correctly before getting too
> far into it!
>
> Thanks in advance.
>
> Gregory
|
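The distinction between the two kinds of handlers can be sketched like this. This is an illustrative example, not from the thread: it assumes the method names quoted in the question (register_cleanup, add_shutdown_handler) and the class-method call style used in the test scripts elsewhere in this archive.

```perl
# Hypothetical sketch of when cleanup vs. shutdown handlers fire.
use strict;
use warnings;
use CGI::SpeedyCGI;

our $runs;    # package variable: persists across runs under speedy

BEGIN {
    # Compiled only once per backend, so the shutdown handler is
    # installed once: it fires right before the code leaves memory.
    CGI::SpeedyCGI->add_shutdown_handler( sub { warn "total runs: $runs\n" } );
}

$runs++;

# Re-registered on each run; fires at the end of THIS run only.
CGI::SpeedyCGI->register_cleanup( sub { warn "run $runs finished\n" } );

print "Content-type: text/plain\n\nrun number $runs\n";
```

Under plain perl the script runs once and both messages appear together; under speedy the "run N finished" line appears after every request, and the "total runs" line only when the backend exits.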
|
From: Sam H. <sa...@da...> - 2002-06-14 22:26:12
|
SpeedyCGI 2.20 pre-release 1 is available for testing at:
http://daemoninc.com/SpeedyCGI/download.html
The official 2.20 release should be coming soon, assuming there are no
major problems with the code.
The changes since version 2.11 are:
- Exit status is now passed from the backend to the frontend.
The frontend now exits soon after the backend exits instead of
when all sockets close.
- Fixed bug where alarms were unusable from within perl code.
- Signal handling in the backend has been cleaned up. Signal
settings will no longer be altered between perl runs.
- Find scripts by dev/ino/group-name instead of by dev/ino
- In the "speedy" executable buffering has been changed so
that BufsizGet and BufsizPut are now maximum values instead of
absolute values. The buffers will start small and grow
to this value if needed. The default values for these options
have been increased.
- Backend allocation is now controlled better. Another backend isn't
allocated until the previous backend has had a chance to start.
This should reduce the tendency to use too many backends when the
load fluctuates.
- Initially compiled perl-code is now shared among interpreters
within the same group (or same script if not using groups).
- To implement the new shared perl code and exit status features,
an extra parent process is created for each group (or for each
script, if not using groups). This process should use very little
cpu or un-shared memory.
- New code provides doubly linked lists for all slots in the
temp file. This eliminates some of the more obscure linked list
code in various places, and enables some minor performance
improvements.
|
|
From: List M. <li...@iw...> - 2002-05-22 23:53:46
|
I want to implement CGI::SpeedyCGI, but have some questions on your methods, i.e.,

1. register_cleanup($function_ref)
2. add_shutdown_handler($function_ref)
3. set_shutdown_handler($function_ref)

Where can I go to learn about how and why I need to cleanup/shutdown, etc., in addition to all of the other techniques I should be using to keep things clean? I want to make sure that I am coding correctly before getting too far into it!

Thanks in advance.

Gregory
|
|
From: Sam H. <sa...@da...> - 2002-04-18 16:46:36
|
If you have lots of different cgi's run under the same user-id, SpeedyGroup will help with the memory sharing. It lets you run multiple cgi's in one process. The default is no-grouping.

I think what you're looking for is something like this:

http://sourceforge.net/tracker/index.php?func=detail&aid=226647&group_id=2208&atid=102208

I don't think pre-load will ever work across different users though.

> Hello Sam,
>
> First off, many thanks for providing and maintaining the excellent
> SpeedyCGI module. I think it's a super alternative to the heavy ModPerl.
>
> A question came up while I tested a rather code-rich search engine using
> SpeedyCGI as back-end in live usage. First, my impression was that
> SpeedyCGI shares the same CGI application and all of its loaded modules
> between different user processes, except that it forks off new Perl
> interpreters for each new process.
>
> However, I discovered that SpeedyCGI sets up a completely new
> environment for each new PID, meaning that the startup penalty for
> loading large Perl applications (including modules) applies each time here.
>
> Question: am I missing something here, or is there an option to allow
> SpeedyCGI to compile the code of one particular application and share
> it across multiple user processes? The SpeedyGroup directive seems
> rather to provide an option to allow one Perl interpreter to run
> different applications per one unique user process. Or am I totally off
> the mark here?
>
> Thanks!
>
> Best regards,
>
> Philippe Wiede
>
> Megapublic [r], Inc.
> www.megapublic.com
|
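A grouping setup of the kind described here might look like the sketch below. This is a hypothetical illustration: the group name "search" is made up, and the assumption that the Group option is spelled -g after the "--" on the #! line should be checked against the CGI::SpeedyCGI documentation before use.

```perl
#!/usr/bin/speedy -- -gsearch
# Hypothetical: a second script (say, results.pl) using the same
# "-gsearch" group would run inside the same backend process as this
# one, sharing its initially compiled perl code (per the 2.20 notes
# elsewhere in this archive).
use strict;
print "Content-type: text/plain\n\nserved from the 'search' group\n";
```

Scripts with no group option keep the default behaviour (one backend per script); grouping only helps when the scripts run under the same user-id, as noted above.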
|
From: PW <ph...@me...> - 2002-04-12 08:27:24
|
Hello Sam,

First off, many thanks for providing and maintaining the excellent SpeedyCGI module. I think it's a super alternative to the heavy ModPerl.

A question came up while I tested a rather code-rich search engine using SpeedyCGI as back-end in live usage. First, my impression was that SpeedyCGI shares the same CGI application and all of its loaded modules between different user processes, except that it forks off new Perl interpreters for each new process.

However, I discovered that SpeedyCGI sets up a completely new environment for each new PID, meaning that the startup penalty for loading large Perl applications (including modules) applies each time here.

Question: am I missing something here, or is there an option to allow SpeedyCGI to compile the code of one particular application and share it across multiple user processes? The SpeedyGroup directive seems rather to provide an option to allow one Perl interpreter to run different applications per one unique user process. Or am I totally off the mark here?

Thanks!

Best regards,

Philippe Wiede

Megapublic [r], Inc.
www.megapublic.com
|
|
From: Haviland, M. <mar...@nw...> - 2002-04-10 19:35:27
|
Hi All,
Has anyone gotten speedy to work on AIX? When I try to do a 'make test' it fails with:
.
.
.
No tests defined for speedy_backend extension.
PERL_DL_NONLAZY=1 SPEEDY=/weblogs/markh/work/perl.5.6/mods/CGI-SpeedyCGI-2.11/speedy/speedy SPEEDY_BACKENDPROG=/weblogs/markh/work/perl.5.6/mods/CGI-SpeedyCGI-2.11/speedy_backend/speedy_backend SPEEDY_MODULE=/weblogs/markh/work/perl.5.6/mods/CGI-SpeedyCGI-2.11/mod_speedycgi/mod_speedycgi.so /usr/local/bin/perl5.6.1 -I../blib/arch -I../blib/lib -I/usr/local/lib/perl56/5.6.1/aix-ld -I/usr/local/lib/perl56/5.6.1 -e 'use Test::Harness qw(&runtests $verbose); $verbose=0; runtests @ARGV;' t/*.t
t/basic1............Could not load program /weblogs/markh/work/perl.5.6/mods/CGI-SpeedyCGI-2.11/speedy/speedy:
The program does not have an entry point or
the o_snentry field in the auxiliary header is invalid.
.
.
.
.
Any ideas?
thanks,
-Mark
|
|
From: Sam H. <sa...@da...> - 2002-03-31 09:33:14
|
Here's a patch for the 2.11 source. The old code simply split up all the
args on whitespace. The new code only splits the args that contain perl
and speedy options.
*** /tmp/CGI-SpeedyCGI-2.11/src/speedy_opt.c Mon Mar 19 03:55:29 2001
--- speedy_opt.c Sun Mar 31 01:00:24 2002
***************
*** 128,164 ****
StrList *speedy_opts, StrList *script_args
)
{
! StrList split;
! char **p;
/* Arg-0 */
if (arg0)
*arg0 = speedy_util_strdup(*in);
++in;
! /* Split on spaces */
! strlist_init(&split);
! strlist_split(&split, in);
! p = strlist_export(&split);
!
! /* Perl args & Speedy options */
! for (; *p && **p == '-'; ++p) {
! if (p[0][1] == '-' && p[0][2] == '\0') {
! for (++p; *p && **p == '-'; ++p)
! strlist_append(speedy_opts, *p);
break;
- } else {
- strlist_append(perl_args, *p);
}
- }
! /* Script argv */
! if (script_args) {
! for (; *p; ++p)
! strlist_append(script_args, *p);
}
!
! strlist_free(&split);
}
--- 128,187 ----
StrList *speedy_opts, StrList *script_args
)
{
! int doing_speedy_opts = 0;
/* Arg-0 */
if (arg0)
*arg0 = speedy_util_strdup(*in);
++in;
! for (; *in; ++in) {
! char **p;
! StrList split;
!
! /* Split on spaces */
! {
! const char *temp[2];
!
! temp[0] = *in;
! temp[1] = NULL;
! strlist_init(&split);
! strlist_split(&split, temp);
! p = strlist_export(&split);
! }
!
! /*
! * If there are no options in this arg, give the whole unsplit
! * piece to the script_argv.
! */
! if (!*p || **p != '-') {
! strlist_free(&split);
break;
}
! /* Perl args & Speedy options */
! for (; *p && **p == '-'; ++p) {
! if (!doing_speedy_opts)
! if ((doing_speedy_opts = (p[0][1] == '-' && p[0][2] == '\0')))
! ++p;
! if (*p)
! strlist_append(doing_speedy_opts ? speedy_opts : perl_args, *p);
! }
!
! if (*p) {
! ++in;
! /* Give the remaining non-options in this arg to the script */
! if (script_args)
! strlist_concat2(script_args, (const char * const *)p);
! strlist_free(&split);
! break;
! }
! strlist_free(&split);
}
!
! /* Take the remaining args (without splits) and give to script_args */
! if (script_args)
! strlist_concat2(script_args, (const char * const *)in);
}
>
> I am using speedy with a non-CGI script and the parameters are getting
> corrupted.
>
> i.e: what should be one single parameter is being broken up into many
> seperate parameters.
>
> has anyone else had this problem?
>
> (I have worked around this by using <STDIN> to pass the data into my
> script, but I would ideally like to use command-line parameters).
>
> thanks, mark.
>
|
|
From: Sam H. <sa...@da...> - 2002-03-30 17:10:28
|
I tried a one-line "exit" program and it doesn't hang, so there's probably
something else going on.
One possible cause is that you forked off another process that still has
stdin or stdout open. For example, this program will hang under speedy
but not under perl:
system("sleep 300 &");
exit();
A workaround would be:
if (fork == 0) {
    close(STDOUT);
    close(STDIN);
    close(STDERR);
    exec("sleep 300");
}
exit();
Currently the frontend exits only when all communication on the stdio
sockets ceases.
>
> I am using speedy with a non-CGI script, and it is hanging after the
> exit().
>
> has anyone else encountered this?
>
> thanks, mark.
>
|