[Speedycgi-users] Possible ALARM bug report and a feature wish
From: PW <sub...@me...> - 2002-10-16 21:28:15
Hi Sam,
First off, thanks a lot for the newest SpeedyCGI release.
While beta-testing a new session-tracking Web application that runs
persistently in the background via SpeedyCGI (using the Group directive)
across a couple of virtual hosts, I noticed one particular problem and
one particular issue that results in a feature request:
1. Problem:
An ALARM inside an eval kills the SpeedyCGI process every time, causing
a full reload of my application on the next request. This happens both
in a module (if a WHOIS timeout error is raised) and in an internal
subroutine (where the eval is almost always successful). I know that the
SpeedyCGI changelog lists this issue as fixed, but it does not seem to
be. Maybe it has to do with the Group feature?
Example code:
sub xxx {
    my ($timeout, $missed, $currtime, $sema) = (2, 0, time, "${db}.LCK");
    eval {
        local $SIG{ALRM} = sub { die 'lock busy!'; };
        alarm $timeout;
        while (1) {
            last if sysopen(LF, $sema, O_RDWR|O_CREAT, 0644)
                and flock LF, LOCK_EX;
            do { print STDERR "$$ lock busy\n"; alarm 0; last; }
                if $missed++ > 5 && !-f $sema;
            sleep 1;
            do { print STDERR "$$ lock timeout\n"; alarm 0; last; }
                if time > $currtime + 15;
        }
    };
    alarm 0;
    if ( $@ =~ /lock busy!/ ) { print STDERR "timeout: $@\n"; close LF; }
    # ... further code accesses a Berkeley DBM database ...
}
Result: a timeout signal causes the process to die after the HTTP request.
Server details:
- SpeedyCGI 2.21
- Apache 1.36 Linux with mod_speedycgi enabled
- Perl 5.6.1
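For what it's worth, the problem can be seen without any of the locking
logic above. The following minimal, self-contained sketch (using only the
standard eval/alarm idiom; the counter variable is just for illustration)
should keep incrementing $runs across requests if the backend survives
the signal; on my setup the count never climbs above 1:

```perl
#!/usr/bin/speedy
use strict;

my $runs = 0;   # persists across requests while the backend stays alive

eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm 1;
    sleep 3;    # deliberately overrun the alarm so the handler fires
    alarm 0;
};
alarm 0;        # belt and braces: cancel any pending alarm

print "Content-type: text/plain\n\n";
print "run ", ++$runs, ", eval said: $@";
```

If the signal handling were working as the changelog suggests, the eval
would trap the die and the backend would go on to serve the next request.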
2. Feature wish:
SpeedyCGI provides add_shutdown_handler for post-processing when the
backend terminates, and register_cleanup for doing the same after each
request. However, in cases such as handling user sessions in a CGI
environment with the SpeedyCGI Group directive, a single process may
serve several users, holding their ongoing session data in memory.
Other processes come in, even launched by the same Web visitor (a user
reloading a page or requesting the next page within a short time, etc.),
which is a problem in itself (a special global memory space, shared
between the active Group processes via the backend, would be great
indeed!). Yet the shutdown handler does not come into action when an
individual process terminates, only when the backend itself shuts down.
So my urgent feature wish is a "pid_shutdown_handler" that, in my
current case, would write any in-memory per-process session data to
disk. Would this be doable?
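In the meantime I am working around it roughly as sketched below, by
flushing after every request via register_cleanup as well as at backend
shutdown (both are existing CGI::SpeedyCGI methods; the %sessions hash
and flush_sessions() are hypothetical stand-ins for my application's
state). A real per-process hook would make the per-request flush
unnecessary:

```perl
use strict;
use CGI::SpeedyCGI;

my %sessions;    # per-process, in-memory session store (illustrative)

sub flush_sessions {
    # hypothetical: serialize %sessions to the Berkeley DBM on disk
    # so nothing is lost if this particular process goes away
}

my $sp = CGI::SpeedyCGI->new;
$sp->register_cleanup(\&flush_sessions);        # after each request
$sp->add_shutdown_handler(\&flush_sessions);    # when the backend exits
```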
Thanks!
Philippe Wiede
megapublic inc.