Re: [Ssh-sftp-perl-users] Question to BatchMode
From: Stefan A. <ste...@gm...> - 2009-02-18 09:25:28
From Russ Brewer:
> Stefan,
>
> eval {
>     my ($stdout, $stderr, $exit) = $ssh->cmd(BashScript);
>
>     @stdout_result = split(/\n/, $stdout);
>     @stderr_result = split(/\n/, $stderr);
>     $BashScript_exit_status = $exit;
> };
>
> if (($BashScript_exit_status != 0) || ($@)) {
>     # then there was a problem
>
>     if ($@) {
>         print "\$@ = $@\n";
>     } else {
>         print "ERROR: remote script exit status = $BashScript_exit_status\n";
>     }
>
>     foreach (@stderr_result) {
>         print "$_\n";
>     }
>
> } else {
>
>     # No error was detected
>     foreach (@stdout_result) {
>         print "$_\n";
>     }
> }
>
The script idea itself works, but it does not solve my problem. I will
explain my problem again in detail at the bottom.
> I have not used the register_handler command because the above code gets me
> what I need.
> Be sure to program BashScript to exit with a non-zero value if it encounters
> an error while running on the remote server.
>
> You can also program BashScript to write nothing to STDERR when it runs
> without problems, and then test @stderr_result for content. If content
> exists, something went wrong. The exception might be an FTP command, which
> uses the STDERR channel for routine FTP communication. I like to write
> BashScript to log its own errors remotely and then have it check that log
> for size: if the remote error log holds content, terminate BashScript with
> exit(1) instead of exit(0).
>
> I hope this helps. It works for me.
>
>
> Russ Brewer
>
>
>
Original Post
>>
>> I'm running Net::SSH::Perl 1.34.
>>
>> I execute a bash-script via ssh on a remote server.
>>
>> I register two handles for stdout and stderr:
>>
>> $ssh->register_handler(
>> "stdout",
>> sub {
>> my ( $channel, $buffer ) = @_;
>> print $buffer->bytes;
>> print LOGFILE $buffer->bytes;
>> }
>> );
>> $ssh->register_handler(
>> "stderr",
>> sub {
>> my ( $channel, $buffer ) = @_;
>> print $buffer->bytes;
>> print LOGFILE $buffer->bytes;
>> }
>> );
>>
>> Then I send my command with:
>>
>> $ssh->cmd("BashScript");
>>
>> and wait until it finishes.
>>
>> The question I have is:
>>
>> How do I detect a connection loss?
>>
>> The script on the remote server may take a very long time without any
>> activity on the console.
>>
>> I have tried invoking my connection with options => [
>> "ServerAliveInterval 15", "ServerAliveCountMax 10" ] and with options =>
>> [ "BatchMode yes" ]. However, whenever I test it by pulling out the
>> network cable while the script runs, it just sits there. The longest I
>> have waited is 45 minutes before aborting manually.
>>
>> Shouldn't the connection be checked every 300 seconds with "BatchMode
>> yes"?
>>
>> Any help would be appreciated.
>>
>>
>
>
Okay:
I have two machines. One runs the Perl script. The script connects
through ssh and runs a bash script on the second machine. This bash
script can run quite some time without giving back any feedback, but
it still runs.
Here is an example of how I would run it directly from the command line with ssh:
ssh -o "ServerAliveInterval 15" user@2ndMachine /home/user/BashScript.sh
One thing that is important for me is to detect whether I'm still
connected to the server. If I run it from the command line as described
and pull the network cable on the second machine, I get a timeout
after some time. This is the desired behavior for me.
I have tried the same thing with a Perl script. Here's an example script:
#!/usr/bin/perl -w
use strict;
use warnings;
use Net::SSH::Perl;

{
    my $ssh = Net::SSH::Perl->new( "2ndMachine", protocol => 2,
                                   options => [ "ServerAliveInterval 15" ] );
    $ssh->login("user", "password");

    my @stdout_result;
    my @stderr_result;
    my $BashScript_exit_status;

    eval {
        my ($stdout, $stderr, $exit) =
            $ssh->cmd("/srv/import/bin/backup_topsis.sh");
        @stdout_result = split(/\n/, $stdout);
        @stderr_result = split(/\n/, $stderr);
        $BashScript_exit_status = $exit;
    };

    if (($BashScript_exit_status != 0) || ($@))
    # then there was a problem
    {
        if ($@)
        {
            print "\$@ = $@\n";
        }
        else
        {
            print "ERROR: remote script exit status = $BashScript_exit_status\n";
        }
        foreach (@stderr_result)
        {
            print "$_\n";
        }
    }
    else
    {
        # No error was detected
        foreach (@stdout_result)
        {
            print "$_\n";
        }
    }
}
In this case, however, I do not get a timeout if I pull the network
cable on the second machine. The script will just sit there forever.
It is important for me to detect network errors, and in this case I
cannot detect them.
I do not have a problem detecting the exit code of the remote script.
That is already possible. I have used the handler method (see first
post) because I wanted continuous output of the remote script's
progress.
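Since the handlers already fire on every chunk of remote output, one hedged sketch for detecting a dead connection is an inactivity watchdog: arm an alarm() timer and re-arm it from each handler callback, so SIGALRM only fires after a long silence. The 120-second timeout and the commented usage are assumptions, and alarm() may not interrupt every blocking read on every platform:

```perl
use strict;
use warnings;

# Inactivity watchdog sketch: run $code, but die if it blocks for more
# than $timeout seconds without alarm() being re-armed from inside
# (e.g. from a register_handler callback on each chunk of output).
sub with_watchdog {
    my ($timeout, $code) = @_;
    my $result = eval {
        local $SIG{ALRM} =
            sub { die "watchdog: no activity for ${timeout}s\n" };
        alarm($timeout);
        my $r = $code->();
        alarm(0);            # disarm on normal completion
        $r;
    };
    alarm(0);
    die $@ if $@;            # re-throw the timeout (or any other error)
    return $result;
}

# Hypothetical use with Net::SSH::Perl (handler names are from the
# original post; the timeout value is an assumption):
#
# $ssh->register_handler("stdout", sub {
#     my ($channel, $buffer) = @_;
#     alarm(120);                     # re-arm on every chunk of output
#     print $buffer->bytes;
# });
# my $ok = eval { with_watchdog(120, sub { $ssh->cmd("BashScript") }); 1 };
# print "connection considered dead: $@" unless $ok;
```

This only detects silence, not the connection state itself, so it suits remote scripts that produce at least occasional progress output within the chosen timeout.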
If anyone has an idea of how I can detect such network errors, that would be great.
Stefan