Well, I figured out how to change the buffer size (using do_read instead
of get), but it didn't really affect the transfer time much. Is it
possible that Net::SFTP (or possibly the underlying Net::SSH::Perl) just
isn't good for transferring large files? Please let me know if anyone has
had experience that would indicate otherwise; I'd truly love to be wrong
about this...
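
For anyone searching the archives later, here's roughly the loop I mean.
This is a minimal sketch only: the host, user, and file paths are
placeholders, and the 64 KB chunk size is just an example (the stock
get() reads 8192 bytes per request).

#!/usr/bin/perl
use strict;
use warnings;
use Net::SFTP;
use Net::SFTP::Constants qw( SSH2_FXF_READ );

# Placeholder host/user/paths; the chunk size is an example value.
my $chunk = 64 * 1024;
my $sftp  = Net::SFTP->new('remote.example.com', user => 'dan');

# Open the remote file read-only and copy it down in $chunk-byte reads.
my $handle = $sftp->do_open('/remote/bigfile', SSH2_FXF_READ)
    or die "do_open failed";

open my $local, '>', 'bigfile.local' or die "open: $!";
binmode $local;

my $offset = 0;
while (1) {
    # do_read returns the data read, or undef on EOF (or error).
    my($data) = $sftp->do_read($handle, $offset, $chunk);
    last unless defined $data;
    print $local $data;
    $offset += length $data;
}

$sftp->do_close($handle);
close $local;

Note that each do_read is still a full request/response round trip, and
Net::SSH::Perl's pure-Perl crypto is slow on its own, which may be why a
bigger chunk didn't buy me much.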
> From: Daniel Werner <daniel@ds...>
> Net::SFTP copying in 8192-byte chunks?
> 2005-03-08 08:40
>
> Hi, I"m trying to use Net::SFTP to set up an automated sftp of about =
35 GB
> (yes, that"s GB) worth of data to run once a week between 2 Solaris
> machines. I tried a bare bones test of a 1 GB file using the "get"
method,
> but it was incredibly slow. It might have finished after a few days,
where
> the openssh version of the (interactive) sftp command took only a few
> minutes to copy the entire thing.
>
> I turned on the verbose option for Net::SFTP and saw that it was
copying
> only 8192-byte chunks at a time, and I think that's why it was slow.
How
> does one raise this value? I looked in the CPAN docs but couldn't
figure it
> out.
>
> Thanks for any help,
> Dan
>