I've coded up a nice pure Java file transfer utility for my application using JSch. It works well, has neat configuration, auto backup of successfully transferred files, auto retry, and good logging. There's only one problem: it runs slower than a lame dog after a heavy meal.
The basic code looks like this (trimmed down; the elided upload/rename steps are sketched back in, and src, tmp, target and bytesSent stand in for the real variables):
ChannelSftp sftp = getSftpChannel(dest);
long startTime = System.currentTimeMillis();
sftp.put(src, tmp);  // upload under a temporary name
long endTime = System.currentTimeMillis();
// delete the target file if it already exists,
// or else the rename will fail
try { sftp.rm(target); } catch (SftpException e) { /* no existing file */ }
sftp.rename(tmp, target);
long kiloBytes = bytesSent / 1024;
long transferRate = kiloBytes * 1000 / (endTime - startTime);
log.log(Priority.DEBUG, "Copied " + kiloBytes + "Kb at "
    + transferRate + "Kbps.");
So as you can see, the transfer code is nothing special. However, here are the transfer times I get with the big files I'm moving around:
Copied 42449Kb at 156Kbps.
Copied 42449Kb at 154Kbps.
Copied 42449Kb at 69Kbps.
Copied 42449Kb at 152Kbps.
Copied 42449Kb at 122Kbps.
Copied 42449Kb at 68Kbps.
(Load average during the slowest transfers was 5.11, 5.15, 4.04; the other transfers ran under less load.)
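For scale, here is the per-file wall time those rates imply (simple arithmetic sketched in Java; the sizes and rates come from the log lines above, plus the ~1.5 MB/s that scp reports further down):

```java
// Wall time implied by a logged transfer size and rate.
public class TransferMath {
    // whole seconds to move `kb` kilobytes at `kbps` KB/s
    public static long secondsFor(long kb, long kbps) {
        return kb / kbps;
    }

    public static void main(String[] args) {
        System.out.println(secondsFor(42449, 156));  // best Java run: 272 s
        System.out.println(secondsFor(42449, 68));   // worst Java run: 624 s
        System.out.println(secondsFor(42449, 1536)); // scp at ~1.5 MB/s: 27 s
    }
}
```

So each Java transfer is taking four to ten minutes for a file scp moves in under half a minute.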
Now considering that both boxes are on the same LAN, the transfer rates are pretty low. When I then try to copy one of the same files across using scp on the command line:
myuser@...$ time scp 050228112725.csv dlsigma02:
to the list of known hosts.
050228112725.csv 100% 41MB 1.5MB/s 00:27
I'm willing to accept that encryption in Java won't be as fast as native code, but the ten times difference looks rather large. How can I find out what the bottlenecks are, or can anyone suggest ways to improve the performance?
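One way I could start narrowing it down is to log the throughput as the transfer runs, to see whether it is uniformly slow or stalling in bursts. A minimal sketch (RateTracker is a hypothetical helper, not part of JSch; its add() could be called from the count() callback of JSch's SftpProgressMonitor interface):

```java
// RateTracker: accumulates byte counts against wall-clock time so the
// running throughput can be logged from a transfer progress callback.
// Hypothetical helper -- wire add() into SftpProgressMonitor.count().
public class RateTracker {
    private long bytes;
    private final long startMillis;

    public RateTracker(long startMillis) {
        this.startMillis = startMillis;
    }

    // record another chunk of transferred bytes
    public void add(long byteCount) {
        bytes += byteCount;
    }

    // average throughput so far, in KB/s (0 before any time has passed)
    public long kbPerSecond(long nowMillis) {
        long elapsed = nowMillis - startMillis;
        if (elapsed <= 0) return 0;
        return (bytes / 1024) * 1000 / elapsed;
    }

    public static void main(String[] args) {
        RateTracker t = new RateTracker(0);
        t.add(10 * 1024 * 1024);                 // 10 MB transferred...
        System.out.println(t.kbPerSecond(5000)); // ...in 5 s -> 2048 KB/s
    }
}
```

If the rate is steady from the first second, the cost is probably per-byte (cipher, stream copying); if it starts fast and collapses, something else (GC, the backup step, the remote disk) is getting in the way.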
myuser@...$ java -version
java version "1.4.2_02"
Java(TM) 2 Runtime Environment, Standard Edition (build
Java HotSpot(TM) Client VM (build 1.4.2_02-b03, mixed mode)
myuser@...$ ssh -v
OpenSSH_3.7.1p2, SSH protocols 1.5/2.0, OpenSSL 0.9.7b 10 Apr
Usage: ssh [options] host [command]