Re: [JSch-users] Use of sftp
From: Paul E. <Pau...@gm...> - 2011-07-14 23:45:37
Håkon Sagehaug wrote:
> We use JSch as the library to download files over sftp in our system.
> Input for the sftp module is a folder on a remote host; these folders
> often contain a lot of subfolders with many small files, and the total
> amount of data can be a few hundred GB. Right now I have just a very
> standard way of downloading each file from all the subfolders on the
> host. So I was wondering if there are any tricks I could use to speed
> things up, e.g. some sort of batch download of a directory. After a
> file has been downloaded, we add it to a tar archive. I guess I could
> use 'scp -r'; would there be any implications with respect to
> performance etc.?

scp is not really fast, as it needs a round trip between each file, even
with `-r`. In theory, sftp can be faster by requesting multiple files in
parallel, but I'm not sure whether (or how) the JSch implementation can
do this.

If you have shell/exec access to the remote system (not only sftp), I
think the fastest approach would be to create the tar file there and
transfer it in one go. You don't even need to save it as a file and
transfer it via sftp: simply write the tar to standard output, which can
then be piped to a file on the client side. Your command (in an exec
channel) would be:

    tar -c -f- folder

If you want to gzip it immediately, add -z. If you don't, you might
consider enabling transport-level compression in JSch instead (see the
examples).

Paŭlo
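P.S. A rough sketch of the streaming approach from the shell side, for
reference. The ssh invocation in the comment is the command-line
equivalent of what a JSch exec channel would run (user, host, and folder
names are placeholders); the local commands below it just demonstrate
that tar writes a perfectly good archive to stdout instead of a file:

```shell
# Remote equivalent (placeholders; JSch's exec channel does the same job):
#   ssh user@host 'tar -czf - folder' > folder.tar.gz

# Local demonstration of tar streaming to stdout:
mkdir -p demo/sub
echo hello > demo/sub/file.txt
tar -czf - demo > demo.tar.gz   # archive goes to stdout, piped to a file
tar -tzf demo.tar.gz            # list the archive contents to verify
```

One round trip transfers the whole tree, which is where the win over
per-file scp/sftp transfers comes from when there are many small files.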