Recently I have been getting very bad performance (around 1400 KB/s) over the network (100base-T) with dump and restore (dump 0.4b40, using libext2fs 1.37 of 21-Mar-2005, or 0.4b37, using libext2fs 1.35 of 28-Feb-2004). Any other software fills the network without problems. I have tested various configurations (from/to RH9, Fedora, CentOS) and I'm always at ~1400 KB/s. Locally (12-15 MB/s) and to tape (3-4 MB/s on DDS-4) there is no problem, but over the network (user@remote:/file.., user@remote:/dev/null, user@remote:/dev/tape, etc.) it's always ~1400 KB/s. If I start two dump instances at the same time to a remote server, each dump runs at around 1400 KB/s. I have tried various block sizes, with no effect. SSH isn't the bottleneck, because I get 10 MB/s with: cat /dev/zero | ssh user@remote 'cat - > /dev/null'
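For what it's worth, the raw throughput of a pipe like that can be measured precisely with dd, which prints a rate when it finishes. A minimal sketch; `user@remote` stands for whatever test host you use:

```shell
# Local baseline: push 100 MB through a pipe and let dd report the rate.
dd if=/dev/zero bs=64k count=1600 2>/tmp/ssh-rate.log | cat > /dev/null
tail -1 /tmp/ssh-rate.log
# The network version of the same test (replace the local cat):
#   dd if=/dev/zero bs=64k count=1600 | ssh user@remote 'cat > /dev/null'
```

Comparing the local and remote rates from the same dd invocation makes it easier to see exactly where the throughput drops.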
I use... for 0.4b40:
export RMT='/usr/local/sbin/rmt' ;
export RSH='/usr/bin/ssh' ;
/usr/local/sbin/dump 0 -b 64 -f user@remote:/file.. /
for 0.4b37 (default config from rpm):
export RMT='/sbin/rmt' ;
export RSH='/usr/bin/ssh' ;
/sbin/dump 0 -b 64 -f user@remote:/file.. /
dump -0 -f - / | ssh user@remote 'cat - > /tmp/test' ?
If dump works fine locally and ssh isn't a bottleneck, then the above should run at maximum speed. Does it?
Thanks for your reply!
As expected, I get a stable ~10 MB/s without any problem.
Are you able to reproduce the slow performance when using rmt over the network?
Don't hesitate to ask if I can do anything to help resolve the (my?) *problem*.
I did some quick testing and I get about 3 MB/s when piping into ssh, but only 1 MB/s when using rmt (over ssh).
stracing the ssh process shows two things:
* the block size is forced to 16 KB by ssh;
* when piping, the writing is done in consecutive 16 KB writes, but when using rmt there are also smaller writes corresponding to the rmt commands.
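The cost of smaller write sizes can be seen even locally with a rough sketch like the one below (exact numbers vary by machine, and both runs will be far faster than a 100 Mbit link; the point is only that per-write overhead is visible at the smaller size):

```shell
# Push a stream through a pipe with two different write sizes.
# dd reports a transfer rate for each run on the last line of its output.
for bs in 16k 64k; do
    dd if=/dev/zero bs=$bs count=2000 2>/tmp/dd-$bs.log | cat > /dev/null
    echo "write size $bs:"; tail -1 /tmp/dd-$bs.log
done
```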
I didn't expect those things to have such an influence on the speed... I need to look a bit closer at all this.
Anyway, I suspect the performance with a plain rsh in place of ssh should be much better...
You are right about rsh... with RSH="/usr/bin/rsh" I'm getting the maximum speed. The weird part is that previously I was getting maximum speed even with ssh. Could it be kernel-related? I will try rolling back to an earlier kernel version, just in case, since I moved from 2.4.30 to 2.4.31 at the beginning of the month...
I would say this is more likely caused by ssh than by the kernel. Did you upgrade ssh (or change preferences like crypto algorithms, etc.) lately?
The old kernel doesn't change the performance... I will try a fresh distribution installation to see what change is needed to reproduce the slowdown.
As for OpenSSH: same old version (RH9), without any configuration change since the slowdown...
Maybe something changed on the remote side?
Long time no see... ;-)
Yesterday I launched gnome-terminal and discovered that if I execute dump (or my dump script) from gnome-terminal, I get full speed on a remote backup using ssh... but from any /dev/tty, or via SSH, I'm stuck at 1.4 MB/s.
Server A -> Server B (with DLT 7000)
Here are some combinations and their results:
Any TTY on Server A -> dump -f server_b:/dev/... => 1.4 MB/s
Any TTY on Server B -> ssh user@server_a -> dump -f server_b:/dev/... => 1.4 MB/s
Gnome on Server A -> gnome-terminal -> dump -f server_b:/dev/... => full speed
Gnome on Server B -> gnome-terminal -> ssh user@server_a -> dump -f server_b:/dev/... => full speed
X11.app (MacOSX) -> ssh user@server_a -> dump -f server_b:/dev/... => 1.4 MB/s
X11.app (MacOSX) -> ssh -X user@server_a -> dump -f server_b:/dev/... => full speed
Terminal.app (MacOSX) or any TTY on Server B -> ssh user@server_a -> dump -f server_b:/dev/... => 1.4 MB/s
Terminal.app (MacOSX) or any TTY on Server B -> ssh -X user@server_a -> dump -f server_b:/dev/... => 1.4 MB/s
The export command returns roughly the same values in all cases. Any idea for the next step in my quest? ;-)
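Since export alone looks the same everywhere, it may be worth diffing the complete environment and the tty settings between a fast terminal and a slow one. A minimal sketch, with arbitrary file names:

```shell
# Run this in the fast terminal first (file names are arbitrary):
env | sort > /tmp/env.fast
stty -a > /tmp/stty.fast 2>/dev/null || true
# Then repeat in the slow terminal as /tmp/env.slow and /tmp/stty.slow, and:
#   diff /tmp/env.fast /tmp/env.slow
#   diff /tmp/stty.fast /tmp/stty.slow
```

Any difference that shows up only in the slow terminals (DISPLAY, SSH_* variables, tty modes) would be a candidate for the culprit.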
I really fail to see what gnome-terminal has to do with dump's speed...