Hello, I've been trying to set up a unicast transfer between two locations with an average 70 ms ping and up to 160 Mbit/s of bandwidth. The issue I have is a lot of NAKs, and the client keeps sending notifications back to the server long after the server has stopped sending sections. I assume this may have to do with timing, but I have no clue where to start debugging this. If I use congestion control, the transfer speed is way too low. Running with the exact same parameters, a 400 MB file transfers fine, while a 34 GB file just keeps doing several passes until it aborts because of a timeout.
Thanks for the help
With the large file size, it's possible that disk I/O could be a limiting factor. You could try increasing the lower limit of the round trip time to give more time to write to disk and respond to requests by using the -r option to the server, or you could try adjusting the cache size using the -c option on the client.
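As a sketch, the suggestion above might look something like this on the command line. The flag argument formats and the values here are illustrative (the `-r` GRTT syntax and `-c` cache units vary by UFTP version), so verify them against the `uftp` and `uftpd` man pages for your build:

```shell
# Server side: raise the GRTT bounds so the client has more time to flush
# to disk and respond. Illustrative init:min:max values, in seconds.
uftp -r 0.5:0.2:10 /path/to/largefile.bin

# Client side: enlarge the disk cache. Illustrative size; confirm the
# accepted units in your version's man page.
uftpd -c 20971520
```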
Increasing the lower limit helped to get a more stable connection, but I'm getting many NAKs across the transfer, and when it gets to the second pass it runs very fast and breaks with the error "Transfer aborted by 0xnnnnnnnnn: Transfer timed out". Do I need to increase it more? I raised the lower limit from 0.05 to 0.2; would you suggest going way up?
You could try bringing up the minimum RTT more, or (given the number of NAKs) you could also try reducing the rate.
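For instance, something along these lines might be worth trying. The values are illustrative, and the `-R` (transmit rate, in Kbps) and `-r` (GRTT bounds) argument formats should be checked against your version's `uftp` man page:

```shell
# Raise the minimum GRTT a bit further, and cap the sending rate below
# the measured bottleneck (e.g. ~80 Mbit/s instead of 160) to reduce NAKs.
uftp -r 0.5:0.3:10 -R 80000 /path/to/largefile.bin
```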