Hi,
Wondering if anyone has any experience tuning for transfers across a 10GbE network.
Currently I'm just testing with one client and one server, and I'm getting lots of NAKs when letting uftp run as fast as it can; I get about 1 Gbps.
If I set the max rate lower (the maximum value appears to be the max of a signed int; anything larger overflows and goes negative), I get essentially zero NAKs and a steady transfer rate of 2 Gbps.
I'm using a block size of 8500 and setting the cache size on both ends to the maximum.
You are correct. UFTP currently uses an int to keep track of the rate, so the maximum value it can currently support is roughly 2.1 Gbps.
At that rate, even with a block size of 8500 there's only about a 4 microsecond delay between packets, and a microsecond is the smallest granularity it can pause for, so there may be some peculiar behavior around the delay at those speeds (the sketch after this message puts rough numbers on the granularity problem). Fixing it might require sending multiple packets between waits. In any case, I'll work on supporting a 64-bit value for the rate.
In the meantime you can try using -1 for the rate, which means no delay between sending each packet, so the sender will send as fast as the network interface will allow.
Last edit: Dennis Bush 2017-03-14
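To put rough numbers on the granularity problem, here is a minimal sketch of the per-packet delay math (illustrative only, assuming the rate is in bits per second; this is not UFTP's actual pacing code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Illustrative pacing math; not UFTP's actual implementation. */
        const int block = 8500;                    /* payload bytes per packet */
        const int64_t rates[] = { 1000000000LL,    /* 1 Gbps */
                                  2100000000LL,    /* ~signed 32-bit max, in bits/sec */
                                  10000000000LL }; /* 10 Gbps */

        for (int i = 0; i < 3; i++) {
            double usec = (double)block * 8.0 / (double)rates[i] * 1e6;
            printf("%5.1f Gbps -> %6.2f us between packets\n",
                   rates[i] / 1e9, usec);
        }
        /* Once the interval shrinks to a few microseconds, a sleep that
           can only be specified in whole microseconds rounds away a large
           fraction of each wait, so pacing drifts badly; sending several
           packets per wait or sleeping in nanoseconds avoids that. */
        return 0;
    }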
Hi,
I'm trying to run uftp over a 10GbE network as well and was wondering if this has been resolved. I ran some tests and only got to about 1.4 Gbps. I need to run as close to 10 Gbps as possible, so any advice here would be greatly appreciated.
Thanks!
This issue is still open. It will involve changing the packet wait time to nanoseconds instead of microseconds, and changing both the rate and the wait time to 64-bit values (sketched below).
I'll try to spend some time on this.
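For what it's worth, here is a sketch of what that change might look like, assuming POSIX nanosleep; this is a guess at the shape of the fix, not the actual patch:

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical pacing helper: 64-bit rate, nanosecond waits. */
    static void wait_between_packets(int64_t rate_bits_per_sec, int payload_bytes)
    {
        /* 64-bit math avoids the overflow that caps a signed int at ~2.1G. */
        int64_t wait_ns = (int64_t)payload_bytes * 8 * 1000000000LL
                          / rate_bits_per_sec;

        struct timespec ts;
        ts.tv_sec  = wait_ns / 1000000000LL;
        ts.tv_nsec = wait_ns % 1000000000LL;
        nanosleep(&ts, NULL);
    }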
I just tried this again, this time using a larger block size and enabling jumbo packets on the NIC (I forgot to do that last time; see the MTU note after this message), and I'm now getting ~6 Gbps with one server and one client. So does the issue only occur when explicitly specifying a rate like 10 Gbps? I was using -1 for this test.
The next question: is the ~6 Gbps now due to other hardware limitations rather than uftp?
Thanks!
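For anyone reproducing this on Linux: jumbo frames are typically enabled by raising the NIC's MTU, e.g. (eth0 is an assumed interface name, and any switches in the path need jumbo support too):

ip link set dev eth0 mtu 9000

An 8500-byte uftp block plus the UDP/IP and uftp headers still fits comfortably inside a 9000-byte MTU; with the standard 1500-byte MTU, blocks that large get fragmented, and losing any one fragment loses the whole block.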
In theory you should be able to get closer to line speed, assuming the disk can keep up. Increasing the cache size on the receiver can help with that, as can increasing the UDP send/receive buffer size on both sides (example below).
Once I push out the changes to support a rate larger than 2 Gbps, that should further improve performance.
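On Linux, for example, the kernel caps how large a UDP buffer an application can request, so those caps usually need to be raised first (the 100 MB value here is illustrative):

sysctl -w net.core.rmem_max=104857600
sysctl -w net.core.wmem_max=104857600

uftp's -B option then requests a socket buffer up to those limits on both the server and the client.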
Dennis,
Thanks for the update and the great support.
We are using SATA drives, so that could be our limiting factor: SATA III is rated at 6 Gbps, and after 8b/10b link encoding that works out to only about 600 MB/s of usable throughput. We will be testing with NVMe storage at some point, so we'll see whether the transfer climbs from 6 Gbps toward the 10 Gbps line speed.
Do you have an estimate on when you will push out the change?
Thanks!
Hopefully in the next week or so. I need to do more testing.
Great, thanks!
Hi Dennis,
I've downloaded the updated version of uftp and am trying to transfer files over a 10GbE connection, but I'm not getting a reliable transfer. At times it transfers at up to 6 Gbps, other times at 2 Gbps, and other times I get a Lost Connection error. For reference, on the server I'm using the following command:
uftp -B 104857600 -b 8800 -R 10000000 <file>
And on the client, I'm using the following command:
uftpd -B 104857600 -D D:\Temp\uftp_data -c 20971520
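(Decoding those flags per the uftp man pages, worth double-checking against your version: -B 104857600 requests 100 MB UDP send/receive buffers, -b 8800 uses 8800-byte blocks, which requires jumbo frames, -R 10000000 sets the rate in Kbps, i.e. 10 Gbps, and -c 20971520 gives the client a 20 MB cache.)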
Both client and server are using PCIe x4 NVMe drives, so disk speed shouldn't be the limiting factor. Any idea why I'm getting such inconsistent results rather than something closer to the 10GbE line speed?
Thanks!