Hello, I'm trying to understand how best to tune the cache and buffer parameters for performance.
The setup is as follows:
1 server and 6 clients on a gigabit network. All systems are pretty low-end, but they're running Windows 10 and have SSDs.
Using version 5. No encryption. Unlimited rate.
The data being sent is a couple of larger files of roughly 100 MB each and a dozen smaller 1 MB files.
Jumbo frames are enabled on the network and all clients, so the block size is 8800.
Using the defaults for cache and buffer, the performance wasn't that great; raising them to the max was a bit better, but I'm not sure how best to tune them. I've been trying to find some guidance, examples, or the logic behind these settings, but haven't found anything yet.
Any help is appreciated. Thank you.
Jason,
Try setting the block size to 8192 so that it's a multiple of the disk allocation size.
Playing with the cache size should also have an effect. First try 2097152 (2 MB), then go up in increments of 1048576 (1 MB) to see how that changes the throughput.
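To be clear about where those go, it would look roughly like this (assuming the block size is the sender's -b option and the cache size is the client's -c option, as in the stock uftp/uftpd tools; the file names are just placeholders, so check the man pages for your build):
uftp -b 8192 file1.dat file2.dat (sender: -b sets the block size in bytes)
uftpd -c 2097152 (client: -c sets the cache size in bytes)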
Regards,
Dennis
Thanks a lot. I'll give that a try.
- Jason Znack
How about the UDP buffer size? Leave as default?
After playing with the cache, it seems to be best around 3 MB, and increasing the UDP buffer to 1500k also seems to help. I've applied the FastSendDatagramThreshold 1500 registry fix for Windows.
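For reference, that registry value lives under the AFD parameters key; I set it from an elevated prompt with something like the line below, and as far as I can tell it needs a reboot before it takes effect:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters" /v FastSendDatagramThreshold /t REG_DWORD /d 1500 /f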
Preliminary results from today's testing:
-c 2097152 -B 153600 = 90 sec
-c 2097152 -B 768000 = 22 sec
-c 2097152 -B 1536000 = 20 sec
-c 2097152 -B 3072000 = 26 sec
-c 3145728 -B 1536000 = 25 sec
-c 10485760 -B 104857600 = 34 sec
I'd say go with at least 1 MB (1048576) for the UDP buffer size. You can also adjust that up or down as needed.
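For example, with the values you've been testing, that would be something like the following (assuming those -c and -B switches from your results are the client-side cache and UDP buffer options):
uftpd -c 3145728 -B 1048576 (3 MB cache, 1 MB UDP buffer, both in bytes)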
Thanks. That jibes with what I saw in my own testing (I've edited my post above with the results).
I'll do some further testing to fine-tune.