On 21/08/14 15:53, Martin T wrote:
> If I executed "iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m" in a
> virtual machine with a GigE vNIC, then the iperf client (version 2.0.5)
> under Debian sent traffic at 120 Mbps during all the intervals. If I
> replaced the OS on the virtual server with CentOS, the same iperf
> release with the same command was able to send traffic at 500 Mbps.
There was a discussion on this mailing list a few weeks ago
... some Linux kernel drivers (together with IP stack drivers) work
in blocking mode, and some in non-blocking mode. The difference,
visible to the application, is that in blocking mode the application
will not be able to TX more data until the driver buffer is flushed
(enough), while in non-blocking mode the kernel driver will discard
data when the buffers are full.
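From the application's point of view the two modes look roughly like this; a minimal sketch in Python using plain UDP sockets on loopback (illustrative only, not a model of any particular vNIC driver):

```python
import socket

# Minimal sketch (plain UDP sockets, not a kernel driver): the same
# send() call behaves differently depending on the socket's blocking mode.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))          # OS-assigned port on loopback
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.connect(rx.getsockname())

# Blocking mode (the default): send() waits until the kernel can queue
# the datagram into the socket send buffer.
tx.send(b"blocking datagram")
print(rx.recv(64))                  # b'blocking datagram'

# Non-blocking mode: if the send buffer is full, send() does not wait;
# it raises BlockingIOError (EWOULDBLOCK) and the application must
# retry or drop the data itself. On an idle loopback the buffer is
# empty, so this send succeeds immediately.
tx.setblocking(False)
try:
    tx.send(b"non-blocking datagram")
    print(rx.recv(64))              # b'non-blocking datagram'
except BlockingIOError:
    print("send buffer full, datagram not queued")
```

A blocking sender is throttled to whatever the driver can actually push out (the 120 Mbps case), while a non-blocking path can let the application believe it is sending faster than the link delivers.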
What you should tell us is the following: what kind of performance
do you see on the receiving side? Is it the same as on the sending
side (in both OS cases), or is it much different for either OS?
My guess is that the real throughput of the GigE vNIC is around
120 Mbps and Debian exposes that to iperf, while CentOS drops data
and thus hides the real performance from iperf.
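To check that, the receiver can be run with matching per-interval reporting; these are standard iperf 2 options (exact output columns vary slightly between 2.0.x releases):

```sh
# On the receiving host (10.10.10.1): UDP server, Mbits/sec, 60 s intervals
iperf -s -u -fm -i 60

# On the sending VM, the command from the original report:
iperf -c 10.10.10.1 -fm -t 600 -i 60 -u -b 500m
```

In UDP mode the server's report also includes lost/total datagrams, so a large gap between the CentOS client's claimed 500 Mbps and the server-side rate (or a high loss count) would confirm that data is being discarded before it reaches the wire.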
-- perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
-- echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc
BOFH excuse #178:
short leg on process table