#545 bad network performance with 10Gbit

Status: open
Owner: nobody
Labels: intel (43)
Priority: 5
Updated: 2012-10-09
Created: 2010-06-29
Creator: Anonymous
Private: No

Hello,
I have trouble with the network performance inside my virtual machines.

My KVM host machine is connected to a 10 Gbit network. All interfaces are configured with an MTU of 4132. On this host I have no problems and can use the full bandwidth:

CPU_Info:
2x Intel Xeon X5570
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid

KVM Version:
QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 Fabrice Bellard
0.12.3+noroms-0ubuntu9

KVM Host Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Host OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM Guest Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Guest OS:
Ubuntu 10.04 LTS
Codename: lucid

iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P4

[ ID] Interval Transfer Bandwidth
[ 4] 0.0-60.0 sec 18.8 GBytes 2.69 Gbits/sec
[ 5] 0.0-60.0 sec 15.0 GBytes 2.14 Gbits/sec
[ 6] 0.0-60.0 sec 19.3 GBytes 2.76 Gbits/sec
[ 3] 0.0-60.0 sec 15.1 GBytes 2.16 Gbits/sec
[SUM] 0.0-60.0 sec 68.1 GBytes 9.75 Gbits/sec

Inside a virtual machine I don't reach this result:

iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P 4

[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 5.65 GBytes 808 Mbits/sec
[ 4] 0.0-60.0 sec 5.52 GBytes 790 Mbits/sec
[ 5] 0.0-60.0 sec 5.66 GBytes 811 Mbits/sec
[ 6] 0.0-60.0 sec 5.70 GBytes 816 Mbits/sec
[SUM] 0.0-60.0 sec 22.5 GBytes 3.23 Gbits/sec

I can only use 3.23 Gbits of the 10 Gbits. I use the virtio driver for all of my VMs, but I have also tried the e1000 NIC device instead.
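
For reference, the guests get their virtio NIC roughly like this (memory size, disk image, MAC address and tap script are placeholders, not the exact production values):

kvm -m 2048 -smp 2 \
  -drive file=/var/lib/kvm/guest.img,if=virtio \
  -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
  -net tap,ifname=tap0,script=/etc/qemu-ifup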

When I start the iperf performance test on multiple VMs simultaneously, I can use the full bandwidth of the KVM host's interface, but a single VM alone can't use the full bandwidth. Is this a known limitation, or can I improve this performance?

Does anyone have an idea how I can improve my network performance? It is very important, because I want to boot all VMs over this interface via AoE (ATA over Ethernet).
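
For context, the AoE devices are attached with the aoetools package, along these lines (the shelf.slot address e1.1 is just an example):

modprobe aoe                     # load the AoE initiator
aoe-discover                     # probe the local interfaces for AoE targets
aoe-stat                         # list the devices that were found
mount /dev/etherd/e1.1 /mnt/aoe  # mount an exported device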

If I mount a hard disk via AoE inside a VM, I only get these results:
Write  | CPU | Rewrite | CPU | Read   | CPU
102440 | 10  | 51343   | 5   | 104249 | 3

On the KVM host I get these results on a mounted AoE device:
Write  | CPU | Rewrite | CPU | Read   | CPU
205597 | 19  | 139118  | 11  | 391316 | 11

If I mount the AoE device directly on the KVM host and put a virtual hard-disk file on it, I get the following results inside a VM using this disk file:
Write  | CPU | Rewrite | CPU | Read   | CPU
175140 | 12  | 136113  | 24  | 599989 | 29
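
(The column layout above matches bonnie++'s sequential block I/O summary, so the throughput columns are presumably KB/s and the CPU columns %CP; this is an assumption, not stated in the report. Such a run would look roughly like this, with the test directory and size as placeholders and -s at least twice the RAM size:)

bonnie++ -d /mnt/aoe -s 8g -u root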

Discussion

  • Jes Sorensen
    Jes Sorensen
    2010-06-29

    What kind of CPU load are you seeing when running at that kind of rates?

    There has been some work in the virtio-ring handling code that might improve the situation slightly, but I don't think that is in the Ubuntu kernel. Getting 3.2 Gbps from within a guest isn't actually bad; the packet rates over 10GigE are insane.

    You may also want to look at the virtio ring sizes; it could be that they aren't big enough for that kind of packet rate.
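
    (As a concrete starting point, with the caveat that the virtio-net driver of this vintage may not expose its rings via ethtool, and that the qdisc transmit queue is only a related knob, not the virtio ring itself:)

    ethtool -g eth0                  # show ring sizes, if the driver supports it
    ifconfig eth0 txqueuelen 2000    # raise the transmit queue length as an experiment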

    Cheers,
    Jes

     
  • Jes Sorensen
    Jes Sorensen
    2010-06-30

    If you want more speed, you are going to need to look at vhost_net. However, support for this isn't in
    your 2.6.32 kernel; you need something more recent, plus a matching qemu-kvm to go with it.
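
    (With a qemu-kvm that supports it, vhost is enabled on the tap backend roughly like this, other options elided:)

    kvm ... -netdev tap,id=net0,ifname=tap0,vhost=on \
            -device virtio-net-pci,netdev=net0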

    This isn't really a bug; it would be better if you had a look at vhost_net and took it to a mailing-list discussion.

    Cheers,
    Jes

     
  • Anonymous
    Anonymous
    2010-07-01

    Thanks a lot! I'm going to have a look at vhost_net!

     
  • Anonymous
    Anonymous
    2010-07-01

    Hey Jes,
    I have just tested vhost_net, but without success.
    I have upgraded my kernel to 2.6.35-6 with vhost_net support and have installed the qemu-kvm version from git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git
    But I still have the same results as before.
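
    (One quick sanity check that vhost actually engaged, assuming the tap backend was started with vhost=on, is to look for the module and its per-guest worker thread on the host:)

    lsmod | grep vhost_net    # module loaded?
    ps -ef | grep '\[vhost'   # one vhost kernel thread per active guest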

    Can you please tell me which mailing list I can use to discuss this problem? I have already posted my problem in two forums, but still got no answers.

    best regards
    Rene

     
  • Jes Sorensen
    Jes Sorensen
    2010-08-19

    Rene,

    Did you get this sorted out? The best place to discuss this is the kvm mailing list, I believe.

    Sorry for not getting back to you earlier.

    Jes