From: Tomasz C. <ma...@wp...> - 2008-05-14 13:52:41
Avi Kivity wrote:
> This is the second release of network drivers for Windows guests running
> on a kvm host. The drivers are intended for Windows 2000, Windows XP,
> and Windows 2003. Both x86 and x64 variants are provided. kvm-61
> or later is needed in the host. At the moment only binaries are available.

Hi,

This is great news!

Do you have any performance numbers for networking to see how it compares
to the real hardware?

- Linux host (or: real Windows running on that host)
- PV Windows (network driver)
- non-PV Windows

--
Tomasz Chmielewski
http://wpkg.org
From: Dor L. <dor...@qu...> - 2008-05-14 15:13:34
On Wed, 2008-05-14 at 15:52 +0200, Tomasz Chmielewski wrote:
> Avi Kivity wrote:
> > This is the second release of network drivers for Windows guests running
> > on a kvm host. The drivers are intended for Windows 2000, Windows XP,
> > and Windows 2003. Both x86 and x64 variants are provided. kvm-61
> > or later is needed in the host. At the moment only binaries are available.
>
> Hi,
>
> This is great news!
>
> Do you have any performance numbers for networking to see how it compares
> to the real hardware?
>
> - Linux host (or: real Windows running on that host)

For the host you can measure yourself, but a Linux guest (to host) currently
does about 1 Gbit/s. Using TSO (work in progress) it can do 2.5 Gbit/s, and
there is also work in progress to make the kernel aware of virtio through
the tap interface, which will further boost performance.

> - PV Windows (network driver)

About 700 Mbit/s, give or take; there is currently an extra copy that we
need to omit. Thanks to Anthony, we just have to change the driver.

> - non-PV Windows

What do you mean? Other fully emulated NICs like e1000?
They do not perform as well as PV, but depending on the guest they can do
up to 600 Mbit/s, give or take.
From: Tomasz C. <ma...@wp...> - 2008-05-14 15:49:32
Dor Laor schrieb:

(...)

>> - PV Windows (network driver)
>
> About 700 Mbit/s, give or take; there is currently an extra copy that we
> need to omit. Thanks to Anthony, we just have to change the driver.
>
>> - non-PV Windows
>
> What do you mean? Other fully emulated NICs like e1000?
> They do not perform as well as PV, but depending on the guest they can do
> up to 600 Mbit/s, give or take.

Just generally, how Windows PV drivers help to improve network performance.

So, a PV network driver can do about 700 Mb/s, and an emulated NIC can do
about 600 Mb/s, Windows guest to host?

That would be about a 20% improvement?

--
Tomasz Chmielewski
http://wpkg.org
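For what it's worth, the arithmetic in that question works out slightly lower: the two throughput figures quoted above imply roughly a 17% relative gain rather than 20%. A trivial check, using only the numbers from this thread:

```python
pv_mbps = 700        # PV Windows network driver, guest to host
emulated_mbps = 600  # fully emulated NIC (e.g. e1000), guest to host

# Relative improvement of PV over emulated, as a percentage.
improvement_pct = 100.0 * (pv_mbps - emulated_mbps) / emulated_mbps
print(round(improvement_pct, 1))  # -> 16.7
```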
From: Dor L. <dor...@qu...> - 2008-05-14 16:23:04
On Wed, 2008-05-14 at 17:49 +0200, Tomasz Chmielewski wrote:
> Dor Laor schrieb:
>
> (...)
>
> >> - PV Windows (network driver)
> >
> > About 700 Mbit/s, give or take; there is currently an extra copy that
> > we need to omit. Thanks to Anthony, we just have to change the driver.
> >
> >> - non-PV Windows
> >
> > What do you mean? Other fully emulated NICs like e1000?
> > They do not perform as well as PV, but depending on the guest they can
> > do up to 600 Mbit/s, give or take.
>
> Just generally, how Windows PV drivers help to improve network performance.
>
> So, a PV network driver can do about 700 Mb/s, and an emulated NIC can do
> about 600 Mb/s, Windows guest to host?
>
> That would be about a 20% improvement?

It's work in progress: doing zero copy in the guest, adding TSO, and using
a virtio-aware tap will drastically boost performance. There is no reason
the performance won't match a Linux guest.
Also, I don't remember the exact numbers, but the gain in the tx path is
greater.
From: Anthony L. <an...@co...> - 2008-05-14 17:50:18
Dor Laor wrote:
> On Wed, 2008-05-14 at 17:49 +0200, Tomasz Chmielewski wrote:
>> Dor Laor schrieb:
>>
>> (...)
>>
>>>> - PV Windows (network driver)
>>>
>>> About 700 Mbit/s, give or take; there is currently an extra copy that
>>> we need to omit. Thanks to Anthony, we just have to change the driver.
>>>
>>>> - non-PV Windows
>>>
>>> What do you mean? Other fully emulated NICs like e1000?
>>> They do not perform as well as PV, but depending on the guest they can
>>> do up to 600 Mbit/s, give or take.
>>
>> Just generally, how Windows PV drivers help to improve network performance.
>>
>> So, a PV network driver can do about 700 Mb/s, and an emulated NIC can do
>> about 600 Mb/s, Windows guest to host?
>>
>> That would be about a 20% improvement?

FWIW, virtio-net is much better with my patches applied. The difference
between e1000 and virtio-net is that e1000 consumes almost twice as much
CPU as virtio-net, so in my testing the performance improvement with
virtio-net is about 2x. We were losing about 20-30% throughput because of
the delays in handling incoming packets.

Regards,

Anthony Liguori

> It's work in progress: doing zero copy in the guest, adding TSO, and
> using a virtio-aware tap will drastically boost performance. There is no
> reason the performance won't match a Linux guest.
> Also, I don't remember the exact numbers, but the gain in the tx path is
> greater.
From: Avi K. <av...@qu...> - 2008-05-15 08:02:36
Anthony Liguori wrote:
> FWIW, virtio-net is much better with my patches applied.

The can_receive patches?

Again, I'm not opposed to them in principle; I just think that if they
help, it points at a virtio deficiency. Virtio should never leave the rx
queue empty. Consider the case where the virtio queue isn't tied to a
socket buffer, but directly to hardware.

--
Do not meddle in the internals of kernels, for they are subtle and quick to
panic.
From: Anthony L. <an...@co...> - 2008-05-15 13:57:07
Avi Kivity wrote:
> Anthony Liguori wrote:
>> FWIW, virtio-net is much better with my patches applied.
>
> The can_receive patches?
>
> Again, I'm not opposed to them in principle; I just think that if they
> help, it points at a virtio deficiency. Virtio should never leave the rx
> queue empty. Consider the case where the virtio queue isn't tied to a
> socket buffer, but directly to hardware.

For RX performance:

right now:
[ 3]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec

revert tap hack:
[ 3]  0.0-10.0 sec   564 MBytes   473 Mbits/sec

all patches applied:
[ 3]  0.0-10.0 sec  1.17 GBytes  1.01 Gbits/sec

drop lots of packets:
[ 3]  0.0-10.0 sec  1.05 GBytes   905 Mbits/sec

The last patch is not in my series, but it basically makes the ring size
512 and drops packets when we run out of descriptors. That was to validate
that we're not hiding a virtio deficiency. The reason I want to buffer
packets is that it avoids having to deal with tuning. For vringfd/vmdq,
we'll have to make sure to get the tuning right, though.

Regards,

Anthony Liguori
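A note on reading those iperf lines: classic iperf reports the transfer column in binary MBytes (2^20 bytes) but the bandwidth column in decimal Mbits/sec, which is why 1016 MBytes over 10 seconds shows as 852 Mbits/sec rather than ~813. A small sketch of the conversion, using the figures from the table above:

```python
def iperf_mbits_per_sec(mbytes, seconds):
    """Convert iperf's binary-MByte transfer figure to decimal Mbits/sec."""
    return mbytes * 2**20 * 8 / seconds / 1e6

# "right now" line: 1016 MBytes over 10.0 sec
print(round(iperf_mbits_per_sec(1016, 10.0)))  # -> 852

# "revert tap hack" line: 564 MBytes over 10.0 sec
print(round(iperf_mbits_per_sec(564, 10.0)))   # -> 473
```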
From: Avi K. <av...@qu...> - 2008-05-15 15:30:31
Anthony Liguori wrote:
> Avi Kivity wrote:
>> Anthony Liguori wrote:
>>> FWIW, virtio-net is much better with my patches applied.
>>
>> The can_receive patches?
>>
>> Again, I'm not opposed to them in principle; I just think that if they
>> help, it points at a virtio deficiency. Virtio should never leave the
>> rx queue empty. Consider the case where the virtio queue isn't tied to
>> a socket buffer, but directly to hardware.
>
> For RX performance:
>
> right now:
> [ 3]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec
>
> revert tap hack:
> [ 3]  0.0-10.0 sec   564 MBytes   473 Mbits/sec
>
> all patches applied:
> [ 3]  0.0-10.0 sec  1.17 GBytes  1.01 Gbits/sec
>
> drop lots of packets:
> [ 3]  0.0-10.0 sec  1.05 GBytes   905 Mbits/sec
>
> The last patch is not in my series, but it basically makes the ring size
> 512 and drops packets when we run out of descriptors. That was to
> validate that we're not hiding a virtio deficiency. The reason I want to
> buffer packets is that it avoids having to deal with tuning. For
> vringfd/vmdq, we'll have to make sure to get the tuning right, though.

Okay; I'll apply the patches. Hopefully we won't diverge too much from
upstream qemu.

--
error compiling committee.c: too many arguments to function
From: Anthony L. <an...@co...> - 2008-05-15 15:43:51
Avi Kivity wrote:
> Anthony Liguori wrote:
>> Avi Kivity wrote:
>>
>> (...)
>>
>> For RX performance:
>>
>> right now:
>> [ 3]  0.0-10.0 sec  1016 MBytes   852 Mbits/sec
>>
>> revert tap hack:
>> [ 3]  0.0-10.0 sec   564 MBytes   473 Mbits/sec
>>
>> all patches applied:
>> [ 3]  0.0-10.0 sec  1.17 GBytes  1.01 Gbits/sec
>>
>> drop lots of packets:
>> [ 3]  0.0-10.0 sec  1.05 GBytes   905 Mbits/sec
>>
>> The last patch is not in my series, but it basically makes the ring
>> size 512 and drops packets when we run out of descriptors. That was to
>> validate that we're not hiding a virtio deficiency. The reason I want
>> to buffer packets is that it avoids having to deal with tuning. For
>> vringfd/vmdq, we'll have to make sure to get the tuning right, though.
>
> Okay; I'll apply the patches. Hopefully we won't diverge too much from
> upstream qemu.

I am going to push these upstream. I need to finish the page_desc cache
first because right now the version of virtio that could go into upstream
QEMU has unacceptable performance for KVM.

Regards,

Anthony Liguori
From: Muli Ben-Y. <mu...@il...> - 2008-05-14 18:31:52
On Wed, May 14, 2008 at 06:09:42PM +0300, Dor Laor wrote:
> > Do you have any performance numbers for networking to see how it
> > compares to the real hardware?
> >
> > - Linux host (or: real Windows running on that host)
>
> For the host you can measure yourself, but a Linux guest (to host)
> currently does about 1 Gbit/s. Using TSO (work in progress) it can do
> 2.5 Gbit/s, and there is also work in progress to make the kernel aware
> of virtio through the tap interface, which will further boost
> performance.

... with what kind of CPU utilization?

> > - PV Windows (network driver)
>
> About 700 Mbit/s, give or take; there is currently an extra copy that we
> need to omit. Thanks to Anthony, we just have to change the driver.

Same question (although it's less interesting if we can't even saturate
the pipe).

> > - non-PV Windows
>
> What do you mean? Other fully emulated NICs like e1000?
> They do not perform as well as PV, but depending on the guest they can
> do up to 600 Mbit/s, give or take.

Same question (although, again, it's less interesting if we can't even
saturate the pipe).

Cheers,
Muli
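Muli's question matters because throughput alone hides the host cycles each path burns, as the later e1000-vs-virtio-net CPU comparison in this thread shows. One way to put a number on it, sketched here purely as an illustration (not how any figure in this thread was gathered), is to sample the aggregate `cpu` line of `/proc/stat` before and after a benchmark run and compute the busy fraction from the jiffy deltas. The sample counter values below are hypothetical:

```python
def cpu_utilization(before, after):
    """Percent CPU busy between two /proc/stat aggregate 'cpu' samples.

    Each sample is the list of jiffy counters from the 'cpu' line:
    user, nice, system, idle, iowait, irq, softirq, ...
    Idle time is the idle + iowait fields; everything else counts as busy.
    """
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    idle = deltas[3] + deltas[4]
    return 100.0 * (total - idle) / total

# Hypothetical jiffy counters sampled around a 10-second benchmark run:
before = [1000, 0, 500, 8000, 100, 10, 20]
after  = [1600, 0, 900, 8500, 120, 15, 45]
print(round(cpu_utilization(before, after), 1))  # -> 66.5
```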
From: Tomasz C. <ma...@wp...> - 2008-05-14 21:10:10
Anthony Liguori schrieb:

(...)

>>> So, a PV network driver can do about 700 Mb/s, and an emulated NIC can
>>> do about 600 Mb/s, Windows guest to host?
>>>
>>> That would be about a 20% improvement?
>
> FWIW, virtio-net is much better with my patches applied. The difference
> between e1000 and virtio-net is that e1000 consumes almost twice as much
> CPU as virtio-net, so in my testing the performance improvement with
> virtio-net is about 2x. We were losing about 20-30% throughput because
> of the delays in handling incoming packets.

Do you by chance have any recent numbers on disk performance (i.e.,
Windows guest vs. Linux host)?

--
Tomasz Chmielewski
http://wpkg.org
From: Dor L. <dor...@qu...> - 2008-05-15 07:01:09
On Wed, 2008-05-14 at 23:09 +0200, Tomasz Chmielewski wrote:
> Anthony Liguori schrieb:
>
> (...)
>
> >>> So, a PV network driver can do about 700 Mb/s, and an emulated NIC
> >>> can do about 600 Mb/s, Windows guest to host?
> >>>
> >>> That would be about a 20% improvement?
> >
> > FWIW, virtio-net is much better with my patches applied. The
> > difference between e1000 and virtio-net is that e1000 consumes almost
> > twice as much CPU as virtio-net, so in my testing the performance
> > improvement with virtio-net is about 2x. We were losing about 20-30%
> > throughput because of the delays in handling incoming packets.
>
> Do you by chance have any recent numbers on disk performance (i.e.,
> Windows guest vs. Linux host)?

At the moment there is no PV block driver for Windows guests (there is one
for Linux).
You can use SCSI for Windows; it should perform well.
From: Tomasz C. <ma...@wp...> - 2008-05-15 08:02:37
Dor Laor schrieb:

(...)

>>> FWIW, virtio-net is much better with my patches applied. The
>>> difference between e1000 and virtio-net is that e1000 consumes almost
>>> twice as much CPU as virtio-net, so in my testing the performance
>>> improvement with virtio-net is about 2x. We were losing about 20-30%
>>> throughput because of the delays in handling incoming packets.
>>
>> Do you by chance have any recent numbers on disk performance (i.e.,
>> Windows guest vs. Linux host)?
>
> At the moment there is no PV block driver for Windows guests (there is
> one for Linux).
> You can use SCSI for Windows; it should perform well.

How well, when compared to "bare metal"? Or when compared to a Linux guest
with a PV block driver?

Do you have any numbers?

--
Tomasz Chmielewski
http://wpkg.org