From: Gerd v. E. <li...@eg...> - 2008-04-13 21:05:07
Hi,

I just tried the virtio block device with the intent to boost disk
throughput for my vm.

I ran bonnie++ -r 512 -s 2048 -u nobody -d /tmp:

Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
virtio_blk      2G 14274  42 20206  14 22363  37 31116  92 66731  81 140.8  13
kvm-ide         2G 26065  83 26435  28 24146  33 26587  84 57991  18  91.5   2

The host is a Xeon 3040 with 1G RAM (I know that is a bit low, it's just a
test machine...), the vm gets 512MB of that. The data is stored on two SATA
disks, mirrored (RAID1) with md, with lvm running on top of that.

Host and client are running 2.6.25-0.200.rc8.git3.i686. This is a kernel
from Fedora Rawhide with kvm manually enabled by me. KVM version is 64.

Especially writing seems to be slower using virtio, but reading isn't that
much faster.

I thought virtio would improve io speed significantly because of fewer steps
needed to communicate between host and client. What might be the reason that
I can't see a speed boost?

- Wrong setup (the virtio client boots from /dev/vda1, so I think virtio is
  working)
- virtio_blk is not yet mature/tuned enough to give a real speed boost
- I'm missing a patch that is not included in 2.6.25-rc8 but can be found in
  kvm-git
- The output of bonnie++ is bogus because timing is not that accurate within
  a kvm client

Any ideas welcome.

Kind regards,

Gerd

--
Address (better: trap) for people I really don't want to get mail from:
ja...@ca...
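The exact qemu-kvm command lines behind the two bonnie++ rows do not appear in
the thread. A rough sketch of such a comparison with kvm-64 might look like
the following, assuming an LVM volume /dev/vg0/guest as the backing disk (the
volume name is made up here):

  # IDE emulation (the kvm-ide row):
  qemu-kvm -m 512 -drive file=/dev/vg0/guest,if=ide

  # Paravirtual virtio disk (the virtio_blk row); booting from a virtio disk
  # with kvm of this era may additionally need boot=on (extboot):
  qemu-kvm -m 512 -drive file=/dev/vg0/guest,if=virtio,boot=on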
From: Charles D. <Cha...@me...> - 2008-04-13 21:25:00
My understanding is that virtio is very latency-sensitive. Do you know whether
your system is configured for high-resolution timer support, and (more to the
point) which clock source KVM is using on your host?
From: Gerd v. E. <li...@eg...> - 2008-04-13 21:55:56
Hi Charles,

thanks for the quick reply.

> My understanding is that virtio is very latency-sensitive. Do you know
> whether your system is configured for high-resolution timer support, and
> (more to the point) which clock source KVM is using on your host?

On the host:

> grep "Clock Event Device" /proc/timer_list
Clock Event Device: hpet
Clock Event Device: lapic
Clock Event Device: lapic

Within the vm:

> grep "Clock Event Device" /proc/timer_list
Clock Event Device: pit
Clock Event Device: lapic

Host and vm use the same kernel, CONFIG_HIGH_RES_TIMERS=y.

How do I find out which clock source KVM is using?

Kind regards,

Gerd

--
Address (better: trap) for people I really don't want to get mail from:
ja...@ca...
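The clock source question is not answered directly later in the thread. One
way to check it on kernels of this vintage, assuming the sysfs clocksource
interface is available, is:

  # On the host (and again inside the vm, for comparison):
  cat /sys/devices/system/clocksource/clocksource0/current_clocksource
  cat /sys/devices/system/clocksource/clocksource0/available_clocksource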
From: Marcelo T. <mto...@re...> - 2008-04-14 00:25:53
On Sun, Apr 13, 2008 at 11:04:44PM +0200, Gerd von Egidy wrote:
> Hi,
>
> I just tried the virtio block device with the intent to boost disk
> throughput for my vm.
>
> I ran bonnie++ -r 512 -s 2048 -u nobody -d /tmp:
>
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> virtio_blk      2G 14274  42 20206  14 22363  37 31116  92 66731  81 140.8  13
> kvm-ide         2G 26065  83 26435  28 24146  33 26587  84 57991  18  91.5   2
>
> The host is a Xeon 3040 with 1G RAM (I know that is a bit low, it's just a
> test machine...), the vm gets 512MB of that. The data is stored on two SATA
> disks, mirrored (RAID1) with md, with lvm running on top of that.
>
> Host and client are running 2.6.25-0.200.rc8.git3.i686. This is a kernel
> from Fedora Rawhide with kvm manually enabled by me. KVM version is 64.
>
> Especially writing seems to be slower using virtio, but reading isn't that
> much faster.
>
> I thought virtio would improve io speed significantly because of fewer steps
> needed to communicate between host and client. What might be the reason that
> I can't see a speed boost?
>
> - Wrong setup (the virtio client boots from /dev/vda1, so I think virtio is
>   working)
> - virtio_blk is not yet mature/tuned enough to give a real speed boost
> - I'm missing a patch that is not included in 2.6.25-rc8 but can be found in
>   kvm-git
> - The output of bonnie++ is bogus because timing is not that accurate within
>   a kvm client
>
> Any ideas welcome.

virtio-blk is doing synchronous IO, which blocks the guest CPU.

This is especially bad for write-intensive loads, where the guest will hang
in the host's write throttling logic.

In the meantime please try the following patch:

http://www.mail-archive.com/kvm...@li.../msg14732.html

Some larger changes will take place to allow similar behaviour with an API for
asynchronous vectored IO.
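The thread does not show how the linked patch was applied. A sketch of the
usual way to rebuild the kvm userspace with an extra patch on top, with the
patch file name and -p level assumed:

  tar xzf kvm-64.tar.gz
  cd kvm-64
  patch -p1 < ~/virtio-blk-aio.patch   # patch from the URL above, name assumed
  ./configure --prefix=/usr/local/kvm
  make && make install                 # run the install step as root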
From: Gerd v. E. <li...@eg...> - 2008-04-16 23:06:13
Hi Marcelo,

> virtio-blk is doing synchronous IO, which blocks the guest CPU.
>
> This is especially bad for write-intensive loads, where the guest will hang
> in the host's write throttling logic.
>
> In the meantime please try the following patch:
>
> http://www.mail-archive.com/kvm...@li.../msg14732.html

Thank you for this hint.

I tried it this evening with kvm-66 - which should include your patch, right?
The result, at least with bonnie++, is nearly the same.

I also looked at the guest-kernel patch you sent along with it, but judging
from the discussion it did not work.

Kind regards,

Gerd

--
Address (better: trap) for people I really don't want to get mail from:
ja...@ca...
From: Anthony L. <an...@co...> - 2008-04-14 01:41:16
Gerd von Egidy wrote:
> Hi,
>
> I just tried the virtio block device with the intent to boost disk
> throughput for my vm.

We really haven't optimized virtio block yet under KVM. Most of the effort so
far has been focused on virtio_net. We'll get there though in the near future.

Regards,

Anthony Liguori

> I ran bonnie++ -r 512 -s 2048 -u nobody -d /tmp:
>
> Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> virtio_blk      2G 14274  42 20206  14 22363  37 31116  92 66731  81 140.8  13
> kvm-ide         2G 26065  83 26435  28 24146  33 26587  84 57991  18  91.5   2
>
> The host is a Xeon 3040 with 1G RAM (I know that is a bit low, it's just a
> test machine...), the vm gets 512MB of that. The data is stored on two SATA
> disks, mirrored (RAID1) with md, with lvm running on top of that.
>
> Host and client are running 2.6.25-0.200.rc8.git3.i686. This is a kernel
> from Fedora Rawhide with kvm manually enabled by me. KVM version is 64.
>
> Especially writing seems to be slower using virtio, but reading isn't that
> much faster.
>
> I thought virtio would improve io speed significantly because of fewer steps
> needed to communicate between host and client. What might be the reason that
> I can't see a speed boost?
>
> - Wrong setup (the virtio client boots from /dev/vda1, so I think virtio is
>   working)
> - virtio_blk is not yet mature/tuned enough to give a real speed boost
> - I'm missing a patch that is not included in 2.6.25-rc8 but can be found in
>   kvm-git
> - The output of bonnie++ is bogus because timing is not that accurate within
>   a kvm client
>
> Any ideas welcome.
>
> Kind regards,
>
> Gerd
From: Marcelo T. <mto...@re...> - 2008-04-16 23:37:42
On Thu, Apr 17, 2008 at 01:05:50AM +0200, Gerd von Egidy wrote:
> Hi Marcelo,
>
> > virtio-blk is doing synchronous IO, which blocks the guest CPU.
> >
> > This is especially bad for write-intensive loads, where the guest will
> > hang in the host's write throttling logic.
> >
> > In the meantime please try the following patch:
> >
> > http://www.mail-archive.com/kvm...@li.../msg14732.html
>
> Thank you for this hint.
>
> I tried it this evening with kvm-66 - which should include your patch,
> right?

Hi Gerd,

No, it's not included. The issue is being worked on.
From: Gerd v. E. <li...@eg...> - 2008-04-18 21:28:08
Hi Marcelo,

> > > http://www.mail-archive.com/kvm...@li.../msg14732.html
> >
> > I tried it this evening with kvm-66 - which should include your patch,
> > right?
>
> No, it's not included. The issue is being worked on.

my bad, sorry.

Now I know I really have that patch: qemu-kvm hangs :(

I was trying kvm-66 with only the patch listed above applied, on an otherwise
perfectly working vm with a virtio_blk root partition.

Last line of the booting kernel in my vnc window:

Serial: 8250/16550 driver $Revision 1.90... (you know the rest)

An strace of qemu-kvm gave the following in rapid succession:

clock_gettime(CLOCK_MONOTONIC, {2565, 306799672}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 307065342}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 307354930}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 307618803}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 307886312}) = 0
timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 33000000}}, NULL) = 0
rt_sigtimedwait([USR1 USR2 ALRM IO], {si_signo=SIGALRM, si_code=SI_TIMER,
si_pid=0, si_uid=0, si_value={int=0, ptr=0}}, 0xbfe5af88, 8) = 14
rt_sigaction(SIGALRM, NULL, {0x804d8f8, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0
select(12, [6 11], [], [], {0, 0}) = 0 (Timeout)
select(0, [], NULL, NULL, {0, 0}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {2565, 342895116}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 343164113}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 343454002}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 343716804}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 343980012}) = 0
timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 33000000}}, NULL) = 0
rt_sigtimedwait([USR1 USR2 ALRM IO], {si_signo=SIGALRM, si_code=SI_TIMER,
si_pid=0, si_uid=0, si_value={int=0, ptr=0}}, 0xbfe5af88, 8) = 14
rt_sigaction(SIGALRM, NULL, {0x804d8f8, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0
select(12, [6 11], [], [], {0, 0}) = 0 (Timeout)
select(0, [], NULL, NULL, {0, 0}) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {2565, 379035364}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 379307884}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 379589434}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 379919100}) = 0
clock_gettime(CLOCK_MONOTONIC, {2565, 380183834}) = 0
timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 33000000}}, NULL) = 0
rt_sigtimedwait([USR1 USR2 ALRM IO], {si_signo=SIGALRM, si_code=SI_TIMER,
si_pid=0, si_uid=0, si_value={int=0, ptr=0}}, 0xbfe5af88, 8) = 14
rt_sigaction(SIGALRM, NULL, {0x804d8f8, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0
select(12, [6 11], [], [], {0, 0}) = 0 (Timeout)
select(0, [], NULL, NULL, {0, 0}) = 0 (Timeout)
...

Hope that helps.

Kind regards,

Gerd

--
Address (better: trap) for people I really don't want to get mail from:
ja...@ca...
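The thread does not say how the trace was taken; attaching strace to the
already-running (hung) qemu-kvm process is one way to capture it, assuming a
single qemu-kvm instance is running:

  strace -f -tt -p "$(pidof qemu-kvm)" -o /tmp/qemu-kvm.strace
  # stop with Ctrl-C after a few seconds and inspect /tmp/qemu-kvm.strace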
From: Marcelo T. <mto...@re...> - 2008-04-18 21:45:54
Hi Gerd,

On Fri, Apr 18, 2008 at 11:27:58PM +0200, Gerd von Egidy wrote:
> Hi Marcelo,
>
> > > > http://www.mail-archive.com/kvm...@li.../msg14732.html
> > >
> > > I tried it this evening with kvm-66 - which should include your patch,
> > > right?
> >
> > No, it's not included. The issue is being worked on.
>
> my bad, sorry.
>
> Now I know I really have that patch: qemu-kvm hangs :(
>
> I was trying kvm-66 with only the patch listed above applied, on an
> otherwise perfectly working vm with a virtio_blk root partition.
>
> Last line of the booting kernel in my vnc window:
>
> Serial: 8250/16550 driver $Revision 1.90... (you know the rest)

When the hang happens, can you run kvm_stat --once (the script can be found in
the kvm-66 directory) and paste the result?

Can you confirm that reverting the patch fixes it?

> An strace of qemu-kvm gave the following in rapid succession:
>
> clock_gettime(CLOCK_MONOTONIC, {2565, 306799672}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307065342}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307354930}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307618803}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307886312}) = 0
> timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
> timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 33000000}}, NULL) = 0
> rt_sigtimedwait([USR1 USR2 ALRM IO], {si_signo=SIGALRM, si_code=SI_TIMER,
> si_pid=0, si_uid=0, si_value={int=0, ptr=0}}, 0xbfe5af88, 8) = 14
> rt_sigaction(SIGALRM, NULL, {0x804d8f8, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0

This won't help much.
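A sketch of how the patch could be reverted to answer the second question,
again with the patch file name assumed as in the earlier build sketch:

  cd kvm-66
  patch -R -p1 < ~/virtio-blk-aio.patch   # -R reverses the previously applied patch
  make && make install                    # reinstall, run as root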
From: Gerd v. E. <li...@eg...> - 2008-04-18 22:32:49
Hi Marcelo,

thanks for the quick reply.

> When the hang happens, can you run kvm_stat --once (the script can be found
> in the kvm-66 directory) and paste the result?

efer_reload                 0       0
exits                 4943909    2036
fpu_reload             222178       0
halt_exits             896464     999
halt_wakeup             17690       0
host_state_reload     2279013    1027
insn_emulation        2640886    1001
insn_emulation_fail         0       0
invlpg                      0       0
io_exits               396350      28
irq_exits              200020       1
irq_window                  0       0
mmio_exits             900583       0
mmu_cache_miss          54441       0
mmu_flooded             63313       0
mmu_pde_zapped          46114       0
mmu_pte_updated        554813       0
mmu_pte_write          639377       0
mmu_recycled                0       0
mmu_shadow_zapped       63313       0
pf_fixed              1205697       0
pf_guest                92134       0
remote_tlb_flush            1       0
request_irq                 0       0
signal_exits                1       0
tlb_flush              609240       5

> Can you confirm that reverting the patch fixes it?

It seems to be some kind of race condition: it doesn't always hang at the same
point, but sooner or later it brings the io to a halt. I just got it to boot,
but when running bonnie++ it halted after a while; the kvm_stat output above
is from that run. I was still able to type at the console, but I couldn't
interrupt the system call bonnie++ was stuck in.

Booting a qemu-kvm without the patch completely fixes these problems.

Kind regards,

Gerd

--
Address (better: trap) for people I really don't want to get mail from:
ja...@ca...