From: 钟文辉 <she...@12...> - 2008-04-19 21:53:49
Dear managers: Hello! Sincere wishes that 2008 brings you countless joys, countless harvests, countless banknotes, countless happiness and a wonderful life! May your whole family be happy and healthy!

Our company can provide on a long-term basis: export customs declaration forms, verification forms, and a full range of related paperwork. We act as agents for export customs clearance, commodity inspection, domestic and overseas transport, and so on; we can also handle EU export licenses and EU certificates of origin. We also have a booth at the Guangzhou international trade fair available for transfer at a specially discounted price. Interested parties, please contact us by e-mail or telephone. Thank you for your cooperation!

Tel: 0755-81153047. Fax: 0755-81172940. Mobile: 15817477278. Contact: Zhong Wenhui.

Respectfully yours!
From: Glauber C. <gl...@gm...> - 2008-04-19 21:11:16
On Fri, Apr 18, 2008 at 1:27 PM, Avi Kivity <av...@qu...> wrote:
> Glauber de Oliveira Costa wrote:
>> Hi,
>>
>> I've got some qemu crashes while trying to passthrough an ide device
>> to a kvm guest. After some investigation, it turned out that
>> register_ioport_{read/write} will abort on errors instead of returning
>> a meaningful error.
>>
>> However, even if we do return an error, the asynchronous nature of pci
>> config space mapping updates makes it a little bit hard to treat.
>>
>> This series of patches basically treats errors in the mapping functions
>> in the pci layer. If anything goes wrong, we unregister the pci device,
>> unmapping any mappings that happened to be successful already.
>>
>> After these patches are applied, a lot of warnings appear. And, you
>> know, every time there is a warning, god kills a kitten. But I'm not
>> planning on touching the other pieces of qemu code for this until we
>> settle (or not) on this solution.
>>
>> Comments are very welcome, especially from qemu folks (since it is a
>> bit invasive).
>
> Have you considered, instead of rolling back the changes you already
> made before the failure, having a function which checks whether an
> ioport registration will be successful? This may simplify the code.

Yes, I did. The basic problem is that I could not find this information
readily until we were deep in the stack, right before calling the
update-mapping functions. I ended up preferring this option. I can,
however, take a fresh look at that.

-- 
Glauber Costa.
"Free as in Freedom"
http://glommer.net

"The less confident you are, the more serious you have to act."
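The rollback pattern Glauber describes — unwinding the registrations that already succeeded once a later one fails — can be sketched as follows. All names here are illustrative, not qemu's actual API:

```c
#include <assert.h>

/* Toy sketch of register-with-rollback: try to register NREGIONS
 * resources in order; on the first failure, unregister everything
 * that already succeeded and report the error to the caller. */

#define NREGIONS 4

static int registered[NREGIONS];

/* Simulated registration: fails when i == fail_at. */
static int register_region(int i, int fail_at)
{
    if (i == fail_at)
        return -1;
    registered[i] = 1;
    return 0;
}

static void unregister_region(int i)
{
    registered[i] = 0;
}

int register_all(int fail_at)
{
    int i;

    for (i = 0; i < NREGIONS; i++) {
        if (register_region(i, fail_at) < 0) {
            /* Roll back the registrations that already succeeded. */
            while (--i >= 0)
                unregister_region(i);
            return -1;
        }
    }
    return 0;
}
```

The key property, and the one the patch series aims for, is that after a failed registration the system is left exactly as it was before the attempt.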
From: David S. A. <da...@ci...> - 2008-04-19 20:25:34
DOH. I had the 2 new ones backwards in the formats file.

thanks for pointing that out,

david

Liu, Eric E wrote:
> I mean the value of PTE_WRITE you write in the formats file (0x00020016)
> should be the same as the KVM_TRC_PTE_WRITE you define in kvm.h, but
> now it is 0x00020015. If not, what you get in the text file will be
> disordered.
From: Anthony L. <an...@co...> - 2008-04-19 20:02:38
Blue Swirl wrote:
> On 4/17/08, Anthony Liguori <ali...@us...> wrote:
>> Yes, the vector version of packet receive is tough. I'll take a look
>> at your patch. Basically, you need to associate a set of RX vectors
>> with each VLANClientState and then when it comes time to deliver a
>> packet to the VLAN, before calling fd_read, see if there is an RX
>> vector available for the client.
>>
>> In the case of tap, I want to optimize further and do the initial
>> readv() to one of the clients' RX buffers and then copy that RX buffer
>> to the rest of the clients if necessary.
>
> The vector versions should also help SLIRP to add IP and Ethernet
> headers to the incoming packets.

Yeah, I'm hoping that with my posted linux-aio interface, I can add
vector support since linux-aio has a proper asynchronous vector
function.

Are we happy with the DMA API? If so, we should commit it now so we can
start adding proper vector interfaces for net/block.

Regards,

Anthony Liguori

> I made an initial version of the vectored AIO SCSI with ESP. It does
> not work, but I can see that just using the vectors won't give too much
> extra performance, because at least initially the vector length is 1.
> Collecting the statuses may be tricky.
From: Blue S. <bla...@gm...> - 2008-04-19 19:40:24
On 4/17/08, Anthony Liguori <ali...@us...> wrote:
> Yes, the vector version of packet receive is tough. I'll take a look at
> your patch. Basically, you need to associate a set of RX vectors with
> each VLANClientState and then when it comes time to deliver a packet to
> the VLAN, before calling fd_read, see if there is an RX vector
> available for the client.
>
> In the case of tap, I want to optimize further and do the initial
> readv() to one of the clients' RX buffers and then copy that RX buffer
> to the rest of the clients if necessary.

The vector versions should also help SLIRP to add IP and Ethernet
headers to the incoming packets.

I made an initial version of the vectored AIO SCSI with ESP. It does
not work, but I can see that just using the vectors won't give too much
extra performance, because at least initially the vector length is 1.
Collecting the statuses may be tricky.
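The readv()-based receive that Anthony describes for tap relies on standard POSIX scatter/gather I/O: one syscall fills several buffers in sequence, so a header and a payload can land in separate buffers without a bounce copy. A minimal sketch of the primitive (the fd and buffer layout are illustrative only):

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Read one chunk of data into a two-element scatter list: the first
 * hdrlen bytes go into hdr (e.g. room for an Ethernet header that
 * SLIRP wants to prepend), the rest into payload. */
ssize_t rx_vector_read(int fd, void *hdr, size_t hdrlen,
                       void *payload, size_t paylen)
{
    struct iovec iov[2];

    iov[0].iov_base = hdr;
    iov[0].iov_len  = hdrlen;
    iov[1].iov_base = payload;
    iov[1].iov_len  = paylen;

    return readv(fd, iov, 2);   /* one syscall, buffers filled in order */
}
```

An "RX vector" per VLANClientState would essentially be such an iovec list, handed to the producer before the data arrives.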
From: Marcelo T. <mto...@re...> - 2008-04-19 16:46:38
On Sat, Apr 19, 2008 at 01:22:28PM -0300, Glauber Costa wrote:
>> I've been able to reproduce the problem. Symptoms are that when using
>> NOHZ vcpu0 LAPIC timer is ticking far less than the others (apparently
>> vcpu0 is the only one ticking "correctly"):
>>
>> nohz=on with kvmclock
>> [root@localhost ~]# cat /proc/timer_stats | grep apic
>> 13214,  8590 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>> 13214,  8589 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>> 13211,  8588 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>   389,  8587 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>
>> nohz=off
>>  3253,  8672 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>  2876,  8673 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>  2543,  8674 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>  2179,  8675 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>
>> no-kvmclock
>>  1017,  8808 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>  1577,  8809 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>  1708,  8807 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>  1812,  8806 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>>
>> Glauber will start looking at this next week.
>
> From what Marcelo and I discussed, I think there's a possibility that
> it has marginally something to do with the precision of the clock
> calculation. Gerd's patches address those issues. Can somebody test
> this with those patches (both guest and host), while I'm off?

Haven't seen Gerd's guest patches?
From: Marcelo T. <mto...@re...> - 2008-04-19 16:26:20
On Sat, Apr 19, 2008 at 12:29:47PM -0300, Marcelo Tosatti wrote:
>> I just reproduced this on a UP guest. Were you seeing the exact same
>> stack trace in the guest with kvm-64 ?
>
> I've been able to reproduce the problem. Symptoms are that when using
> NOHZ vcpu0 LAPIC timer is ticking far less than the others (apparently
> vcpu0 is the only one ticking "correctly"):
>
> nohz=on with kvmclock
> [root@localhost ~]# cat /proc/timer_stats | grep apic
> 13214,  8590 qemu-system-x86  apic_mmio_write (apic_timer_fn)
> 13214,  8589 qemu-system-x86  apic_mmio_write (apic_timer_fn)
> 13211,  8588 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>   389,  8587 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>
> nohz=off
>  3253,  8672 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  2876,  8673 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  2543,  8674 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  2179,  8675 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>
> no-kvmclock
>  1017,  8808 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  1577,  8809 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  1708,  8807 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  1812,  8806 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>
> Glauber will start looking at this next week.

Glauber, that printk you suggested has just triggered, but in a
different condition. Guest was working fine (SMP 2-way), then suddenly:

[root@localhost bonnie++-1.03c]# ./bonnie++
You must use the "-u" switch when running as root.
usage: bonnie++ [-d scratch-dir] [-s size(Mb)[:chunk-size(b)]]
                [-n number-to-stat[:max-size[:min-size][:num-directories]]]
                [-m machine-name]
                [-r ram-size-iserial8250: too much work for irq4
n-Mb] [-x number-of-tests]
                [-u uid-to-use:gid-to-use] [-g gid-to-use]
                [-q] [-f] [-b] [-p processes | -y]
Version: 1.03c
[root@localhost bonnie++-1.03c]# ./bonnie++
dirty_portuguese_word: -361322

And there it hung.

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index a3fa587..7785fcc 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -453,6 +453,8 @@ void update_wall_time(void)
 #else
 	offset = clock->cycle_interval;
 #endif
+	if ((s64) offset < 0)
+		printk("...! %lld\n", (s64)offset);
 	clock->xtime_nsec += (s64)xtime.tv_nsec << clock->shift;
 
 	/* normally this loop will run just once, however in the
From: Glauber C. <gl...@gm...> - 2008-04-19 16:22:22
On Sat, Apr 19, 2008 at 12:29 PM, Marcelo Tosatti <mto...@re...> wrote:
> On Mon, Apr 07, 2008 at 06:34:57PM -0300, Marcelo Tosatti wrote:
>> On Mon, Apr 07, 2008 at 01:53:36PM +0200, Nikola Ciprich wrote:
>>> Hi,
>>>
>>> I also tried paravirt clock again in latest git with kvm-65 patch
>>> applied, and problem with cpu-lockups persists:
>>>
>>> [10813.654806] BUG: soft lockup - CPU#0 stuck for 61s! [swapper:0]
>>> [10813.655789] CPU 0:
>>> [10813.656624] Modules linked in: virtio_pci virtio_ring virtio_blk virtio piix dm_snapshot dm_zero dm_mirror dm_mod ide_disk ide_core sd_mod scsi_mod ext3 jbd ehci_hcd ohci_hcd uhci_hcd
>>> [10813.658805] Pid: 0, comm: swapper Not tainted 2.6.25-rc7 #5
>>> [10813.658805] RIP: 0010:[<ffffffff80222ab2>] [<ffffffff80222ab2>] native_safe_halt+0x2/0x10
>>> [10813.658805] RSP: 0018:ffffffff805adf50 EFLAGS: 00000296
>>> [10813.658805] RAX: 000000019b08eeb0 RBX: ffffffff805f5000 RCX: 000000019b08eeb0
>>> [10813.658805] RDX: 0000000000000006 RSI: 00000000356832b0 RDI: ffffffff805adf38
>>> [10813.658805] RBP: 0000000000000da8 R08: 0000000000000000 R09: 0000000000000000
>>> [10813.658805] R10: 0000000000000001 R11: 0000000000000002 R12: ffffffff802228ed
>>> [10813.658805] R13: 00000000000132a0 R14: ffffffff80200bba R15: ffff81000100a280
>>> [10813.658805] FS: 0000000000000000(0000) GS:ffffffff80576000(0000) knlGS:0000000000000000
>>> [10813.658805] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
>>> [10813.658805] CR2: 00007fac0f852000 CR3: 0000000000201000 CR4: 00000000000006e0
>>> [10813.658805] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>> [10813.658805] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>>> [10813.658805]
>>> [10813.658805] Call Trace:
>>> [10813.658805] [<ffffffff8020a55b>] ? default_idle+0x3b/0x70
>>> [10813.658805] [<ffffffff8020a520>] ? default_idle+0x0/0x70
>>> [10813.658805] [<ffffffff8020a60e>] ? cpu_idle+0x7e/0xe0
>>> [10813.658805] [<ffffffff80211630>] ? pda_init+0x30/0xb0
>>>
>>> Can I somehow help to track this one down??
>>
>> Hi Nikola,
>>
>> I just reproduced this on a UP guest. Were you seeing the exact same
>> stack trace in the guest with kvm-64 ?
>
> I've been able to reproduce the problem. Symptoms are that when using
> NOHZ vcpu0 LAPIC timer is ticking far less than the others (apparently
> vcpu0 is the only one ticking "correctly"):
>
> nohz=on with kvmclock
> [root@localhost ~]# cat /proc/timer_stats | grep apic
> 13214,  8590 qemu-system-x86  apic_mmio_write (apic_timer_fn)
> 13214,  8589 qemu-system-x86  apic_mmio_write (apic_timer_fn)
> 13211,  8588 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>   389,  8587 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>
> nohz=off
>  3253,  8672 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  2876,  8673 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  2543,  8674 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  2179,  8675 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>
> no-kvmclock
>  1017,  8808 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  1577,  8809 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  1708,  8807 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>  1812,  8806 qemu-system-x86  apic_mmio_write (apic_timer_fn)
>
> Glauber will start looking at this next week.

From what Marcelo and I discussed, I think there's a possibility that
it has marginally something to do with the precision of the clock
calculation. Gerd's patches address those issues. Can somebody test
this with those patches (both guest and host), while I'm off?

-- 
Glauber Costa.
"Free as in Freedom"
http://glommer.net

"The less confident you are, the more serious you have to act."
From: Marcelo T. <mto...@re...> - 2008-04-19 15:26:51
On Mon, Apr 07, 2008 at 06:34:57PM -0300, Marcelo Tosatti wrote:
> On Mon, Apr 07, 2008 at 01:53:36PM +0200, Nikola Ciprich wrote:
>> Hi,
>>
>> I also tried paravirt clock again in latest git with kvm-65 patch
>> applied, and problem with cpu-lockups persists:
>>
>> [10813.654806] BUG: soft lockup - CPU#0 stuck for 61s! [swapper:0]
>> [10813.655789] CPU 0:
>> [10813.656624] Modules linked in: virtio_pci virtio_ring virtio_blk virtio piix dm_snapshot dm_zero dm_mirror dm_mod ide_disk ide_core sd_mod scsi_mod ext3 jbd ehci_hcd ohci_hcd uhci_hcd
>> [10813.658805] Pid: 0, comm: swapper Not tainted 2.6.25-rc7 #5
>> [10813.658805] RIP: 0010:[<ffffffff80222ab2>] [<ffffffff80222ab2>] native_safe_halt+0x2/0x10
>> [10813.658805] RSP: 0018:ffffffff805adf50 EFLAGS: 00000296
>> [10813.658805] RAX: 000000019b08eeb0 RBX: ffffffff805f5000 RCX: 000000019b08eeb0
>> [10813.658805] RDX: 0000000000000006 RSI: 00000000356832b0 RDI: ffffffff805adf38
>> [10813.658805] RBP: 0000000000000da8 R08: 0000000000000000 R09: 0000000000000000
>> [10813.658805] R10: 0000000000000001 R11: 0000000000000002 R12: ffffffff802228ed
>> [10813.658805] R13: 00000000000132a0 R14: ffffffff80200bba R15: ffff81000100a280
>> [10813.658805] FS: 0000000000000000(0000) GS:ffffffff80576000(0000) knlGS:0000000000000000
>> [10813.658805] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
>> [10813.658805] CR2: 00007fac0f852000 CR3: 0000000000201000 CR4: 00000000000006e0
>> [10813.658805] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> [10813.658805] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> [10813.658805]
>> [10813.658805] Call Trace:
>> [10813.658805] [<ffffffff8020a55b>] ? default_idle+0x3b/0x70
>> [10813.658805] [<ffffffff8020a520>] ? default_idle+0x0/0x70
>> [10813.658805] [<ffffffff8020a60e>] ? cpu_idle+0x7e/0xe0
>> [10813.658805] [<ffffffff80211630>] ? pda_init+0x30/0xb0
>>
>> Can I somehow help to track this one down??
>
> Hi Nikola,
>
> I just reproduced this on a UP guest. Were you seeing the exact same
> stack trace in the guest with kvm-64 ?

I've been able to reproduce the problem. Symptoms are that when using
NOHZ vcpu0 LAPIC timer is ticking far less than the others (apparently
vcpu0 is the only one ticking "correctly"):

nohz=on with kvmclock
[root@localhost ~]# cat /proc/timer_stats | grep apic
13214,  8590 qemu-system-x86  apic_mmio_write (apic_timer_fn)
13214,  8589 qemu-system-x86  apic_mmio_write (apic_timer_fn)
13211,  8588 qemu-system-x86  apic_mmio_write (apic_timer_fn)
  389,  8587 qemu-system-x86  apic_mmio_write (apic_timer_fn)

nohz=off
 3253,  8672 qemu-system-x86  apic_mmio_write (apic_timer_fn)
 2876,  8673 qemu-system-x86  apic_mmio_write (apic_timer_fn)
 2543,  8674 qemu-system-x86  apic_mmio_write (apic_timer_fn)
 2179,  8675 qemu-system-x86  apic_mmio_write (apic_timer_fn)

no-kvmclock
 1017,  8808 qemu-system-x86  apic_mmio_write (apic_timer_fn)
 1577,  8809 qemu-system-x86  apic_mmio_write (apic_timer_fn)
 1708,  8807 qemu-system-x86  apic_mmio_write (apic_timer_fn)
 1812,  8806 qemu-system-x86  apic_mmio_write (apic_timer_fn)

Glauber will start looking at this next week.
From: Muli Ben-Y. <mu...@il...> - 2008-04-19 14:56:14
On Fri, Apr 18, 2008 at 06:56:41PM +0300, Avi Kivity wrote:
> be...@il... wrote:
>> From: Ben-Ami Yassour <be...@il...>
>>
>> Signed-off-by: Ben-Ami Yassour <be...@il...>
>> Signed-off-by: Muli Ben-Yehuda <mu...@il...>
>> ---
>>  libkvm/libkvm.c           | 24 ++++++++----
>>  qemu/hw/pci-passthrough.c | 89 +++++++++++----------------------------------
>>  qemu/hw/pci-passthrough.h |  2 +
>>  3 files changed, 40 insertions(+), 75 deletions(-)
>>
>> diff --git a/libkvm/libkvm.c b/libkvm/libkvm.c
>> index de91328..8c02af9 100644
>> --- a/libkvm/libkvm.c
>> +++ b/libkvm/libkvm.c
>> @@ -400,7 +400,7 @@ void *kvm_create_userspace_phys_mem(kvm_context_t kvm, unsigned long phys_start,
>>  {
>>  	int r;
>>  	int prot = PROT_READ;
>> -	void *ptr;
>> +	void *ptr = NULL;
>>  	struct kvm_userspace_memory_region memory = {
>>  		.memory_size = len,
>>  		.guest_phys_addr = phys_start,
>> @@ -410,16 +410,24 @@ void *kvm_create_userspace_phys_mem(kvm_context_t kvm, unsigned long phys_start,
>>  	if (writable)
>>  		prot |= PROT_WRITE;
>>
>> -	ptr = mmap(NULL, len, prot, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
>> -	if (ptr == MAP_FAILED) {
>> -		fprintf(stderr, "create_userspace_phys_mem: %s", strerror(errno));
>> -		return 0;
>> -	}
>> +	if (len > 0) {
>> +		ptr = mmap(NULL, len, prot, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
>> +		if (ptr == MAP_FAILED) {
>> +			fprintf(stderr, "create_userspace_phys_mem: %s",
>> +				strerror(errno));
>> +			return 0;
>> +		}
>>
>> -	memset(ptr, 0, len);
>> +		memset(ptr, 0, len);
>> +	}
>>
>>  	memory.userspace_addr = (unsigned long)ptr;
>> -	memory.slot = get_free_slot(kvm);
>> +
>> +	if (len > 0)
>> +		memory.slot = get_free_slot(kvm);
>> +	else
>> +		memory.slot = get_slot(phys_start);
>> +
>>  	r = ioctl(kvm->vm_fd, KVM_SET_USER_MEMORY_REGION, &memory);
>>  	if (r == -1) {
>>  		fprintf(stderr, "create_userspace_phys_mem: %s", strerror(errno));
>
> This looks like support for zero-length memory slots? Why is it needed?
> It needs to be in a separate patch.

We need an interface to remove a memslot. When the guest writes to a
directly assigned device's BAR and changes an MMIO region, we need to
remove the old slot and establish a new one. The kernel side treats a
0-sized memslot as "remove", but the userspace side didn't quite handle
it properly.

Personally I would've preferred a proper "remove" interface, rather
than shoehorning it into kvm_create_userspace_phys_mem and a 0-sized
slot. If that's acceptable, we'll put together a patch.

>> diff --git a/qemu/hw/pci-passthrough.c b/qemu/hw/pci-passthrough.c
>> index 7ffcc7b..a5894d9 100644
>> --- a/qemu/hw/pci-passthrough.c
>> +++ b/qemu/hw/pci-passthrough.c
>> @@ -25,18 +25,6 @@ typedef __u64 resource_size_t;
>>  extern kvm_context_t kvm_context;
>>  extern FILE *logfile;
>>
>> -CPUReadMemoryFunc *pt_mmio_read_cb[3] = {
>> -	pt_mmio_readb,
>> -	pt_mmio_readw,
>> -	pt_mmio_readl
>> -};
>> -
>> -CPUWriteMemoryFunc *pt_mmio_write_cb[3] = {
>> -	pt_mmio_writeb,
>> -	pt_mmio_writew,
>> -	pt_mmio_writel
>> -};
>
> There's at least one use case for keeping mmio in userspace:
> reverse-engineering a device driver. So if it doesn't cause too much
> trouble, please keep this an option.

I don't think it's a big deal to support both, although I'm not sure
how useful it would be (especially considering mmiotrace). Did you have
a user-interface for specifying it in mind?

Cheers,
Muli
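The "0-sized memslot means remove" convention Muli describes can be sketched with a toy slot table. The structure below is illustrative, not kvm's actual one — the point is only the overloaded set/remove semantics:

```c
#include <assert.h>

/* Toy memory-slot table: setting a slot with npages == 0 removes it,
 * mirroring the convention the kernel side of KVM uses for
 * KVM_SET_USER_MEMORY_REGION. */

#define MAX_SLOTS 8

struct memslot {
    unsigned long base;     /* guest-physical base page */
    unsigned long npages;   /* 0 means "slot not present" */
};

static struct memslot slots[MAX_SLOTS];

void set_slot(int idx, unsigned long base, unsigned long npages)
{
    slots[idx].base   = npages ? base : 0;
    slots[idx].npages = npages;   /* npages == 0 clears the slot */
}

int slot_present(int idx)
{
    return slots[idx].npages != 0;
}
```

The BAR-update case then becomes: remove the old slot (set with size 0), then set a fresh slot at the new guest-physical address — which is also why a dedicated "remove" call would read more clearly.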
From: Muli Ben-Y. <mu...@il...> - 2008-04-19 14:34:29
On Fri, Apr 18, 2008 at 06:50:03PM +0300, Avi Kivity wrote:
> be...@il... wrote:
>> From: Ben-Ami Yassour <be...@il...>
>>
>> Signed-off-by: Ben-Ami Yassour <be...@il...>
>> Signed-off-by: Muli Ben-Yehuda <mu...@il...>
>> ---
>>  arch/x86/kvm/mmu.c         | 59 +++++++++++++++++++++++++++--------------
>>  arch/x86/kvm/paging_tmpl.h | 19 +++++++++----
>>  include/linux/kvm_host.h   |  2 +-
>>  virt/kvm/kvm_main.c        | 17 +++++++++++-
>>  4 files changed, 69 insertions(+), 28 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 078a7f1..c89029d 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -112,6 +112,8 @@ static int dbg = 1;
>>  #define PT_FIRST_AVAIL_BITS_SHIFT 9
>>  #define PT64_SECOND_AVAIL_BITS_SHIFT 52
>>
>> +#define PT_SHADOW_IO_MARK (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)
>
> Please rename this PT_SHADOW_MMIO_MASK.

Sure thing.

>>  #define VALID_PAGE(x) ((x) != INVALID_PAGE)
>>
>>  #define PT64_LEVEL_BITS 9
>> @@ -237,6 +239,9 @@ static int is_dirty_pte(unsigned long pte)
>>
>>  static int is_rmap_pte(u64 pte)
>>  {
>> +	if (pte & PT_SHADOW_IO_MARK)
>> +		return false;
>> +
>>  	return is_shadow_present_pte(pte);
>>  }
>
> Why avoid rmap on mmio pages? Sure it's unnecessary work, but having
> less cases improves overall reliability.

The rmap functions already have a check to bail out if the pte is not
an rmap pte, so in that sense, we aren't adding a new case for the code
to handle, just adding direct MMIO ptes to the existing list of
non-rmap ptes.

> You can use pfn_valid() in gfn_to_pfn() and kvm_release_pfn_*() to
> conditionally update the page refcounts.

Since rmap isn't useful for direct MMIO ptes, doesn't it make more
sense to bail out early rather than in the bowels of the rmap code?

Chag Same'ach,
Muli
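The idea behind PT_SHADOW_IO_MARK is to tag a shadow PTE with one of the software-available bits (bit 9 is the first such bit in an x86 PTE) so direct-MMIO mappings can be recognized later. A minimal sketch of the bit manipulation, with helper names that are illustrative rather than kvm's:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 9 is the first software-available bit in an x86 page-table
 * entry; the hardware walker ignores bits 9-11, so the hypervisor is
 * free to use them as markers. */
#define PT_FIRST_AVAIL_BITS_SHIFT 9
#define PT_SHADOW_IO_MARK (1ULL << PT_FIRST_AVAIL_BITS_SHIFT)

/* Tag a shadow PTE as mapping direct MMIO. */
static uint64_t mark_mmio_pte(uint64_t pte)
{
    return pte | PT_SHADOW_IO_MARK;
}

/* Later, e.g. in is_rmap_pte(), recognize such PTEs and skip them. */
static int is_mmio_pte(uint64_t pte)
{
    return (pte & PT_SHADOW_IO_MARK) != 0;
}
```

Because the mark lives in a bit the CPU ignores, the tagged PTE still translates normally; only the shadow-MMU bookkeeping (rmap, refcounting) changes behavior.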
From: Ian K. <bl...@bl...> - 2008-04-19 10:18:09
Ian Kirk wrote:
> 2.6.24.4-64.fc8PAE noexec=off:
>
> Using normal F8 modules, qemu-kvm dies in the same way.
>
> 2. Loading the kernel modules that come with kvm-66:
>
> Against 2.6.24.4-64.fc8 it works.

2.6.24.4-64.fc8PAE with the kvm-66 module seems to work OK. I guess
that solves that little problem?

I intend to upgrade my other AMD machine to 4GB (which is at home, as
opposed to at work) so I can properly test the various combinations to
check it's "fixed" in kvm-66 (or somewhere after the bundled Fedora 8
one).

Ian.
From: Alex D. <ale...@ya...> - 2008-04-19 04:19:36
--- On Fri, 4/18/08, Avi Kivity <av...@qu...> wrote:
> From: Avi Kivity <av...@qu...>
> Subject: Re: [kvm-devel] Second KVM process hangs eating 80-100% CPU on host during startup
> To: "Alex Davis" <ale...@ya...>
> Cc: kvm...@li...
> Date: Friday, April 18, 2008, 12:12 PM
>
> Alex Davis wrote:
>> Host software:
>> Linux 2.6.24.4
>> KVM 65 (I am using the kernel modules from this release).
>> X11 7.2 from Xorg
>> SDL 1.2.13
>> GCC 4.1.1
>> Glibc 2.4
>>
>> Host hardware:
>> Asus P5B Deluxe (P965 chipset based) motherboard
>> 4 GB RAM
>> Intel E6700 CPU
>>
>> Guest software:
>> Slackware 12.0 installed from CD-ROM.
>>
>> Command used to start first KVM instance:
>> /usr/local/bin/qemu-system-x86_64 -hda /spare/vdisk1.img -cdrom /dev/cdrom -boot c -m 384 -net nic,macaddr=DE:AD:BE:EF:11:29 -net tap,ifname=tap0,script=no &
>>
>> Command used to start second KVM instance:
>> /usr/local/bin/qemu-system-x86_64 -hda /spare/vdisk2.img -cdrom /dev/cdrom -boot c -m 384 -net nic,macaddr=DE:AD:BE:EF:11:30 -net tap,ifname=tap1,script=no &
>>
>> tap0 and tap1 are bridged on the host. The guest OS was installed on
>> /spare/vdisk1.img, which was initially created by
>> /usr/local/bin/qemu-img create -f qcow /spare/vdisk.img 10G
>> After the guest installation completed, vdisk1 was copied to vdisk2.
>>
>> The second instance always stops after printing
>> Checking if the processor honours the WP bit even in supervisor mode... Ok.
>> It stays hung until I press the return key in the first instance;
>> sometimes clicking in another X window will wake it up as well.
>>
>> This is a test machine so I can test patches (almost) at will.
>
> Strange. Does pinning each guest to a different cpu help (use 'taskset
> 1 qemu ... vdisk1.img &', 'taskset 2 qemu ... vdisk2.img')

Some additional information: I upgraded the guest to 2.6.25, and added
some printk's to init_32.c and init/calibrate.c in the kernel source
tree. Here's the output from dmesg for the guest boot:

[    0.004000] Checking if this processor honours the WP bit even in supervisor mode...Ok.
[    0.004000] Before cpa_init.
[    0.004000] CPA: page pool initialized 1 of 1 pages preallocated
[    0.004000] After cpa_init.
[    0.004000] After pagealloc
[    0.004000] After cpu_hotplug_init
[    0.004000] After kmem_cache_init
[    0.004000] After setup_percpu_pageset
[    0.004000] After numa_policy_init
[    0.004005] After late_time_init
[    0.004622] Before read_current_timer(&pre_start)
[    0.005314] After read_current_timer()
[    0.006493] Before read_current_timer(&start)
[   16.065027] Before read_current_timer(&post_start)
[   16.065753] Before read_current_timer(&post_end)
[   16.066437] Before read_current_timer(&start)
[   16.073007] Before read_current_timer(&post_start)
[   16.081007] Before read_current_timer(&post_end)
[   16.081703] Before read_current_timer(&start)
[   16.089008] Before read_current_timer(&post_start)
[   16.097008] Before read_current_timer(&post_end)
[   16.097695] Before read_current_timer(&start)
[   16.105010] Before read_current_timer(&post_start)
[   16.113009] Before read_current_timer(&post_end)
[   16.113697] Before read_current_timer(&start)
[   16.121010] Before read_current_timer(&post_start)
[   16.129010] Before read_current_timer(&post_end)
[   16.129697] calibrate_delay_direct() failed to get a good estimate for loops_per_jiffy.
[   16.129698] Probably due to long platform interrupts. Consider using "lpj=" boot option.
[   16.132180] Calibrating delay loop... 5308.41 BogoMIPS (lpj=10616832)
[   16.237019] After calibrate_delay

Notice how the time jumped from about 0 seconds to 16 seconds. That's
where I woke it up by typing in another window. The code seems to be
hanging in the call to read_current_timer(&start) in function
calibrate_delay_direct in init/calibrate.c. Also notice that
calibrate_delay_direct() failed.
From: Liu, E. E <eri...@in...> - 2008-04-19 04:04:41
David S. Ahern wrote:
> inline.
>
> Liu, Eric E wrote:
>> David S. Ahern wrote:
>>> I am trying to add a trace marker and the data is coming out all
>>> 0's. e.g.,
>>>
>>> 0 (+ 0) PTE_WRITE vcpu = 0x00000001 pid = 0x0000240d [ gpa = 0x00000000 00000000 gpte = 0x00000000 00000000 ]
>>>
>>> Patch is attached. I know the data is non-zero as I added an if
>>> check before calling the trace to only do the trace if the data is
>>> non-zero. Anyone have suggestions on what I am missing?
>>>
>>> thanks,
>>>
>>> david
>>
>> Hi, david
>> I read your patch and find this:
>> +#define KVM_TRC_PTE_WRITE   (KVM_TRC_HANDLER + 0x15)
>> +#define KVM_TRC_PTE_FLOODED (KVM_TRC_HANDLER + 0x16)
>> but in your formats file
>> +0x00020015 %(tsc)d (+%(reltsc)8d) PTE_FLOODED vcpu = 0x%(vcpu)08x pid = 0x%(pid)08x
>> +0x00020016 %(tsc)d (+%(reltsc)8d) PTE_WRITE vcpu = 0x%(vcpu)08x pid = 0x%(pid)08x [ gpa = 0x%(2)08x %(1)08x gpte = 0x%(4)08x %(3)08x ]
>> You mistook the value, right?
>
> Which value? Do you mean the 0x00020015 and 0x00020016?
>
> kvm.h shows KVM_TRC_APIC_ACCESS as KVM_TRC_HANDLER + 0x14. I added the
> PTE_WRITE and PTE_FLOODED after that in kvm.h with the values 0x15 and
> 0x16. Then in the formats file it shows APIC_ACCESS as 0x00020014, and
> I added the new PTE entries after that as 20015 and 20016. The
> kvmtrace_format tool does show those lines in its output which makes
> me believe these values are ok.

I mean the value of PTE_WRITE you write in the formats file
(0x00020016) should be the same as the KVM_TRC_PTE_WRITE you define in
kvm.h, but now it is 0x00020015. If not, what you get in the text file
will be disordered.

> What has me puzzled is the 0 values for gpa and gpte. I believe they
> are not 0 because I added "if (gpa || gpte)" before the
> KVMTRACE_4D(PTE_WRITE, ...) line and the lines still show up in the
> trace output.
>
> david
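The mismatch Eric describes is plain arithmetic. KVM_TRC_HANDLER's base value of 0x00020000 is assumed here from the formats file itself, where APIC_ACCESS (KVM_TRC_HANDLER + 0x14) is keyed 0x00020014:

```c
#include <assert.h>

/* Trace-event IDs as defined in the kvm.h hunk under discussion.
 * KVM_TRC_HANDLER's value is inferred from the formats file, where
 * APIC_ACCESS (KVM_TRC_HANDLER + 0x14) appears as key 0x00020014. */
#define KVM_TRC_HANDLER     0x00020000
#define KVM_TRC_PTE_WRITE   (KVM_TRC_HANDLER + 0x15)
#define KVM_TRC_PTE_FLOODED (KVM_TRC_HANDLER + 0x16)

/* kvmtrace_format matches each logged event ID against the hex key at
 * the start of a formats-file line, so PTE_WRITE's format line must be
 * keyed 0x00020015 — the patch keyed it 0x00020016, which is actually
 * PTE_FLOODED's ID.  Hence the swapped/disordered output. */
```

So the fix David applied ("I had the 2 new ones backwards in the formats file") is exactly swapping the two keys.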
From: David S. A. <da...@ci...> - 2008-04-19 03:53:27
inline.

Liu, Eric E wrote:
> David S. Ahern wrote:
>> I am trying to add a trace marker and the data is coming out all 0's.
>> e.g.,
>>
>> 0 (+ 0) PTE_WRITE vcpu = 0x00000001 pid = 0x0000240d [ gpa = 0x00000000 00000000 gpte = 0x00000000 00000000 ]
>>
>> Patch is attached. I know the data is non-zero as I added an if check
>> before calling the trace to only do the trace if the data is
>> non-zero. Anyone have suggestions on what I am missing?
>>
>> thanks,
>>
>> david
>
> Hi, david
> I read your patch and find this:
> +#define KVM_TRC_PTE_WRITE   (KVM_TRC_HANDLER + 0x15)
> +#define KVM_TRC_PTE_FLOODED (KVM_TRC_HANDLER + 0x16)
> but in your formats file
> +0x00020015 %(tsc)d (+%(reltsc)8d) PTE_FLOODED vcpu = 0x%(vcpu)08x pid = 0x%(pid)08x
> +0x00020016 %(tsc)d (+%(reltsc)8d) PTE_WRITE vcpu = 0x%(vcpu)08x pid = 0x%(pid)08x [ gpa = 0x%(2)08x %(1)08x gpte = 0x%(4)08x %(3)08x ]
> You mistook the value, right?

Which value? Do you mean the 0x00020015 and 0x00020016?

kvm.h shows KVM_TRC_APIC_ACCESS as KVM_TRC_HANDLER + 0x14. I added the
PTE_WRITE and PTE_FLOODED after that in kvm.h with the values 0x15 and
0x16. Then in the formats file it shows APIC_ACCESS as 0x00020014, and
I added the new PTE entries after that as 20015 and 20016. The
kvmtrace_format tool does show those lines in its output which makes me
believe these values are ok.

What has me puzzled is the 0 values for gpa and gpte. I believe they
are not 0 because I added "if (gpa || gpte)" before the
KVMTRACE_4D(PTE_WRITE, ...) line and the lines still show up in the
trace output.

david
From: Liu, E. E <eri...@in...> - 2008-04-19 02:12:32
David S. Ahern wrote:
> I am trying to add a trace marker and the data is coming out all 0's.
> e.g.,
>
> 0 (+ 0) PTE_WRITE vcpu = 0x00000001 pid = 0x0000240d [ gpa = 0x00000000 00000000 gpte = 0x00000000 00000000 ]
>
> Patch is attached. I know the data is non-zero as I added an if check
> before calling the trace to only do the trace if the data is non-zero.
> Anyone have suggestions on what I am missing?
>
> thanks,
>
> david

Hi, david

I read your patch and find this:

+#define KVM_TRC_PTE_WRITE   (KVM_TRC_HANDLER + 0x15)
+#define KVM_TRC_PTE_FLOODED (KVM_TRC_HANDLER + 0x16)

but in your formats file

+0x00020015 %(tsc)d (+%(reltsc)8d) PTE_FLOODED vcpu = 0x%(vcpu)08x pid = 0x%(pid)08x
+0x00020016 %(tsc)d (+%(reltsc)8d) PTE_WRITE vcpu = 0x%(vcpu)08x pid = 0x%(pid)08x [ gpa = 0x%(2)08x %(1)08x gpte = 0x%(4)08x %(3)08x ]

You mistook the value, right?
From: Gerd v. E. <li...@eg...> - 2008-04-18 22:58:07
Hi Marcelo,

> Use the asynchronous version of block IO functions, otherwise guests
> can block for long periods of time waiting for the operations to
> complete.

just tried these patches. Results are similar to the last ones: the
guest comes up fine but after running 2 or 3 minutes of bonnie++ the
guest-vm hangs. This time I used screen on the guest console to try
switching to another process - hanging too. Here is the kvm_stat --once
output:

 efer_reload                  0       0
 exits                  3325114     196
 fpu_reload              185671       0
 halt_exits               18692      29
 halt_wakeup              24807       0
 host_state_reload      1387308      59
 insn_emulation         1924291     130
 insn_emulation_fail          0       0
 invlpg                       0       0
 io_exits                350020      30
 irq_exits               225446       3
 irq_window                   0       0
 mmio_exits              917561       0
 mmu_cache_miss           55436       0
 mmu_flooded              64416       0
 mmu_pde_zapped           46914       0
 mmu_pte_updated         565547       0
 mmu_pte_write           650181       0
 mmu_recycled                 0       0
 mmu_shadow_zapped        64416       0
 pf_fixed               1229672       0
 pf_guest                 94338       0
 remote_tlb_flush             0       0
 request_irq                  0       0
 signal_exits                 1       0
 tlb_flush               602678       4

Kind regards,

Gerd

-- 
Address (better: trap) for people I really don't want to get mail from:
james(at)cactusamerica.com
From: Alex D. <ale...@ya...> - 2008-04-18 22:44:22
|
--- On Fri, 4/18/08, Avi Kivity <av...@qu...> wrote:

> From: Avi Kivity <av...@qu...>
> Subject: Re: [kvm-devel] Second KVM process hangs eating 80-100% CPU on host during startup
> To: "Alex Davis" <ale...@ya...>
> Cc: kvm...@li...
> Date: Friday, April 18, 2008, 12:12 PM
>
> Alex Davis wrote:
> > Host software:
> > Linux 2.6.24.4
> > KVM 65 (I am using the kernel modules from this release).
> > X11 7.2 from Xorg
> > SDL 1.2.13
> > GCC 4.1.1
> > Glibc 2.4
> >
> > Host hardware:
> > Asus P5B Deluxe (P965 chipset based) motherboard
> > 4 GB RAM
> > Intel E6700 CPU
> >
> > Guest software:
> > Slackware 12.0 installed from CD-ROM.
> >
> > Command used to start first KVM instance:
> > /usr/local/bin/qemu-system-x86_64 -hda /spare/vdisk1.img -cdrom /dev/cdrom -boot c -m 384 -net
> > nic,macaddr=DE:AD:BE:EF:11:29 -net tap,ifname=tap0,script=no &
> >
> > Command used to start second KVM instance:
> > /usr/local/bin/qemu-system-x86_64 -hda /spare/vdisk2.img -cdrom /dev/cdrom -boot c -m 384 -net
> > nic,macaddr=DE:AD:BE:EF:11:30 -net tap,ifname=tap1,script=no &
> >
> > tap0 and tap1 are bridged on the host. The guest OS was installed on /spare/vdisk1.img,
> > which was initially created by /usr/local/bin/qemu-img create -f qcow /spare/vdisk.img 10G
> > After the guest installation completed, vdisk1 was copied to vdisk2.
> >
> > The second instance always stops after printing
> > Checking if the processor honours the WP bit even in supervisor mode... Ok.
> > It stays hung until I press the return key in the first instance; sometimes clicking in another X
> > window will wake it up as well.
> >
> > This is a test machine so I can test patches (almost) at will.
>
> Strange. Does pinning each guest to a different cpu help (use 'taskset
> 1 qemu ... vdisk1.img &', 'taskset 2 qemu ... vdisk2.img')?

taskset made no difference. Upgrading to kvm-66 didn't help either.

Any sufficiently difficult bug is indistinguishable from a feature.
|
From: David S. A. <da...@ci...> - 2008-04-18 22:40:46
|
I am trying to add a trace marker and the data is coming out all 0's. e.g.,

0 (+ 0) PTE_WRITE vcpu = 0x00000001 pid = 0x0000240d [ gpa = 0x00000000 00000000 gpte = 0x00000000 00000000 ]

Patch is attached. I know the data is non-zero as I added an if check before
calling the trace to only do the trace if the data is non-zero. Anyone have
suggestions on what I am missing?

thanks,

david
|
From: Gerd v. E. <li...@eg...> - 2008-04-18 22:32:49
|
Hi Marcelo,

thanks for the quick reply.

> When the hang happens, can you run kvm-stat --once (script can be found
> kvm-66 directory) and paste the result?

efer_reload                  0       0
exits                  4943909    2036
fpu_reload              222178       0
halt_exits              896464     999
halt_wakeup              17690       0
host_state_reload      2279013    1027
insn_emulation         2640886    1001
insn_emulation_fail          0       0
invlpg                       0       0
io_exits                396350      28
irq_exits               200020       1
irq_window                   0       0
mmio_exits              900583       0
mmu_cache_miss           54441       0
mmu_flooded              63313       0
mmu_pde_zapped           46114       0
mmu_pte_updated         554813       0
mmu_pte_write           639377       0
mmu_recycled                 0       0
mmu_shadow_zapped        63313       0
pf_fixed               1205697       0
pf_guest                 92134       0
remote_tlb_flush             1       0
request_irq                  0       0
signal_exits                 1       0
tlb_flush               609240       5

> Can you confirm that reverting the patch fixes it?

It seems to be some kind of race condition: it doesn't always hang the system
at the same position, but at some point it probably brings the io to a halt.
I just got it to boot, but when running bonnie++ it halted at some time. The
kvm_stat output above is from that one. I was still able to type in stuff at
the console but the console couldn't break the system call bonnie was in.

Booting a qemu-kvm without the patch completely fixes these problems.

Kind regards,

Gerd

--
Address (better: trap) for people I really don't want to get mail from:
ja...@ca...
|
From: Marcelo T. <mto...@re...> - 2008-04-18 22:25:27
|
virtio-blk should not use synchronous requests, as that can block vcpus
outside of guest mode for large periods of time for no reason. The generic
block layer could complete AIO's before re-entering guest mode, so that
cached reads and writes can be reported ASAP; a job for the block layer.

Signed-off-by: Marcelo Tosatti <mto...@re...>

Index: kvm-userspace.aio/qemu/hw/virtio-blk.c
===================================================================
--- kvm-userspace.aio.orig/qemu/hw/virtio-blk.c
+++ kvm-userspace.aio/qemu/hw/virtio-blk.c
@@ -77,54 +77,117 @@ static VirtIOBlock *to_virtio_blk(VirtIO
     return (VirtIOBlock *)vdev;
 }
 
+typedef struct VirtIOBlockReq
+{
+    VirtIODevice *vdev;
+    VirtQueue *vq;
+    struct iovec in_sg_status;
+    unsigned int pending;
+    unsigned int len;
+    unsigned int elem_idx;
+    int status;
+} VirtIOBlockReq;
+
+static void virtio_blk_rw_complete(void *opaque, int ret)
+{
+    VirtIOBlockReq *req = opaque;
+    struct virtio_blk_inhdr *in;
+    VirtQueueElement elem;
+
+    req->status |= ret;
+    if (--req->pending > 0)
+        return;
+
+    elem.index = req->elem_idx;
+    in = (void *)req->in_sg_status.iov_base;
+
+    in->status = req->status ? VIRTIO_BLK_S_IOERR : VIRTIO_BLK_S_OK;
+    virtqueue_push(req->vq, &elem, req->len);
+    virtio_notify(req->vdev, req->vq);
+    qemu_free(req);
+}
+
 static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIOBlock *s = to_virtio_blk(vdev);
     VirtQueueElement elem;
+    VirtIOBlockReq *req;
     unsigned int count;
 
     while ((count = virtqueue_pop(vq, &elem)) != 0) {
        struct virtio_blk_inhdr *in;
        struct virtio_blk_outhdr *out;
-       unsigned int wlen;
        off_t off;
        int i;
 
+       /*
+        * FIXME: limit the number of in-flight requests
+        */
+       req = qemu_malloc(sizeof(VirtIOBlockReq));
+       if (!req)
+           return;
+       memset(req, 0, sizeof(*req));
+       memcpy(&req->in_sg_status, &elem.in_sg[elem.in_num - 1],
+              sizeof(req->in_sg_status));
+       req->vdev = vdev;
+       req->vq = vq;
+       req->elem_idx = elem.index;
+
        out = (void *)elem.out_sg[0].iov_base;
        in = (void *)elem.in_sg[elem.in_num - 1].iov_base;
        off = out->sector;
 
        if (out->type & VIRTIO_BLK_T_SCSI_CMD) {
-           wlen = sizeof(*in);
+           unsigned int len = sizeof(*in);
+
            in->status = VIRTIO_BLK_S_UNSUPP;
+           virtqueue_push(vq, &elem, len);
+           virtio_notify(vdev, vq);
+           qemu_free(req);
+
        } else if (out->type & VIRTIO_BLK_T_OUT) {
-           wlen = sizeof(*in);
+           req->pending = elem.out_num - 1;
 
            for (i = 1; i < elem.out_num; i++) {
-               bdrv_write(s->bs, off,
+               bdrv_aio_write(s->bs, off,
                           elem.out_sg[i].iov_base,
-                          elem.out_sg[i].iov_len / 512);
+                          elem.out_sg[i].iov_len / 512,
+                          virtio_blk_rw_complete,
+                          req);
                off += elem.out_sg[i].iov_len / 512;
+               req->len += elem.out_sg[i].iov_len;
            }
-           in->status = VIRTIO_BLK_S_OK;
        } else {
-           wlen = sizeof(*in);
+           req->pending = elem.in_num - 1;
 
            for (i = 0; i < elem.in_num - 1; i++) {
-               bdrv_read(s->bs, off,
+               bdrv_aio_read(s->bs, off,
                          elem.in_sg[i].iov_base,
-                         elem.in_sg[i].iov_len / 512);
+                         elem.in_sg[i].iov_len / 512,
+                         virtio_blk_rw_complete,
+                         req);
                off += elem.in_sg[i].iov_len / 512;
-               wlen += elem.in_sg[i].iov_len;
+               req->len += elem.in_sg[i].iov_len;
            }
-
-           in->status = VIRTIO_BLK_S_OK;
        }
-
-       virtqueue_push(vq, &elem, wlen);
-       virtio_notify(vdev, vq);
     }
+
+    /*
+     * FIXME: Want to check for completions before returning to guest mode,
+     * so cached reads and writes are reported as quickly as possible. But
+     * that should be done in the generic block layer.
+     */
+}
+
+static void virtio_blk_reset(VirtIODevice *vdev)
+{
+    VirtIOBlock *s = to_virtio_blk(vdev);
+
+    /*
+     * This should cancel pending requests, but can't do nicely until there
+     * are per-device request lists.
+     */
+    qemu_aio_flush();
 }
 
 static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
@@ -156,6 +219,7 @@ void *virtio_blk_init(PCIBus *bus, uint1
 
     s->vdev.update_config = virtio_blk_update_config;
     s->vdev.get_features = virtio_blk_get_features;
+    s->vdev.reset = virtio_blk_reset;
     s->bs = bs;
 
     bs->devfn = s->vdev.pci_dev.devfn;
--
|
From: Marcelo T. <mto...@re...> - 2008-04-18 22:25:24
|
So drivers can do whatever necessary on reset.

Signed-off-by: Marcelo Tosatti <mto...@re...>

Index: kvm-userspace.aio/qemu/hw/virtio.c
===================================================================
--- kvm-userspace.aio.orig/qemu/hw/virtio.c
+++ kvm-userspace.aio/qemu/hw/virtio.c
@@ -166,6 +166,9 @@ void virtio_reset(void *opaque)
     VirtIODevice *vdev = opaque;
     int i;
 
+    if (vdev->reset)
+        vdev->reset(vdev);
+
     vdev->features = 0;
     vdev->queue_sel = 0;
     vdev->status = 0;
Index: kvm-userspace.aio/qemu/hw/virtio.h
===================================================================
--- kvm-userspace.aio.orig/qemu/hw/virtio.h
+++ kvm-userspace.aio/qemu/hw/virtio.h
@@ -119,6 +119,7 @@ struct VirtIODevice
     uint32_t (*get_features)(VirtIODevice *vdev);
     void (*set_features)(VirtIODevice *vdev, uint32_t val);
     void (*update_config)(VirtIODevice *vdev, uint8_t *config);
+    void (*reset)(VirtIODevice *vdev);
 
     VirtQueue vq[VIRTIO_PCI_QUEUE_MAX];
 };
--
|
From: Marcelo T. <mto...@re...> - 2008-04-18 22:25:22
|
Use the asynchronous version of block IO functions, otherwise guests can block for long periods of time waiting for the operations to complete. -- |
From: Jeremy F. <je...@go...> - 2008-04-18 22:23:45
|
Gerd Hoffmann wrote:
> I'm looking at the guest side of the issue right now, trying to identify
> common code, and while doing so noticed that xen does the
> version-check-loop in both get_time_values_from_xen(void) and
> xen_clocksource_read(void), and I can't see any obvious reason for that.
> The loop in xen_clocksource_read(void) is not needed IMHO. Can I drop it?

No. The get_nsec_offset() needs to be atomic with respect to the
get_time_values() parameters. There could be a loopless __get_time_values()
for use in this case, but given that it almost never loops, I don't think
it's worthwhile.

    J
|
From: Marcelo T. <mto...@re...> - 2008-04-18 21:45:54
|
Hi Gerd,

On Fri, Apr 18, 2008 at 11:27:58PM +0200, Gerd von Egidy wrote:
> Hi Marcelo,
>
> > > > http://www.mail-archive.com/kvm...@li.../msg14732.html
> > >
> > > I tried it this evening with kvm 66 - which should include your patch,
> > > right?
> >
> > No its not included. The issue is being worked on.
>
> my bad, sorry.
>
> Now I know I really have that patch: qemu-kvm hangs :(
>
> I was trying kvm 66 with only the patch listed above applied on an otherwise
> perfectly working vm with virtio_blk root partition:
>
> Last line of the booting kernel in my vnc window:
>
> Serial: 8250/16550 driver $Revision 1.90... (you know the rest)

When the hang happens, can you run kvm-stat --once (script can be found in
the kvm-66 directory) and paste the result?

Can you confirm that reverting the patch fixes it?

> an strace of the qemu-kvm gave the following in rapid succession:
>
> clock_gettime(CLOCK_MONOTONIC, {2565, 306799672}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307065342}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307354930}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307618803}) = 0
> clock_gettime(CLOCK_MONOTONIC, {2565, 307886312}) = 0
> timer_gettime(0, {it_interval={0, 0}, it_value={0, 0}}) = 0
> timer_settime(0, 0, {it_interval={0, 0}, it_value={0, 33000000}}, NULL) = 0
> rt_sigtimedwait([USR1 USR2 ALRM IO], {si_signo=SIGALRM, si_code=SI_TIMER,
> si_pid=0, si_uid=0, si_value={int=0, ptr=0}}, 0xbfe5af88, 8) = 14
> rt_sigaction(SIGALRM, NULL, {0x804d8f8, ~[KILL STOP RTMIN RT_1], 0}, 8) = 0

This won't help much.
|