From: Avi K. <av...@qu...> - 2007-11-07 17:29:37
|
If you're having trouble on AMD systems, please try this out.

Changes from kvm-50:
- fix some x86 emulator one-byte insns (fixes W2K3 installer again)
- fix host hangs with NMI watchdog on AMD
- fix guest SMP on AMD
- fix dirty page tracking when clearing a guest page (Dor Laor)
- more portability work (Hollis Blanchard, Jerone Young)
- fix FlexPriority with guest smp (Sheng Yang)
- improve rpm specfile (Akio Takebe, me)
- fix external module vs portability work (Andrea Arcangeli)
- remove elpin bios due to license violation
- testsuite shutdown pio port
- don't advertise svm on the guest
- fix reset with kernel apic (Markus Rechberger)

Notes:
If you use the modules bundled with kvm-51, you can use any version of Linux from 2.6.9 upwards.
If you use the modules bundled with Linux 2.6.20, you need to use kvm-12.
If you use the modules bundled with Linux 2.6.21, you need to use kvm-17.
Modules from Linux 2.6.22 and up will work with any kvm version from kvm-22. Some features may only be available in newer releases.
For best performance, use Linux 2.6.23-rc2 or later as the host.

http://kvm.qumranet.com
|
From: Haydn S. <hay...@gm...> - 2007-11-07 19:38:13
|
First, thank you for the new release of kvm. I have a few problems to report with kvm-51.

1. When running an existing winxp ACPI multiprocessor HAL with -smp 2, sometimes it will hang on boot.
2. This may not be a major problem, but cpu usage is a little higher when idle on release 51 than 50. It was very low on 50, the lowest I've seen in a long time.
3. For me personally, the best performing release to date is release 50.

Regards

Haydn

Avi Kivity wrote:
> If you're having trouble on AMD systems, please try this out.
>
> Changes from kvm-50:
> - fix some x86 emulator one-byte insns (fixes W2K3 installer again)
> - fix host hangs with NMI watchdog on AMD
> - fix guest SMP on AMD
> - fix dirty page tracking when clearing a guest page (Dor Laor)
> - more portability work (Hollis Blanchard, Jerone Young)
> - fix FlexPriority with guest smp (Sheng Yang)
> - improve rpm specfile (Akio Takebe, me)
> - fix external module vs portability work (Andrea Arcangeli)
> - remove elpin bios due to license violation
> - testsuite shutdown pio port
> - don't advertise svm on the guest
> - fix reset with kernel apic (Markus Rechberger)
>
> Notes:
> If you use the modules bundled with kvm-51, you can use any version
> of Linux from 2.6.9 upwards.
> If you use the modules bundled with Linux 2.6.20, you need to use
> kvm-12.
> If you use the modules bundled with Linux 2.6.21, you need to use
> kvm-17.
> Modules from Linux 2.6.22 and up will work with any kvm version from
> kvm-22. Some features may only be available in newer releases.
> For best performance, use Linux 2.6.23-rc2 or later as the host.
>
> http://kvm.qumranet.com
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems? Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >> http://get.splunk.com/
> _______________________________________________
> kvm-devel mailing list
> kvm...@li...
> https://lists.sourceforge.net/lists/listinfo/kvm-devel
|
From: Amit S. <ami...@qu...> - 2007-11-07 19:49:07
|
On Thursday 08 November 2007 01:05:32 Haydn Solomon wrote:
> First, thank you for the new release of kvm. I have a few problems to
> report with kvm-51.
>
> 1. When running an existing winxp ACPI multiprocessor HAL with -smp 2,
> sometimes it will hang on boot.

You mean the guest hangs, right? What's your host system?

> 2. This may not be a major problem but cpu usage is a little higher when
> idle on release 51 than 50. It was very low on 50, the lowest I've seen
> in a long time.
> 3. For me personally, the best performing release to date is release 50.
>
> Regards
>
> Haydn
|
From: Haydn S. <hay...@gm...> - 2007-11-07 19:55:51
|
On Nov 7, 2007 2:48 PM, Amit Shah <ami...@qu...> wrote:
> On Thursday 08 November 2007 01:05:32 Haydn Solomon wrote:
> > First, thank you for the new release of kvm. I have a few problems to
> > report with kvm-51.
> >
> > 1. When running an existing winxp ACPI multiprocessor HAL with -smp 2,
> > sometimes it will hang on boot.
>
> You mean the guest hangs, right? What's your host system?

Yes, sorry, I meant my guest hangs. Host details are:

Linux localhost.localdomain 2.6.23.1-21.fc7 #1 SMP Thu Nov 1 20:28:15 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

output of /proc/cpuinfo:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
stepping        : 10
cpu MHz         : 800.000
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm ida
bogomips        : 4387.71
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

Ok, I'm running kvm-50 again and looking at my cpu usage, and it's about the same as what I'm seeing on 51. However, I did upgrade my fedora 7 kernel since running kvm-50, so I think that probably explains the cpu usage part. And it's not that the cpu usage is high by any means, just that it was really low on my previous kernel.

Haydn

> > 2. This may not be a major problem but cpu usage is a little higher when
> > idle on release 51 than 50. It was very low on 50, the lowest I've seen
> > in a long time.
> > 3. For me personally, the best performing release to date is release 50.
> >
> > Regards
> >
> > Haydn
|
From: Avi K. <av...@qu...> - 2007-11-08 05:51:56
|
Haydn Solomon wrote:
> First, thank you for the new release of kvm. I have a few problems to
> report with kvm-51.
>
> 1. When running an existing winxp ACPI multiprocessor HAL with -smp
> 2, sometimes it will hang on boot.

This isn't new. It isn't reported because few people run smp Windows, as prior to FlexPriority/tpr-opt it was unbearably slow.

I'll look into it.

> 2. This may not be a major problem but cpu usage is a little higher
> when idle on release 51 than 50. It was very low on 50, the lowest
> I've seen in a long time.

It shouldn't have changed. What do you see? Can you provide a snapshot of kvm_stat while Windows is idling (a few minutes after load)?

> 3. For me personally, the best performing release to date is release 50.

kvm-51 shouldn't be all that different; it's mostly AMD stability improvements and the FlexPriority smp fix.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
|
From: Haydn S. <hay...@gm...> - 2007-11-08 13:05:56
|
On Nov 8, 2007 12:51 AM, Avi Kivity <av...@qu...> wrote:
> Haydn Solomon wrote:
> > First, thank you for the new release of kvm. I have a few problems to
> > report with kvm-51.
> >
> > 1. When running an existing winxp ACPI multiprocessor HAL with -smp
> > 2, sometimes it will hang on boot.
>
> This isn't new. It isn't reported because few people run smp Windows,
> as prior to FlexPriority/tpr-opt it was unbearably slow.
>
> I'll look into it.

thanks.

> > 2. This may not be a major problem but cpu usage is a little higher
> > when idle on release 51 than 50. It was very low on 50, the lowest
> > I've seen in a long time.
>
> It shouldn't have changed. What do you see? Can you provide a snapshot
> of kvm_stat while Windows is idling (a few minutes after load)?

I went back to kvm-50 and tested, and realized that the load now is about the same as on kvm-51. However, since running release 50 I upgraded my kernel (Fedora 7) and didn't pay attention to load after the upgrade, so I'm pretty sure this cpu usage thing is kernel related.

> > 3. For me personally, the best performing release to date is release 50.
>
> kvm-51 shouldn't be all that different; it's mostly AMD stability
> improvements and the FlexPriority smp fix.

This opinion was really based on cpu usage, as explained above.
|
From: Farkas L. <lf...@bp...> - 2007-11-09 10:25:57
|
Avi Kivity wrote:
> If you're having trouble on AMD systems, please try this out.

this version is worse than kvm-50 :-(
setup:
- host:
  - Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
  - Intel S3000AHV
  - 8GB RAM
  - CentOS-5
  - kernel-2.6.18-8.1.14.el5 x86_64 64bit
- guest-1:
  - CentOS-5
  - kernel-2.6.18-8.1.14.el5 i386 32bit
- guest-2:
  - CentOS-5
  - kernel-2.6.18-8.1.14.el5 x86_64 64bit
- guest-3:
  - Mandrake-9
  - kernel-2.4.19.16mdk-1-1mdk 32bit
- guest-4:
  - Windows XP Professional 32bit

smp is not working on any centos guest (guests hang during boot). even the host crashes. the worst thing is the host crash during boot, with another stack trace which i was not able to log.
i really would like to see some kind of stable version other than kvm-36. i see there is huge ongoing work on ia64, virtio, libkvm and the arch rearrange, but wouldn't it be better to fix these basic issues first? like running two smp guests (32 and 64) on a 64 smp host, just to boot until the login screen.
this is when the guest stops and the host dumps it:
------------------------------------------------------------
Ignoring de-assert INIT to vcpu 1
SIPI to vcpu 1 vector 0x06
SIPI to vcpu 1 vector 0x06
eth0: topology change detected, propagating
eth0: port 3(vnet1) entering forwarding state
Ignoring de-assert INIT to vcpu 2
SIPI to vcpu 2 vector 0x06
SIPI to vcpu 2 vector 0x06
Ignoring de-assert INIT to vcpu 3
SIPI to vcpu 3 vector 0x06
SIPI to vcpu 3 vector 0x06
BUG: soft lockup detected on CPU#1!

Call Trace:
 <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
 [<ffffffff80093493>] update_process_times+0x42/0x68
 [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
 [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
 [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
 <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
 [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
 [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
 [<ffffffff882079ac>] :kvm:kvm_mmu_pte_write+0x1fc/0x330
 [<ffffffff88203a36>] :kvm:emulator_write_emulated_onepage+0x85/0xe5
 [<ffffffff8820c320>] :kvm:x86_emulate_insn+0x2e03/0x407f
 [<ffffffff80015e7e>] __pte_alloc+0x122/0x142
 [<ffffffff88225477>] :kvm_intel:vmcs_readl+0x17/0x1c
 [<ffffffff88203e13>] :kvm:emulate_instruction+0x152/0x290
 [<ffffffff8820716b>] :kvm:kvm_mmu_page_fault+0x5e/0xb4
 [<ffffffff882056dc>] :kvm:kvm_arch_vcpu_ioctl_run+0x28a/0x3a6
 [<ffffffff88202539>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
 [<ffffffff8008515c>] __wake_up_common+0x3e/0x68
 [<ffffffff800626d0>] _spin_unlock_irqrestore+0x8/0x9
 [<ffffffff80117410>] avc_has_perm+0x43/0x55
 [<ffffffff80117f47>] inode_has_perm+0x56/0x63
 [<ffffffff8820245d>] :kvm:kvm_vm_ioctl+0x277/0x290
 [<ffffffff88226dcf>] :kvm_intel:vmx_vcpu_put+0x0/0xa3
 [<ffffffff80117fe8>] file_has_perm+0x94/0xa3
 [<ffffffff8003fca8>] do_ioctl+0x21/0x6b
 [<ffffffff8002faae>] vfs_ioctl+0x248/0x261
 [<ffffffff8004a2b4>] sys_ioctl+0x59/0x78
 [<ffffffff8005b349>] tracesys+0xd1/0xdc
------------------------------------------------------------

-- 
Levente "Si vis pacem para bellum!"
|
From: david a. <da...@ci...> - 2007-11-09 14:59:28
|
I found that I had to move to a newer kernel (2.6.23.1 is what I used) to get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the host kernel.

david

Farkas Levente wrote:
> Avi Kivity wrote:
>> If you're having trouble on AMD systems, please try this out.
>
> this version is worse than kvm-50 :-(
> setup:
> - host:
>   - Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
>   - Intel S3000AHV
>   - 8GB RAM
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-1:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 i386 32bit
> - guest-2:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-3:
>   - Mandrake-9
>   - kernel-2.4.19.16mdk-1-1mdk 32bit
> - guest-4:
>   - Windows XP Professional 32bit
>
> smp is not working on any centos guest (guests hang during boot). even
> the host crashes. the worst thing is the host crash during boot, with
> another stack trace which i was not able to log.
> i really would like to see some kind of stable version other than
> kvm-36. i see there is huge ongoing work on ia64, virtio, libkvm and
> the arch rearrange, but wouldn't it be better to fix these basic issues
> first? like running two smp guests (32 and 64) on a 64 smp host, just to
> boot until the login screen.
|
From: Farkas L. <lf...@bp...> - 2007-11-10 00:22:41
|
that would be really sad, since what i like in kvm is that i don't have to compile a kernel, and so am able to follow upstream kernel updates :-(((

david ahern wrote:
> I found that I had to move to a newer kernel (2.6.23.1 is what I used) to get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the host kernel.
>
> david

-- 
Levente "Si vis pacem para bellum!"
|
From: Avi K. <av...@qu...> - 2007-11-11 09:09:33
|
david ahern wrote:
> I found that I had to move to a newer kernel (2.6.23.1 is what I used) to get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the host kernel.

Might also be a problem with the smp_call_function_single() emulation.

-- 
error compiling committee.c: too many arguments to function
|
From: Avi K. <av...@qu...> - 2007-11-11 09:12:45
|
Farkas Levente wrote:
> Avi Kivity wrote:
>> If you're having trouble on AMD systems, please try this out.
>
> this version is worse than kvm-50 :-(
> [...]
> Call Trace:
>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>  [<ffffffff80093493>] update_process_times+0x42/0x68
>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1

Are you sure this is a regression relative to kvm-50? Please recheck.

-- 
error compiling committee.c: too many arguments to function
|
From: Farkas L. <lf...@bp...> - 2007-11-11 12:58:34
|
Avi Kivity wrote:
> Farkas Levente wrote:
>> this version is worse than kvm-50 :-(
>> [...]
>
> Are you sure this is a regression relative to kvm-50? Please recheck.

i'm not sure this is a regression, since kvm-50 was so terribly slow that we switched back to kvm-46. but i can't catch any stack trace with kvm-50. anyway, even if it's not a regression, it's currently not working with smp.

-- 
Levente "Si vis pacem para bellum!"
|
From: Avi K. <av...@qu...> - 2007-11-11 14:44:30
|
Farkas Levente wrote:
> Avi Kivity wrote:
>> Are you sure this is a regression relative to kvm-50? Please recheck.
>
> i'm not sure this is a regression, since kvm-50 was so terribly slow that
> we switched back to kvm-46. but i can't catch any stack trace with kvm-50.
> anyway, even if it's not a regression, it's currently not working with smp.

I can't reproduce this on a centos system here running 2.6.18-8.el5 with a 4-way FC6 x86_64 as guest. It appears to survive a kernel compile.

What does one need to do in order to reproduce this?

-- 
error compiling committee.c: too many arguments to function
|
From: david a. <da...@ci...> - 2007-11-11 15:32:19
|
I now have hosts running both 32-bit and 64-bit versions of RHEL5.1. I will retry SMP guests on the RHEL5 kernel, but at present kvm-51 does not compile:

make -C kernel
make[1]: Entering directory `/opt/kvm/kvm-51/kernel'
make -C /lib/modules/2.6.18-53.el5/build M=`pwd` "$@"
make[2]: Entering directory `/usr/src/kernels/2.6.18-53.el5-i686'
  LD      /opt/kvm/kvm-51/kernel/built-in.o
  CC [M]  /opt/kvm/kvm-51/kernel/svm.o
  CC [M]  /opt/kvm/kvm-51/kernel/vmx.o
  CC [M]  /opt/kvm/kvm-51/kernel/vmx-debug.o
  CC [M]  /opt/kvm/kvm-51/kernel/kvm_main.o
/opt/kvm/kvm-51/kernel/kvm_main.c: In function ‘kvm_cpu_hotplug’:
/opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: ‘CPU_UP_CANCELED_FROZEN’ undeclared (first use in this function)
/opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: (Each undeclared identifier is reported only once
/opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: for each function it appears in.)
make[3]: *** [/opt/kvm/kvm-51/kernel/kvm_main.o] Error 1
make[2]: *** [_module_/opt/kvm/kvm-51/kernel] Error 2
make[2]: Leaving directory `/usr/src/kernels/2.6.18-53.el5-i686'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/kvm/kvm-51/kernel'
make: *** [kernel] Error 2

david

Avi Kivity wrote:
> I can't reproduce this on a centos system here running 2.6.18-8.el5 with
> a 4-way FC6 x86_64 as guest. It appears to survive a kernel compile.
>
> What does one need to do in order to reproduce this?
|
From: david a. <da...@ci...> - 2007-11-11 15:56:02
|
In RHEL 5.1, <linux/notifier.h> defines:

#define CPU_TASKS_FROZEN 0x0010

#define CPU_ONLINE_FROZEN (CPU_ONLINE | CPU_TASKS_FROZEN)
#define CPU_DEAD_FROZEN (CPU_DEAD | CPU_TASKS_FROZEN)

which means the '#ifndef CPU_TASKS_FROZEN' block in kvm-51/kernel/external-module-compat.h needs to handle this case. For my purposes, I just moved the endif up so that only the macros RHEL already defines sit inside the guard. With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).

david

david ahern wrote:
> I now have hosts running both 32-bit and 64-bit versions of RHEL5.1. I will retry SMP guests on the RHEL5 kernel, but at present kvm-51 does not compile:
> [...]
|
From: Avi K. <av...@qu...> - 2007-11-11 16:54:18
Attachments:
scfs-simplify.patch
|
david ahern wrote:
> In RHEL 5.1 <linux/notifier.h> defines:
>
> #define CPU_TASKS_FROZEN 0x0010
>
> #define CPU_ONLINE_FROZEN (CPU_ONLINE | CPU_TASKS_FROZEN)
> #define CPU_DEAD_FROZEN (CPU_DEAD | CPU_TASKS_FROZEN)
>
> which means in kvm-51/kernel/external-module-compat.h the '#ifndef CPU_TASKS_FROZEN' needs to handle this case. For my purposes, I just moved up the endif around what was defined.

I committed a change which renders this unnecessary. Will be part of kvm-52.

> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).

I still don't. Can you test the attached patch?

--
error compiling committee.c: too many arguments to function
|
From: david a. <da...@ci...> - 2007-11-11 21:10:17
|
The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5 hosts, both 32-bit and 64-bit.

david

Avi Kivity wrote:
> [...]
> I committed a change which renders this unnecessary. Will be part of kvm-52.
> [...]
> I still don't. Can you test the attached patch?
|
From: Avi K. <av...@qu...> - 2007-11-12 08:20:31
|
david ahern wrote:
> The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5 hosts, both 32-bit and 64-bit.

Excellent. I had a premonition, so it is already committed.

Do note that the smp_call_function_mask() emulation is pretty bad in terms of performance on large multicores. On a dual core it's basically equivalent to mainline, and I guess it's okay for four-way, but above four-way you will need either mainline or a better smp_call_function_mask() (which is nontrivial but doable).

--
error compiling committee.c: too many arguments to function
|
From: Avi K. <av...@qu...> - 2007-11-13 16:16:41
|
david ahern wrote:
> I let the host stay up for 90 minutes before loading kvm and starting a VM. On the first reboot it hangs at 'Starting udev'.

First reboot or first boot? I thought the problem was cold starting a VM.

> I added 'noapic' to the kernel boot options, and it boots fine. (Turns out I only added that to grub.conf in images that run a particular app for which I am running performance tests.)
>
> I would like to know why I need the noapic option to get around this and the networking problem. Are there performance hits as a side effect?

Looks like there's a bug in the apic emulation. There probably are performance implications. Does -no-kvm-irqchip help?

--
error compiling committee.c: too many arguments to function
|
From: david a. <da...@ci...> - 2007-11-13 16:31:41
|
First boot has been working fine since your patch this past weekend. It's been subsequent boots that hang. I added -no-kvm-irqchip to qemu command line and did not add the noapic boot option: it's hung at 'Starting udev' again but this time it's not consuming CPU. kernel stack traces for qemu threads: Nov 13 09:27:51 bldr-ccm89 kernel: process trace for qemu-system-x86(3907) Nov 13 09:27:51 bldr-ccm89 kernel: 00000001 00000282 c0438eb7 00000000 c07972d4 c0439187 00000001 0817a000 Nov 13 09:27:51 bldr-ccm89 kernel: f7aed200 000007c4 0817a7c4 0a9a7fd0 0817a7c4 0817a7c4 c0439d66 fff6c373 Nov 13 09:27:51 bldr-ccm89 kernel: ffffffff 0290e500 f7ae4058 00000001 f5274f18 c042e759 00000000 c30126e0 Nov 13 09:27:51 bldr-ccm89 kernel: Call Trace: Nov 13 09:27:51 bldr-ccm89 kernel: [<c0438eb7>] wake_futex+0x3a/0x44 Nov 13 09:27:51 bldr-ccm89 kernel: [<c0439187>] futex_wake+0xa9/0xb3 Nov 13 09:27:51 bldr-ccm89 kernel: [<c0439d66>] do_futex+0x20d/0xb15 Nov 13 09:27:51 bldr-ccm89 kernel: [<c042e759>] __dequeue_signal+0x151/0x15c Nov 13 09:27:51 bldr-ccm89 kernel: [<c0604884>] schedule_timeout+0x71/0x8c Nov 13 09:27:51 bldr-ccm89 kernel: [<c042d1ab>] process_timeout+0x0/0x5 Nov 13 09:27:51 bldr-ccm89 kernel: [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2 Nov 13 09:27:51 bldr-ccm89 kernel: [<c042cc0e>] getnstimeofday+0x30/0xb6 Nov 13 09:27:51 bldr-ccm89 kernel: [<c04386d6>] ktime_get_ts+0x16/0x44 Nov 13 09:27:51 bldr-ccm89 kernel: [<c04388b6>] ktime_get+0x12/0x34 Nov 13 09:27:51 bldr-ccm89 kernel: [<c04352a6>] common_timer_get+0xee/0x129 Nov 13 09:27:51 bldr-ccm89 kernel: [<c044abd9>] audit_syscall_entry+0x11c/0x14e Nov 13 09:27:51 bldr-ccm89 kernel: [<c0404eff>] syscall_call+0x7/0xb Nov 13 09:27:51 bldr-ccm89 kernel: ======================= Nov 13 09:27:55 bldr-ccm89 kernel: process trace for qemu-system-x86(3909) Nov 13 09:27:55 bldr-ccm89 kernel: f47a6ee4 00000086 c0438eb7 ec1af7ea 00000734 c0439187 00000009 f7c13000 Nov 13 09:27:55 bldr-ccm89 kernel: c066d3c0 ec1b88ca 00000734 
000090e0 00000000 f7c1310c c30126e0 c0673b80 Nov 13 09:27:55 bldr-ccm89 kernel: 00000082 00000046 f7ae4058 ffffffff 00000000 00000000 7fffffff 7fffffff Nov 13 09:27:55 bldr-ccm89 kernel: Call Trace: Nov 13 09:27:55 bldr-ccm89 kernel: [<c0438eb7>] wake_futex+0x3a/0x44 Nov 13 09:27:55 bldr-ccm89 kernel: [<c0439187>] futex_wake+0xa9/0xb3 Nov 13 09:27:55 bldr-ccm89 kernel: [<c0604826>] schedule_timeout+0x13/0x8c Nov 13 09:27:55 bldr-ccm89 kernel: [<c042fa99>] dequeue_signal+0x2d/0x9c Nov 13 09:27:55 bldr-ccm89 kernel: [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2 Nov 13 09:27:55 bldr-ccm89 kernel: [<c04202b1>] default_wake_function+0x0/0xc Nov 13 09:27:55 bldr-ccm89 kernel: [<f8c15319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] Nov 13 09:27:55 bldr-ccm89 kernel: [<c044abd9>] audit_syscall_entry+0x11c/0x14e Nov 13 09:27:55 bldr-ccm89 kernel: [<c0404eff>] syscall_call+0x7/0xb Nov 13 09:27:55 bldr-ccm89 kernel: ======================= Nov 13 09:27:59 bldr-ccm89 kernel: process trace for qemu-system-x86(3910) Nov 13 09:27:59 bldr-ccm89 kernel: f4d19ee4 00000086 c0438eb7 04c3e7a7 00000736 c0439187 0000000a f7c0a000 Nov 13 09:27:59 bldr-ccm89 kernel: f7450550 04c48a79 00000736 0000a2d2 00000002 f7c0a10c c30226e0 c0673b80 Nov 13 09:27:59 bldr-ccm89 kernel: 00000082 00000046 f7ae4058 f4d19f18 f4d19f18 c042e759 7fffffff 7fffffff Nov 13 09:27:59 bldr-ccm89 kernel: Call Trace: Nov 13 09:27:59 bldr-ccm89 kernel: [<c0438eb7>] wake_futex+0x3a/0x44 Nov 13 09:27:59 bldr-ccm89 kernel: [<c0439187>] futex_wake+0xa9/0xb3 Nov 13 09:27:59 bldr-ccm89 kernel: [<c042e759>] __dequeue_signal+0x151/0x15c Nov 13 09:27:59 bldr-ccm89 kernel: [<c0604826>] schedule_timeout+0x13/0x8c Nov 13 09:27:59 bldr-ccm89 kernel: [<c042fa99>] dequeue_signal+0x2d/0x9c Nov 13 09:27:59 bldr-ccm89 kernel: [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2 Nov 13 09:27:59 bldr-ccm89 kernel: [<c04202b1>] default_wake_function+0x0/0xc Nov 13 09:27:59 bldr-ccm89 kernel: [<f8c15319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] Nov 13 09:27:59 bldr-ccm89 
kernel: [<c044abd9>] audit_syscall_entry+0x11c/0x14e Nov 13 09:27:59 bldr-ccm89 kernel: [<c0404eff>] syscall_call+0x7/0xb Nov 13 09:27:59 bldr-ccm89 kernel: ======================= Nov 13 09:28:02 bldr-ccm89 kernel: process trace for qemu-system-x86(3911) Nov 13 09:28:02 bldr-ccm89 kernel: f4370ee4 00000086 c0438eb7 9b91a394 00000736 c0439187 00000009 f7450550 Nov 13 09:28:02 bldr-ccm89 kernel: c3107000 9b922a3f 00000736 000086ab 00000002 f745065c c30226e0 c0673b80 Nov 13 09:28:02 bldr-ccm89 kernel: 00000082 00000046 f7ae4058 ffffffff 00000000 00000000 7fffffff 7fffffff Nov 13 09:28:02 bldr-ccm89 kernel: Call Trace: Nov 13 09:28:02 bldr-ccm89 kernel: [<c0438eb7>] wake_futex+0x3a/0x44 Nov 13 09:28:02 bldr-ccm89 kernel: [<c0439187>] futex_wake+0xa9/0xb3 Nov 13 09:28:02 bldr-ccm89 kernel: [<c0604826>] schedule_timeout+0x13/0x8c Nov 13 09:28:02 bldr-ccm89 kernel: [<c042fa99>] dequeue_signal+0x2d/0x9c Nov 13 09:28:02 bldr-ccm89 kernel: [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2 Nov 13 09:28:02 bldr-ccm89 kernel: [<c04202b1>] default_wake_function+0x0/0xc Nov 13 09:28:02 bldr-ccm89 kernel: [<c0435bed>] sys_timer_settime+0x243/0x24f Nov 13 09:28:02 bldr-ccm89 kernel: [<f8c15319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] Nov 13 09:28:02 bldr-ccm89 kernel: [<c044abd9>] audit_syscall_entry+0x11c/0x14e Nov 13 09:28:02 bldr-ccm89 kernel: [<c047f473>] vfs_ioctl+0x24a/0x25c Nov 13 09:28:02 bldr-ccm89 kernel: [<c047f4cd>] sys_ioctl+0x48/0x5f Nov 13 09:28:02 bldr-ccm89 kernel: [<c0404eff>] syscall_call+0x7/0xb Nov 13 09:28:02 bldr-ccm89 kernel: ======================= Nov 13 09:28:05 bldr-ccm89 kernel: process trace for qemu-system-x86(3913) Nov 13 09:28:05 bldr-ccm89 kernel: f5442e90 00000086 f5aaa4ac f0aa7e8f 00000715 00000019 0000000a f7c1daa0 Nov 13 09:28:05 bldr-ccm89 kernel: f7c13aa0 f0aaa96f 00000715 00002ae0 00000003 f7c1dbac c302a6e0 c042da86 Nov 13 09:28:05 bldr-ccm89 kernel: f7d20000 f5442e98 00000286 c042db97 00000000 00000286 80728887 80728887 Nov 13 09:28:05 bldr-ccm89 
kernel: Call Trace: Nov 13 09:28:05 bldr-ccm89 kernel: [<c042da86>] lock_timer_base+0x15/0x2f Nov 13 09:28:05 bldr-ccm89 kernel: [<c042db97>] __mod_timer+0x99/0xa3 Nov 13 09:28:05 bldr-ccm89 kernel: [<c0604884>] schedule_timeout+0x71/0x8c Nov 13 09:28:05 bldr-ccm89 kernel: [<c042d1ab>] process_timeout+0x0/0x5 Nov 13 09:28:05 bldr-ccm89 kernel: [<c0439cf5>] do_futex+0x19c/0xb15 Nov 13 09:28:05 bldr-ccm89 kernel: [<c042e937>] send_signal+0x47/0xde Nov 13 09:28:05 bldr-ccm89 kernel: [<c042eea4>] __group_send_sig_info+0x74/0x7e Nov 13 09:28:05 bldr-ccm89 kernel: [<c04202b1>] default_wake_function+0x0/0xc Nov 13 09:28:05 bldr-ccm89 kernel: [<c043a777>] sys_futex+0x109/0x11f Nov 13 09:28:05 bldr-ccm89 kernel: [<c0404eff>] syscall_call+0x7/0xb Nov 13 09:28:05 bldr-ccm89 kernel: =======================

david

Avi Kivity wrote:
> [...]
> Looks like there's a bug in the apic emulation. There probably are performance implications. Does -no-kvm-irqchip help?
|
From: Avi K. <av...@qu...> - 2007-11-13 16:33:09
|
david ahern wrote:
> First boot has been working fine since your patch this past weekend. It's been subsequent boots that hang.
>
> I added -no-kvm-irqchip to the qemu command line and did not add the noapic boot option: it's hung at 'Starting udev' again but this time it's not consuming CPU. kernel stack traces for qemu threads:

Ah okay. I misunderstood.

How about -no-kvm? Maybe it's a qemu problem.

--
error compiling committee.c: too many arguments to function
|
From: david a. <da...@ci...> - 2007-11-13 16:54:24
|
I removed the kvm/kvm-intel modules. The qemu command line was:

/usr/local/bin/qemu-system-x86_64 -boot c -localtime -hda /opt/kvm/images/rhel5.img -m 1536 -smp 4 -net nic,macaddr=00:0c:29:10:10:e8,model=rtl8139 -net tap,ifname=tap0,script=/bin/true -monitor stdio -no-kvm -name bldr-ccm89.cisco.com -vnc :2

I did *not* add 'noapic' to the guest kernel boot.

The VM boot went fine; the reboot did not. The qemu process was showing 100% CPU. After a few minutes I hit ctrl-c to terminate qemu and then restarted the exact same command. Same result: boot went fine; shutdown did not, though it hung at a different spot.

If it matters, the host for this test is an HP DL380 G5.

david

Avi Kivity wrote:
> [...]
> How about -no-kvm? Maybe it's a qemu problem.
|
From: Avi K. <av...@qu...> - 2007-11-13 16:59:37
|
david ahern wrote:
> [...]
> The VM boot went fine; the reboot did not. qemu process was showing 100% CPU. After a few minutes I hit ctrl-c, to terminate qemu and then restarted the exact the same command. Same result: boot went fine; shutdown did not, though it hung at a different spot.

Thanks; that helps isolate the problem. I'll probably be able to reproduce it since it's likely not a timing issue.

--
error compiling committee.c: too many arguments to function
|
From: david a. <da...@ci...> - 2007-11-12 21:46:33
|
With kvm-52 my 32-bit host running RHEL5.1 can start an RHEL 5 SMP guest only once. Second and subsequent attempts hang. Removing the kvm and kvm_intel modules has no effect; I need to reboot the host to get an SMP guest to start. My similarly configured 64-bit host does not seem to have this problem.

Second attempts to start the RHEL5 SMP guest hang at:
Starting udev: _

Looking at top on the host shows qemu in a loop:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3909 root      18   0 1625m  67m 9476 R  400  2.1   2:52.32 qemu-system-x86

In this case the qemu threads are:
  PID   LWP TTY          TIME CMD
 3909  3909 pts/0    00:01:12 qemu-system-x86
 3909  3911 pts/0    00:01:05 qemu-system-x86
 3909  3912 pts/0    00:01:05 qemu-system-x86
 3909  3913 pts/0    00:01:07 qemu-system-x86
 3909  3917 pts/0    00:00:00 qemu-system-x86

and their kernel side backtraces are:

process trace for qemu-system-x86(3909) f5967d88 00000082 f8c125e4 bbdec465 000001c6 f5230da4 00000001 f7acf000 f7d7d000 bbded629 000001c6 000011c4 00000000 f7acf110 c30126e0 00000001 f4d8a000 f5967d90 f5967d80 f5230da0 f5967000 f8c11120 f5230da0 f4d8a000 Call Trace: [<f8c125e4>] vmx_vcpu_put+0xef/0xf6 [kvm_intel] [<f8c11120>] handle_external_interrupt+0x0/0xc [kvm_intel] [<c042169f>] __cond_resched+0x16/0x34 [<c0604218>] cond_resched+0x2a/0x31 [<f8b96d7f>] kvm_arch_vcpu_ioctl_run+0x28d/0x333 [kvm] [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm] [<c042169f>] __cond_resched+0x16/0x34 [<c0604218>] cond_resched+0x2a/0x31 [<c0480305>] core_sys_select+0x1ef/0x2ca [<c041ea84>] __wake_up_common+0x2f/0x53 [<c0604141>] schedule+0x90d/0x9ba [<c0405953>] reschedule_interrupt+0x1f/0x24 [<c042e759>] __dequeue_signal+0x151/0x15c [<c042fa99>] dequeue_signal+0x2d/0x9c [<c043062c>] sys_rt_sigtimedwait+0xc5/0x2c2 [<c042cc0e>] getnstimeofday+0x30/0xb6 [<c04386d6>] ktime_get_ts+0x16/0x44 [<c04388b6>] ktime_get+0x12/0x34 [<c04352a6>] common_timer_get+0xee/0x129 [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<c047f1e8>]
do_ioctl+0x1c/0x5d [<c047f473>] vfs_ioctl+0x24a/0x25c [<c047f4cd>] sys_ioctl+0x48/0x5f [<c0404eff>] syscall_call+0x7/0xb ======================= process trace for qemu-system-x86(3911) c301a6e0 00000100 000001c7 f749baa0 00000001 c301a6e0 f749baa0 00000001 f51fed44 f51fed44 f51fed6c 00000001 00000001 00000046 f579ce20 f57ee000 00000001 c04059bf f579ce20 8005003b 00006c00 f8c113e5 f579ce20 f579ce20 Call Trace: [<c04059bf>] apic_timer_interrupt+0x1f/0x24 [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel] [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm] [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm] [<c041fa31>] enqueue_task+0x29/0x39 [<c041fa5d>] __activate_task+0x1c/0x29 [<c04202a7>] try_to_wake_up+0x371/0x37b [<c0604141>] schedule+0x90d/0x9ba [<c041ea84>] __wake_up_common+0x2f/0x53 [<c041f871>] __wake_up+0x2a/0x3d [<c0438eb7>] wake_futex+0x3a/0x44 [<c0439187>] futex_wake+0xa9/0xb3 [<c0439d66>] do_futex+0x20d/0xb15 [<f8b94696>] kvm_ack_smp_call+0x17/0x27 [kvm] [<c042e759>] __dequeue_signal+0x151/0x15c [<c042fa99>] dequeue_signal+0x2d/0x9c [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm] [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm] [<c04202b1>] default_wake_function+0x0/0xc [<c040599b>] call_function_interrupt+0x1f/0x24 [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<c047f1e8>] do_ioctl+0x1c/0x5d [<c047f473>] vfs_ioctl+0x24a/0x25c [<c047f4cd>] sys_ioctl+0x48/0x5f [<c0404eff>] syscall_call+0x7/0xb ======================= process trace for qemu-system-x86(3912) f560fd88 00000082 f8c125e4 193272c5 000001c8 f52b6074 00000004 f7f09000 f7f09000 19328fc9 000001c8 00001d04 00000002 f52b6070 55eefb90 00000000 f52b6070 f5693000 f52b6070 8005003b 00006c00 f8c113e5 f52b6070 f52b6070 Call Trace: [<f8c125e4>] vmx_vcpu_put+0xef/0xf6 [kvm_intel] [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel] [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm] [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm] [<c0604141>] 
schedule+0x90d/0x9ba [<c041ea84>] __wake_up_common+0x2f/0x53 [<c0461e10>] find_extend_vma+0x12/0x49 [<c0438d53>] get_futex_key+0x40/0xd0 [<c0439187>] futex_wake+0xa9/0xb3 [<c0439d66>] do_futex+0x20d/0xb15 [<f888f9b0>] ext3_ordered_writepage+0x0/0x162 [ext3] [<c042e759>] __dequeue_signal+0x151/0x15c [<c042fa99>] dequeue_signal+0x2d/0x9c [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm] [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm] [<c04202b1>] default_wake_function+0x0/0xc [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<c047f1e8>] do_ioctl+0x1c/0x5d [<c047f473>] vfs_ioctl+0x24a/0x25c [<c047f4cd>] sys_ioctl+0x48/0x5f [<c0404eff>] syscall_call+0x7/0xb ======================= process trace for qemu-system-x86(3913) c302a6e0 00000100 000001c8 f7488550 00000003 c302a6e0 f7488550 00000003 f4d92d44 f4d92d44 f4d92d6c 00000001 00000001 00000046 f52b6de0 f5aaa000 00000001 c04059bf f52b6de0 8005003b 00006c00 f8c113e5 f52b6de0 f52b6de0 Call Trace: [<c04059bf>] apic_timer_interrupt+0x1f/0x24 [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel] [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm] [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm] [<c0604141>] schedule+0x90d/0x9ba [<c041ea84>] __wake_up_common+0x2f/0x53 [<c0461e10>] find_extend_vma+0x12/0x49 [<c0438d53>] get_futex_key+0x40/0xd0 [<c0439187>] futex_wake+0xa9/0xb3 [<c0439d66>] do_futex+0x20d/0xb15 [<c040599b>] call_function_interrupt+0x1f/0x24 [<c042e759>] __dequeue_signal+0x151/0x15c [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm] [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm] [<c04202b1>] default_wake_function+0x0/0xc [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm] [<c047f1e8>] do_ioctl+0x1c/0x5d [<c047f473>] vfs_ioctl+0x24a/0x25c [<c047f4cd>] sys_ioctl+0x48/0x5f [<c0404eff>] syscall_call+0x7/0xb ======================= david Avi Kivity wrote: > david ahern wrote: >> The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5 >> hosts, both 32-bit and 64-bit. >> >> > > Excellent. 
I had a premonition, so it is already committed.
> [...]
|
From: david a. <da...@ci...> - 2007-11-12 22:37:17
Attachments:
messages
|
(Changed the subject to correspond with the email.) I am having the same problem on the 64-bit host running RHEL5.1 as well; it just takes more reboots. Same symptoms as I mentioned for the 32-bit host. Kernel-side stack traces for each qemu thread for one of the lockups are attached; the file contains traces for each thread at 3 sample times in case it helps give some insight.

david

david ahern wrote:
> With kvm-52 my 32-bit host running RHEL5.1 can start an RHEL 5 SMP guest only once. Second and subsequent attempts hang. Removing the kvm and kvm_intel modules has no effect; I need to reboot the host to get an SMP guest to start. My similarly configured 64-bit host does not seem to have this problem.
> [...]
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>> _______________________________________________
>>>> kvm-devel mailing list
>>>> kvm...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/kvm-devel
>>>>
>>
|