From: Ryan H. <ry...@us...> - 2008-05-07 22:03:28
I've been playing around with SMP guests on a couple of AMD systems and
I've also seen some of the SMP/locking issues, which led me to dig into
some of the TSC code. In svm_vcpu_load(), the tsc_offset calculation will
generate a massively large tsc_offset if we're switching cpus and
tsc_this is ahead of the host_tsc value (delta would normally be
negative, but since it's unsigned, we get a huge positive number):

svm_vcpu_load()
...
	rdtscll(tsc_this);
	delta = vcpu->arch.host_tsc - tsc_this;
	svm->vmcb->control.tsc_offset += delta;

This is handled a little differently on Intel (in vmx.c), where there is
a check:

	if (tsc_this < vcpu->arch.host_tsc)
		/* do delta and new offset calc */

This check makes sense to me in that we only need to ensure that the
guest's TSC doesn't go backwards, which means that unless the new cpu is
behind the current vcpu's host_tsc, we can skip the delta calculation
entirely. If the check doesn't make sense, then we'll need to do the
math with s64s.

The attached patch fixes the case where an idle guest was live-locked.

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ry...@us...

diffstat output:
 svm.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

Signed-off-by: Ryan Harper <ry...@us...>
---
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 5528121..c919ddd 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -685,8 +685,14 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		 * increasing TSC.
 		 */
 		rdtscll(tsc_this);
-		delta = vcpu->arch.host_tsc - tsc_this;
-		svm->vmcb->control.tsc_offset += delta;
+		/* we only need to adjust this if the old tsc was ahead
+		 * also, we'll generate a massively large u64 value if
+		 * tsc_this is less than host_tsc because of unsigned math
+		 */
+		if (tsc_this < vcpu->arch.host_tsc) {
+			delta = vcpu->arch.host_tsc - tsc_this;
+			svm->vmcb->control.tsc_offset += delta;
+		}
 		vcpu->cpu = cpu;
 		kvm_migrate_apic_timer(vcpu);
 	}
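
For anyone who wants to see the wrap in isolation: below is a minimal
userspace sketch (made-up TSC values, plain C rather than kernel code)
showing how the unconditional unsigned subtraction produces the huge
delta, and how the vmx.c-style check sidesteps it:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* hypothetical values: the new cpu's TSC (tsc_this) is
		 * ahead of the TSC saved when the vcpu was unloaded */
		uint64_t host_tsc = 1000;
		uint64_t tsc_this = 5000;
		uint64_t delta;

		/* unconditional unsigned math: 1000 - 5000 wraps to
		 * 2^64 - 4000, the "massively large" offset */
		delta = host_tsc - tsc_this;
		printf("unconditional delta: %llu\n",
		       (unsigned long long)delta);

		/* with the check, we only adjust when the old TSC is
		 * ahead, so no adjustment happens in this case */
		if (tsc_this < host_tsc) {
			delta = host_tsc - tsc_this;
			printf("adjusting offset by %llu\n",
			       (unsigned long long)delta);
		}
		return 0;
	}

The point of the check is that a forward jump is harmless (the guest
just sees its TSC keep increasing); only the case where the new cpu's
TSC lags host_tsc needs an offset bump to keep the guest's view
monotonic.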