From: Ryan H. <ry...@us...> - 2008-05-07 22:03:28
I've been playing around with SMP guests on a couple of AMD systems, and
I've also seen some of the SMP/locking issues, which led me to dig into
some of the TSC code. In svm_vcpu_load(), the tsc_offset calculation
will generate a massively large tsc_offset if we're switching CPUs and
tsc_this is ahead of the host_tsc value (delta would normally be
negative, but since it's unsigned, we get a huge positive number):

svm_vcpu_load()
	...
	rdtscll(tsc_this);
	delta = vcpu->arch.host_tsc - tsc_this;
	svm->vmcb->control.tsc_offset += delta;

This is handled a little differently on Intel (in vmx.c), where there is
a check:

	if (tsc_this < vcpu->arch.host_tsc)
		/* do delta and new offset calc */

This check makes sense to me in that we only need to ensure that we
don't go backwards, which means that unless the new CPU is behind the
current vcpu's host_tsc, we can skip a new delta calculation. If the
check doesn't make sense, then we'll need to do the math with s64s.

The attached patch fixed the case where an idle guest was live-locked.

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ry...@us...


diffstat output:
 svm.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

Signed-off-by: Ryan Harper <ry...@us...>
---
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 5528121..c919ddd 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -685,8 +685,14 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * increasing TSC.
 	 */
 	rdtscll(tsc_this);
-	delta = vcpu->arch.host_tsc - tsc_this;
-	svm->vmcb->control.tsc_offset += delta;
+	/* we only need to adjust this if the old tsc was ahead;
+	 * also, we'd generate a massively large u64 value if
+	 * tsc_this were greater than host_tsc, because of unsigned math
+	 */
+	if (tsc_this < vcpu->arch.host_tsc) {
+		delta = vcpu->arch.host_tsc - tsc_this;
+		svm->vmcb->control.tsc_offset += delta;
+	}
 	vcpu->cpu = cpu;
 	kvm_migrate_apic_timer(vcpu);
 }
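
[Aside: a minimal userspace sketch of the unsigned wraparound described
above. The values and the standalone main() are illustrative assumptions,
not part of the thread or the patch:]

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* hypothetical values: the new CPU's TSC (tsc_this) is ahead
		 * of the value saved at vcpu_put time (host_tsc) */
		uint64_t host_tsc = 1000;
		uint64_t tsc_this = 1500;

		/* what the unpatched svm_vcpu_load() computes: -500 wraps
		 * to a massive positive u64 (2^64 - 500) */
		uint64_t delta = host_tsc - tsc_this;
		printf("delta = %llu\n", (unsigned long long)delta);

		/* the vmx.c-style guard skips the adjustment in this case */
		if (tsc_this < host_tsc)
			printf("would bump tsc_offset by %llu\n",
			       (unsigned long long)delta);
		else
			printf("guard skips the adjustment\n");
		return 0;
	}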
From: Anthony L. <an...@co...> - 2008-05-07 22:30:08
Ryan Harper wrote:
> I've been playing around with SMP guests on a couple of AMD systems, and
> I've also seen some of the SMP/locking issues, which led me to dig into
> some of the TSC code. In svm_vcpu_load(), the tsc_offset calculation
> will generate a massively large tsc_offset if we're switching CPUs and
> tsc_this is ahead of the host_tsc value (delta would normally be
> negative, but since it's unsigned, we get a huge positive number):
>
> svm_vcpu_load()
> 	...
> 	rdtscll(tsc_this);
> 	delta = vcpu->arch.host_tsc - tsc_this;
> 	svm->vmcb->control.tsc_offset += delta;

This math will work out fine, since the very large number will overflow
and the result will be identical to what we would get using s64s. We're
using u64s because that's how the tsc_offset is defined by the hardware.

> This is handled a little differently on Intel (in vmx.c), where there is
> a check:
>
> 	if (tsc_this < vcpu->arch.host_tsc)
> 		/* do delta and new offset calc */

So what your patch really does is change the behavior of the tsc_offset,
increasing the guest's TSC by a potentially large amount depending on how
far out of sync the TSCs are on CPU migration. The question is why this
would make things work out better for you. Do you have Gerd's most recent
kvm-clock patch applied?

Regards,

Anthony Liguori

> This check makes sense to me in that we only need to ensure that we
> don't go backwards, which means that unless the new CPU is behind the
> current vcpu's host_tsc, we can skip a new delta calculation. If the
> check doesn't make sense, then we'll need to do the math with s64s.
>
> The attached patch fixed the case where an idle guest was live-locked.
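
[Aside: a small standalone sketch, with made-up values, of the
modular-arithmetic point above — adding the wrapped u64 delta to
tsc_offset yields the same bits as doing the subtraction in s64:]

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* hypothetical values: the new CPU's TSC is 500 cycles ahead */
		uint64_t host_tsc = 1000, tsc_this = 1500;
		uint64_t tsc_offset = 100000;

		/* unsigned version, as in the unpatched code: delta wraps */
		uint64_t u_result = tsc_offset + (host_tsc - tsc_this);

		/* signed version: delta is a proper -500 */
		int64_t delta = (int64_t)host_tsc - (int64_t)tsc_this;
		uint64_t s_result = tsc_offset + (uint64_t)delta;

		/* identical bit patterns: both reduce the offset by 500 */
		assert(u_result == s_result);
		printf("offset %llu -> %llu\n",
		       (unsigned long long)tsc_offset,
		       (unsigned long long)u_result);
		return 0;
	}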
From: Joerg R. <joe...@am...> - 2008-05-13 10:11:53
On Wed, May 07, 2008 at 05:01:02PM -0500, Ryan Harper wrote:
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index 5528121..c919ddd 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -685,8 +685,14 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  	 * increasing TSC.
>  	 */
>  	rdtscll(tsc_this);
> -	delta = vcpu->arch.host_tsc - tsc_this;
> -	svm->vmcb->control.tsc_offset += delta;
> +	/* we only need to adjust this if the old tsc was ahead;
> +	 * also, we'd generate a massively large u64 value if
> +	 * tsc_this were greater than host_tsc, because of unsigned math
> +	 */
> +	if (tsc_this < vcpu->arch.host_tsc) {
> +		delta = vcpu->arch.host_tsc - tsc_this;
> +		svm->vmcb->control.tsc_offset += delta;
> +	}
>  	vcpu->cpu = cpu;
>  	kvm_migrate_apic_timer(vcpu);
>  }

Hmm, I think this can result in inaccurate guest time, because it makes
the TSC hop. Does it fix the problem if you make delta an s64?

Joerg

--
          | AMD Saxony Limited Liability Company & Co. KG
Operating |         Wilschdorfer Landstr. 101, 01109 Dresden, Germany
System    |                  Register Court Dresden: HRA 4896
Research  |              General Partner authorized to represent:
Center    |             AMD Saxony LLC (Wilmington, Delaware, US)
          | General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy
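
[Aside: a sketch of the s64 variant being suggested — an assumption about
the intended change, not a posted patch. The kernel context (struct
kvm_vcpu, rdtscll, the vmcb fields) is taken from the quoted hunk:]

	/* hypothetical s64 variant of the svm_vcpu_load() hunk */
	u64 tsc_this;
	s64 delta;	/* signed, so a backwards step is a real negative
			 * delta instead of a wrapped huge u64 */

	rdtscll(tsc_this);
	delta = (s64)(vcpu->arch.host_tsc - tsc_this);
	svm->vmcb->control.tsc_offset += delta;	/* u64 += s64 wraps mod 2^64,
						 * so the offset can move down
						 * as well as up */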