* Re: Found workaround/fix for ntp on AMD systems with PCI passthrough
2017-10-25 5:42 ` Paolo Bonzini
@ 2017-10-25 7:21 ` Radim Krčmář
2017-10-25 7:51 ` geoff
1 sibling, 0 replies; 4+ messages in thread
From: Radim Krčmář @ 2017-10-25 7:21 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: geoff, kvm
2017-10-25 07:42+0200, Paolo Bonzini:
> On 24/10/2017 23:50, geoff@hostfission.com wrote:
> > In svm.c, by just changing the line in `init_vmcb` that reads:
> >
> > save->g_pat = svm->vcpu.arch.pat;
> >
> > To:
> >
> > save->g_pat = 0x0606060606060606;
> >
> > The problem is resolved. From what I understand this is setting a
> > PAT value that enables Write Back (WB).
>
> That's cool, you certainly are onto something. Currently, SVM is
> disregarding the guest PAT setting (PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-,
> PA3=PA7=UC). The guest might be using a different setting, so you're
> getting slow accesses (UC- or UC, i.e. uncacheable) instead of fast
> accesses (WB or WC, respectively write-back and write-combining).
>
> It would be great if you could proceed with the following tests:
>
> 1) see if this patch has any effect
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index af256b786a70..b2e4b912f053 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3626,6 +3626,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> u32 ecx = msr->index;
> u64 data = msr->data;
> switch (ecx) {
> + case MSR_IA32_CR_PAT:
> + if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> + return 1;
> + vcpu->arch.pat = data;
> + svm->vmcb->save.g_pat = data;
Great progress! SVM might cache the value and adding
+ mark_dirty(svm->vmcb, VMCB_NPT);
here should result in the same behavior as doing (2).
> + break;
> case MSR_IA32_TSC:
> kvm_write_tsc(vcpu, msr);
> break;
>
> 2) if it doesn't, add a printk("%#016llx\n", data); to the patch and get the
> last value written by the guest. Hard-code it in the "save->g_pat = ..."
> line where you've been using 0x0606060606060606 successfully. Test that
> things work (though they should still be slow).
>
> 3) starting from the rightmost byte, change one byte to 0x06, test that
> and see if things get fast. For each byte you change, take a note of the
> full value and whether things are slow or fast.
>
> Thank you very much!
>
> Paolo
^ permalink raw reply [flat|nested] 4+ messages in thread

* Re: Found workaround/fix for ntp on AMD systems with PCI passthrough
2017-10-25 5:42 ` Paolo Bonzini
2017-10-25 7:21 ` Radim Krčmář
@ 2017-10-25 7:51 ` geoff
1 sibling, 0 replies; 4+ messages in thread
From: geoff @ 2017-10-25 7:51 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm, Paolo Bonzini
On 2017-10-25 16:42, Paolo Bonzini wrote:
> On 24/10/2017 23:50, geoff@hostfission.com wrote:
>> In svm.c, by just changing the line in `init_vmcb` that reads:
>>
>> save->g_pat = svm->vcpu.arch.pat;
>>
>> To:
>>
>> save->g_pat = 0x0606060606060606;
>>
>> The problem is resolved. From what I understand this is setting a
>> PAT value that enables Write Back (WB).
>
> That's cool, you certainly are onto something. Currently, SVM is
> disregarding the guest PAT setting (PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-,
> PA3=PA7=UC). The guest might be using a different setting, so you're
> getting slow accesses (UC- or UC, i.e. uncacheable) instead of fast
> accesses (WB or WC, respectively write-back and write-combining).
>
> It would be great if you could proceed with the following tests:
>
> 1) see if this patch has any effect
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index af256b786a70..b2e4b912f053 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3626,6 +3626,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
> u32 ecx = msr->index;
> u64 data = msr->data;
> switch (ecx) {
> + case MSR_IA32_CR_PAT:
> + if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> + return 1;
> + vcpu->arch.pat = data;
> + svm->vmcb->save.g_pat = data;
> + break;
> case MSR_IA32_TSC:
> kvm_write_tsc(vcpu, msr);
> break;
>
Confirmed! This has corrected the fault without the need to hard-code the
value.
> 2) if it doesn't, add a printk("%#016llx\n", data); to the patch and get the
> last value written by the guest. Hard-code it in the "save->g_pat = ..."
> line where you've been using 0x0606060606060606 successfully. Test that
> things work (though they should still be slow).
>
> 3) starting from the rightmost byte, change one byte to 0x06, test that
> and see if things get fast. For each byte you change, take a note of the
> full value and whether things are slow or fast.
>
> Thank you very much!
>
> Paolo
^ permalink raw reply [flat|nested] 4+ messages in thread