From mboxrd@z Thu Jan 1 00:00:00 1970
From: Radim Krčmář
Subject: Re: Found workaround/fix for ntp on AMD systems with PCI passthrough
Date: Wed, 25 Oct 2017 09:21:20 +0200
Message-ID: <20171025072119.GA28882@flask>
References: <6af47870f44a208b8bcaca284573a857@hostfission.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Cc: geoff@hostfission.com, kvm@vger.kernel.org
To: Paolo Bonzini
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:37018 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751719AbdJYHVX (ORCPT ); Wed, 25 Oct 2017 03:21:23 -0400
Content-Disposition: inline
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

2017-10-25 07:42+0200, Paolo Bonzini:
> On 24/10/2017 23:50, geoff@hostfission.com wrote:
> > In svm.c, by just changing the line in `init_vmcb` that reads:
> >
> >    save->g_pat = svm->vcpu.arch.pat;
> >
> > To:
> >
> >    save->g_pat = 0x0606060606060606;
> >
> > The problem is resolved. From what I understand this is setting an
> > MTRR value that enables Write Back (WB).
>
> That's cool, you certainly are onto something. Currently, SVM is
> disregarding the guest PAT setting (PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-,
> PA3=UC). The guest might be using a different setting, so you're
> getting slow accesses (UC- or UC, i.e. uncacheable) instead of fast
> accesses (WB or WC, respectively write-back and write-combining).
>
> It would be great if you could proceed with the following tests:
>
> 1) see if this patch has any effect
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index af256b786a70..b2e4b912f053 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3626,6 +3626,12 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	u32 ecx = msr->index;
>  	u64 data = msr->data;
>
>  	switch (ecx) {
> +	case MSR_IA32_CR_PAT:
> +		if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> +			return 1;
> +		vcpu->arch.pat = data;
> +		svm->vmcb->save.g_pat = data;

Great progress! SVM might cache the value, and adding

+		mark_dirty(svm->vmcb, VMCB_NPT);

here should result in the same behavior as doing (2).

> +		break;
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr);
>  		break;
>
> 2) if it doesn't, add a printk("%#016lx", data); to the patch and get the
> last value written by the guest. Hard-code it in the "save->g_pat = ..."
> line where you've been using 0x0606060606060606 successfully. Test that
> things work (though they should still be slow).
>
> 3) starting from the rightmost byte, change one byte to 0x06, test that
> and see if things get fast. For each byte you change, take a note of the
> full value and whether things are slow or fast.
>
> Thank you very much!
>
> Paolo