From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sheng Yang
Subject: Re: [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices
Date: Mon, 28 Jun 2010 14:42:34 +0800
Message-ID: <201006281442.35437.sheng@linux.intel.com>
References: <1277696187-3571-1-git-send-email-sheng@linux.intel.com> <4C281D58.9090202@redhat.com>
Mime-Version: 1.0
Content-Type: Text/Plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: Jan Kiszka, Marcelo Tosatti, Joerg Roedel, kvm@vger.kernel.org, "Yaozu (Eddie) Dong"
To: Avi Kivity
In-Reply-To: <4C281D58.9090202@redhat.com>

On Monday 28 June 2010 11:56:08 Avi Kivity wrote:
> On 06/28/2010 06:36 AM, Sheng Yang wrote:
> > Some guest device drivers may leverage "Non-Snoop" I/O, and explicitly
> > WBINVD or CLFLUSH a RAM region.  Since migration may occur before the
> > WBINVD or CLFLUSH, we need to maintain data consistency either by:
>
> Don't we always force enable snooping?  Or is that only for the
> processor, and you're worried about devices?

We only force-enable snooping for a capable VT-d engine (one with the
KVM_IOMMU_CACHE_COHERENCY flag, found on most recent server boards). And
you're right: with a snoop-capable VT-d engine we don't need any of this.
I'll address it in the next version.

> > 1: flushing the cache (wbinvd) when the guest is scheduled out, if
> >    there is no wbinvd exit, or
> > 2: executing wbinvd on all dirty physical CPUs when the guest's wbinvd
> >    exits.
> >
> >  	/* fields used by HYPER-V emulation */
> >  	u64 hv_vapic;
> > +
> > +	cpumask_t wbinvd_dirty_mask;
> >  };
>
> Need alloc_cpumask_var()/free_cpumask_var() for very large hosts.

OK.
> > +static void wbinvd_ipi(void *garbage)
> > +{
> > +	wbinvd();
> > +}
>
> Like Jan mentioned, this is quite heavy.  What about a clflush() loop
> instead?  That may take more time, but at least it's preemptible.  Of
> course, it isn't preemptible in an IPI.

I think this happens rarely, and most recent processors have a WBINVD
exit, which means the flush runs in an IPI anyway... So I think it's
acceptable here.

> > +
> >  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> >  {
> > +	/* Address WBINVD may be executed by guest */
> > +	if (vcpu->kvm->arch.iommu_domain) {
> > +		if (kvm_x86_ops->has_wbinvd_exit())
> > +			cpu_set(cpu, vcpu->arch.wbinvd_dirty_mask);
> > +		else if (vcpu->cpu != -1)
> > +			smp_call_function_single(vcpu->cpu,
> > +					wbinvd_ipi, NULL, 1);
>
> Is there any point to doing this if !has_wbinvd_exit()?  The vcpu might
> not have migrated in time, so the cache is flushed too late.

In the !has_wbinvd_exit() case, the instruction is executed directly by
the guest and flushes the current processor immediately. The call here
just ensures the cache is also clean on the last CPU the vcpu ran on, so
we're fine.

> > +	}
> > +
> >  	kvm_x86_ops->vcpu_load(vcpu, cpu);
> >  	if (unlikely(per_cpu(cpu_tsc_khz, cpu) == 0)) {
> >  		unsigned long khz = cpufreq_quick_get(cpu);
> > @@ -3650,6 +3664,21 @@ int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address)
> >  	return X86EMUL_CONTINUE;
> >  }
> >
> > +int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
> > +{
> > +	if (!vcpu->kvm->arch.iommu_domain)
> > +		return X86EMUL_CONTINUE;
> > +
> > +	if (kvm_x86_ops->has_wbinvd_exit()) {
> > +		smp_call_function_many(&vcpu->arch.wbinvd_dirty_mask,
> > +				wbinvd_ipi, NULL, 1);
> > +		cpus_clear(vcpu->arch.wbinvd_dirty_mask);
>
> Race - a migration may set a new bit in wbinvd_dirty_mask after the
> s_c_f_m().
>
> However, it's probably benign, since we won't be entering the guest in
> that period.

Yes.
:)

--
regards
Yang, Sheng

> > +	} else
> > +		wbinvd();
> > +	return X86EMUL_CONTINUE;
> > +}
> > +EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
> > +
> >  int emulate_clts(struct kvm_vcpu *vcpu)
> >  {
> >  	kvm_x86_ops->set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));