From: Avi Kivity
Subject: Re: [PATCH v3] KVM: VMX: Execute WBINVD to keep data consistency with assigned devices
Date: Mon, 28 Jun 2010 06:56:08 +0300
Message-ID: <4C281D58.9090202@redhat.com>
References: <1277696187-3571-1-git-send-email-sheng@linux.intel.com>
In-Reply-To: <1277696187-3571-1-git-send-email-sheng@linux.intel.com>
To: Sheng Yang
Cc: Jan Kiszka, Marcelo Tosatti, Joerg Roedel, kvm@vger.kernel.org, "Yaozu (Eddie) Dong"

On 06/28/2010 06:36 AM, Sheng Yang wrote:
> Some guest device drivers may use non-snooped I/O, and explicitly execute
> WBINVD or CLFLUSH on a RAM region. Since vcpu migration may occur before
> the WBINVD or CLFLUSH, we need to maintain data consistency either by:

Don't we always force enable snooping?  Or is that only for the processor, and you're worried about devices?

> 1: flushing the cache (wbinvd) when the guest is scheduled out, if there
>    is no wbinvd exit, or
> 2: executing wbinvd on all dirty physical CPUs when the guest exits on
>    wbinvd.
>
>
>  	/* fields used by HYPER-V emulation */
>  	u64 hv_vapic;
> +
> +	cpumask_t wbinvd_dirty_mask;
>  };

Need alloc_cpumask_var()/free_cpumask_var() for very large hosts.

> +static void wbinvd_ipi(void *garbage)
> +{
> +	wbinvd();
> +}

Like Jan mentioned, this is quite heavy.  What about a clflush() loop instead?  That may take more time, but at least it's preemptible.  Of course, it isn't preemptible in an IPI.
> +
>  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
> +	/* Address WBINVD may be executed by guest */
> +	if (vcpu->kvm->arch.iommu_domain) {
> +		if (kvm_x86_ops->has_wbinvd_exit())
> +			cpu_set(cpu, vcpu->arch.wbinvd_dirty_mask);
> +		else if (vcpu->cpu != -1)
> +			smp_call_function_single(vcpu->cpu,
> +					wbinvd_ipi, NULL, 1);

Is there any point to doing this if !has_wbinvd_exit()?  The vcpu might not have migrated in time, so the cache is flushed too late.

> +	}
> +
>  	kvm_x86_ops->vcpu_load(vcpu, cpu);
>  	if (unlikely(per_cpu(cpu_tsc_khz, cpu) == 0)) {
>  		unsigned long khz = cpufreq_quick_get(cpu);
> @@ -3650,6 +3664,21 @@ int emulate_invlpg(struct kvm_vcpu *vcpu, gva_t address)
>  	return X86EMUL_CONTINUE;
>  }
>
> +int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu)
> +{
> +	if (!vcpu->kvm->arch.iommu_domain)
> +		return X86EMUL_CONTINUE;
> +
> +	if (kvm_x86_ops->has_wbinvd_exit()) {
> +		smp_call_function_many(&vcpu->arch.wbinvd_dirty_mask,
> +				wbinvd_ipi, NULL, 1);
> +		cpus_clear(vcpu->arch.wbinvd_dirty_mask);

Race - a migration may set a new bit in wbinvd_dirty_mask after the s_c_f_m().  However, it's probably benign, since we won't be entering the guest in that period.

> +	} else
> +		wbinvd();
> +	return X86EMUL_CONTINUE;
> +}
> +EXPORT_SYMBOL_GPL(kvm_emulate_wbinvd);
> +
>  int emulate_clts(struct kvm_vcpu *vcpu)
>  {
>  	kvm_x86_ops->set_cr0(vcpu, kvm_read_cr0_bits(vcpu, ~X86_CR0_TS));

-- 
I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain.