From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [patch 3/5] KVM: MMU: notifiers support for pinned sptes
Date: Thu, 19 Jun 2014 15:28:25 -0300
Message-ID: <20140619182825.GB32410@amt.cnet>
References: <20140618231203.846608908@amt.cnet> <20140618231521.648087161@amt.cnet> <20140619064850.GB10948@minantech.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, ak@linux.intel.com, pbonzini@redhat.com, xiaoguangrong@linux.vnet.ibm.com, avi@cloudius-systems.com
To: Gleb Natapov
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:13967 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S934039AbaFSTK4 (ORCPT ); Thu, 19 Jun 2014 15:10:56 -0400
Content-Disposition: inline
In-Reply-To: <20140619064850.GB10948@minantech.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On Thu, Jun 19, 2014 at 09:48:50AM +0300, Gleb Natapov wrote:
> On Wed, Jun 18, 2014 at 08:12:06PM -0300, mtosatti@redhat.com wrote:
> > Request KVM_REQ_MMU_RELOAD when deleting sptes from MMU notifiers.
> > 
> > Keep pinned sptes intact if page aging.
> > 
> > Signed-off-by: Marcelo Tosatti
> > 
> > ---
> >  arch/x86/kvm/mmu.c |   71 ++++++++++++++++++++++++++++++++++++++++++++++-------
> >  1 file changed, 62 insertions(+), 9 deletions(-)
> > 
> > Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
> > ===================================================================
> > --- kvm.pinned-sptes.orig/arch/x86/kvm/mmu.c	2014-06-18 17:28:24.339435654 -0300
> > +++ kvm.pinned-sptes/arch/x86/kvm/mmu.c	2014-06-18 17:29:32.510225755 -0300
> > @@ -1184,6 +1184,42 @@
> >  	kvm_flush_remote_tlbs(vcpu->kvm);
> >  }
> >  
> > +static void ack_flush(void *_completed)
> > +{
> > +}
> > +
> > +static void mmu_reload_pinned_vcpus(struct kvm *kvm)
> > +{
> > +	int i, cpu, me;
> > +	cpumask_var_t cpus;
> > +	struct kvm_vcpu *vcpu;
> > +	unsigned int req = KVM_REQ_MMU_RELOAD;
> > +
> > +	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
> > +
> > +	me = get_cpu();
> > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > +		if (list_empty(&vcpu->arch.pinned_mmu_pages))
> > +			continue;
> > +		kvm_make_request(req, vcpu);
> > +		cpu = vcpu->cpu;
> > +
> > +		/* Set ->requests bit before we read ->mode */
> > +		smp_mb();
> > +
> > +		if (cpus != NULL && cpu != -1 && cpu != me &&
> > +		    kvm_vcpu_exiting_guest_mode(vcpu) != OUTSIDE_GUEST_MODE)
> > +			cpumask_set_cpu(cpu, cpus);
> > +	}
> > +	if (unlikely(cpus == NULL))
> > +		smp_call_function_many(cpu_online_mask, ack_flush, NULL, 1);
> > +	else if (!cpumask_empty(cpus))
> > +		smp_call_function_many(cpus, ack_flush, NULL, 1);
> > +	put_cpu();
> > +	free_cpumask_var(cpus);
> > +	return;
> > +}
> This is a c&p of make_all_cpus_request(), the only difference is checking
> of vcpu->arch.pinned_mmu_pages. You can add
> make_some_cpus_request(..., bool (*predicate)(struct kvm_vcpu *))
> to kvm_main.c and rewrite make_all_cpus_request() to use it instead.

Half-way through it I decided it was better to c&p. Can change
make_all_cpus_request() though if it makes more sense to you.