From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [patch 3/5] KVM: MMU: notifiers support for pinned sptes
Date: Fri, 20 Jun 2014 13:11:02 +0300
Message-ID: <20140620101101.GC20764@minantech.com>
References: <20140618231203.846608908@amt.cnet>
	<20140618231521.648087161@amt.cnet>
	<20140619064850.GB10948@minantech.com>
	<20140619182825.GB32410@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, ak@linux.intel.com, pbonzini@redhat.com,
	xiaoguangrong@linux.vnet.ibm.com, avi@cloudius-systems.com
To: Marcelo Tosatti
Return-path: 
Received: from mail-wi0-f175.google.com ([209.85.212.175]:52299 "EHLO
	mail-wi0-f175.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932976AbaFTKLG (ORCPT );
	Fri, 20 Jun 2014 06:11:06 -0400
Received: by mail-wi0-f175.google.com with SMTP id r20so515912wiv.2
	for ; Fri, 20 Jun 2014 03:11:05 -0700 (PDT)
Content-Disposition: inline
In-Reply-To: <20140619182825.GB32410@amt.cnet>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On Thu, Jun 19, 2014 at 03:28:25PM -0300, Marcelo Tosatti wrote:
> On Thu, Jun 19, 2014 at 09:48:50AM +0300, Gleb Natapov wrote:
> > On Wed, Jun 18, 2014 at 08:12:06PM -0300, mtosatti@redhat.com wrote:
> > > Request KVM_REQ_MMU_RELOAD when deleting sptes from MMU notifiers.
> > >
> > > Keep pinned sptes intact if page aging.
> > >
> > > Signed-off-by: Marcelo Tosatti
> > >
> > > ---
> > >  arch/x86/kvm/mmu.c |   71 ++++++++++++++++++++++++++++++++++++++++++++++-------
> > >  1 file changed, 62 insertions(+), 9 deletions(-)
> > >
> > > Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
> > > ===================================================================
> > > --- kvm.pinned-sptes.orig/arch/x86/kvm/mmu.c	2014-06-18 17:28:24.339435654 -0300
> > > +++ kvm.pinned-sptes/arch/x86/kvm/mmu.c	2014-06-18 17:29:32.510225755 -0300
> > > @@ -1184,6 +1184,42 @@
> > >  	kvm_flush_remote_tlbs(vcpu->kvm);
> > >  }
> > >
> > > +static void ack_flush(void *_completed)
> > > +{
> > > +}
> > > +
> > > +static void mmu_reload_pinned_vcpus(struct kvm *kvm)
> > > +{
> > > +	int i, cpu, me;
> > > +	cpumask_var_t cpus;
> > > +	struct kvm_vcpu *vcpu;
> > > +	unsigned int req = KVM_REQ_MMU_RELOAD;
> > > +
> > > +	zalloc_cpumask_var(&cpus, GFP_ATOMIC);
> > > +
> > > +	me = get_cpu();
> > > +	kvm_for_each_vcpu(i, vcpu, kvm) {
> > > +		if (list_empty(&vcpu->arch.pinned_mmu_pages))
> > > +			continue;
> > > +		kvm_make_request(req, vcpu);
> > > +		cpu = vcpu->cpu;
> > > +
> > > +		/* Set ->requests bit before we read ->mode */
> > > +		smp_mb();
> > > +
> > > +		if (cpus != NULL && cpu != -1 && cpu != me &&
> > > +		    kvm_vcpu_exiting_guest_mode(vcpu) != OUTSIDE_GUEST_MODE)
> > > +			cpumask_set_cpu(cpu, cpus);
> > > +	}
> > > +	if (unlikely(cpus == NULL))
> > > +		smp_call_function_many(cpu_online_mask, ack_flush, NULL, 1);
> > > +	else if (!cpumask_empty(cpus))
> > > +		smp_call_function_many(cpus, ack_flush, NULL, 1);
> > > +	put_cpu();
> > > +	free_cpumask_var(cpus);
> > > +	return;
> > > +}
> > This is a c&p of make_all_cpus_request(), the only difference is checking
> > of vcpu->arch.pinned_mmu_pages. You can add
> > make_some_cpus_request(..., bool (*predicate)(struct kvm_vcpu *))
> > to kvm_main.c and rewrite make_all_cpus_request() to use it instead.
>
> Half-way through it i decided it was better to c&p.
>
> Can change make_all_cpus_request() though if it makes more sense to you.

If I haven't missed anything and checking of pinned_mmu_pages is indeed
the only difference, then yes, reusing make_all_cpus_request() makes
more sense.

--
			Gleb.