From mboxrd@z Thu Jan 1 00:00:00 1970
From: Avi Kivity
Subject: Re: [patch 07/10] KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update
Date: Tue, 22 Sep 2009 09:59:04 +0300
Message-ID: <4AB875B8.80904@redhat.com>
References: <20090921233711.213665413@amt.cnet> <20090921234124.596305294@amt.cnet>
In-Reply-To: <20090921234124.596305294@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
To: Marcelo Tosatti
Cc: kvm@vger.kernel.org
Sender: kvm-owner@vger.kernel.org
List-ID:

On 09/22/2009 02:37 AM, Marcelo Tosatti wrote:
> Use two steps for memslot deletion: mark the slot invalid (which stops
> instantiation of new shadow pages for that slot, but allows destruction),
> then instantiate the new empty slot.
>
> Also simplifies kvm_handle_hva locking.
>
>  unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>  {
> -	int i;
> +	int i, idx;
>  	unsigned int nr_mmu_pages;
>  	unsigned int nr_pages = 0;
> +	struct kvm_memslots *slots;
>
> -	for (i = 0; i < kvm->memslots->nmemslots; i++)
> -		nr_pages += kvm->memslots->memslots[i].npages;
> +	idx = srcu_read_lock(&kvm->srcu);
>

Doesn't the caller hold the srcu_read_lock() here?

>
> Index: kvm-slotslock/arch/x86/kvm/vmx.c
> ===================================================================
> --- kvm-slotslock.orig/arch/x86/kvm/vmx.c
> +++ kvm-slotslock/arch/x86/kvm/vmx.c
> @@ -24,6 +24,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include "kvm_cache_regs.h"
> @@ -1465,10 +1466,18 @@ static void enter_pmode(struct kvm_vcpu
>  static gva_t rmode_tss_base(struct kvm *kvm)
>  {
>  	if (!kvm->arch.tss_addr) {
> -		gfn_t base_gfn = kvm->memslots->memslots[0].base_gfn +
> -			kvm->memslots->memslots[0].npages - 3;
> +		struct kvm_memslots *slots;
> +		gfn_t base_gfn;
> +		int idx;
> +
> +		idx = srcu_read_lock(&kvm->srcu);
> +		slots = rcu_dereference(kvm->memslots);
> +		base_gfn = slots->memslots[0].base_gfn +
> +			slots->memslots[0].npages - 3;
> +		srcu_read_unlock(&kvm->srcu, idx);
>  		return base_gfn << PAGE_SHIFT;
>  	}
> +

And here?  Maybe we should take the srcu_lock in vcpu_load/put and only
drop it when going into vcpu context or explicitly sleeping, just to
simplify things.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.
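
As a rough illustration of the vcpu_load/vcpu_put idea suggested above (not
code from the posted series), the SRCU read side could be entered when a vcpu
is loaded and left on vcpu_put, so most vcpu-context code never calls
srcu_read_lock() itself.  The per-vcpu srcu_idx field and the placement of the
per-arch hooks are assumptions made for the sketch:

/*
 * Sketch only: hold kvm->srcu for the whole time the vcpu is loaded.
 * vcpu->srcu_idx is a hypothetical field used to remember the SRCU index.
 */
void vcpu_load(struct kvm_vcpu *vcpu)
{
	mutex_lock(&vcpu->mutex);
	/* Enter the SRCU read side; memslot lookups are safe until vcpu_put. */
	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
	/* ... existing per-arch load work ... */
}

void vcpu_put(struct kvm_vcpu *vcpu)
{
	/* ... existing per-arch put work ... */
	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
	mutex_unlock(&vcpu->mutex);
}

/*
 * At a point that enters vcpu (guest) context or may sleep for a long time,
 * the read side would be dropped and re-taken around that point:
 */
	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
	/* ... enter guest / sleep ... */
	vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);

Whether the index lives on the vcpu or is threaded through as a parameter is a
design choice; stashing it on the vcpu keeps the bulk of the arch code unaware
of SRCU, which is the simplification being suggested.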