From mboxrd@z Thu Jan  1 00:00:00 1970
From: Avi Kivity
Subject: Re: [PATCH] kvm memslot read-locking with mmu_lock
Date: Tue, 22 Jan 2008 15:47:28 +0200
Message-ID: <4795F3F0.90403@qumranet.com>
References: <20080121123710.GF6970@v2.random>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
To: Andrea Arcangeli
In-Reply-To: <20080121123710.GF6970-lysg2Xt5kKMAvxtiuMwx3w@public.gmane.org>
Sender: kvm-devel-bounces-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org

Andrea Arcangeli wrote:
> This adds locking to the memslots so they can be looked up with only
> the mmu_lock. Entries with memslot->userspace_addr have to be ignored
> because they're not fully inserted yet.
>

What is the motivation for this?  Calls from mmu notifiers that don't
have mmap_sem held?

>  	/* Allocate page dirty bitmap if needed */
> @@ -311,14 +320,18 @@ int __kvm_set_memory_region(struct kvm *kvm,
>  		memset(new.dirty_bitmap, 0, dirty_bytes);
>  	}
>
> +	spin_lock(&kvm->mmu_lock);
>  	if (mem->slot >= kvm->nmemslots)
>  		kvm->nmemslots = mem->slot + 1;
>
>  	*memslot = new;
> +	spin_unlock(&kvm->mmu_lock);
>
>  	r = kvm_arch_set_memory_region(kvm, mem, old, user_alloc);
>  	if (r) {
> +		spin_lock(&kvm->mmu_lock);
>  		*memslot = old;
> +		spin_unlock(&kvm->mmu_lock);
>  		goto out_free;
>  	}
>

This is arch-independent code; I'm surprised mmu_lock is visible here.

What are the new lookup rules?  We don't hold mmu_lock everywhere we
look up a gfn, do we?

-- 
error compiling committee.c: too many arguments to function