From mboxrd@z Thu Jan 1 00:00:00 1970 From: Marcelo Tosatti Subject: Re: [PATCH 4/5] KVM: PPC: Book3S HV: Take the SRCU read lock before looking up memslots Date: Thu, 9 Aug 2012 15:22:38 -0300 Message-ID: <20120809182238.GB12285@amt.cnet> References: <20120806100207.GA8980@bloggs.ozlabs.ibm.com> <20120806100655.GE8980@bloggs.ozlabs.ibm.com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Cc: Alexander Graf , kvm-ppc@vger.kernel.org, kvm@vger.kernel.org To: Paul Mackerras Return-path: Received: from mx1.redhat.com ([209.132.183.28]:56685 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1758898Ab2HIS1Z (ORCPT ); Thu, 9 Aug 2012 14:27:25 -0400 Content-Disposition: inline In-Reply-To: <20120806100655.GE8980@bloggs.ozlabs.ibm.com> Sender: kvm-owner@vger.kernel.org List-ID: On Mon, Aug 06, 2012 at 08:06:55PM +1000, Paul Mackerras wrote: > The generic KVM code uses SRCU (sleeping RCU) to protect accesses > to the memslots data structures against updates due to userspace > adding, modifying or removing memory slots. We need to do that too, > both to avoid accessing stale copies of the memslots and to avoid > lockdep warnings. This therefore adds srcu_read_lock/unlock pairs > around code that accesses and uses memslots. > > Since the real-mode handlers for H_ENTER, H_REMOVE and H_BULK_REMOVE > need to access the memslots, and we don't want to call the SRCU code > in real mode (since we have no assurance that it would only access > the linear mapping), we hold the SRCU read lock for the VM while > in the guest. This does mean that adding or removing memory slots > while some vcpus are executing in the guest will block for up to > two jiffies. This tradeoff is acceptable since adding/removing > memory slots only happens rarely, while H_ENTER/H_REMOVE/H_BULK_REMOVE > are performance-critical hot paths. I would avoid doing this. What prevents the guest entry loop in the kernel to simply reenter without ever unlocking SRCU?