From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ingo Molnar
Subject: Re: [PATCH 4/6] kvm tools: Add rwlock wrapper
Date: Thu, 26 May 2011 20:05:18 +0200
Message-ID: <20110526180518.GA3572@elte.hu>
References: <1306419950-19064-1-git-send-email-levinsasha928@gmail.com>
 <1306419950-19064-4-git-send-email-levinsasha928@gmail.com>
 <1306426743.3065.34.camel@lappy>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Pekka Enberg, john@jfloren.net, kvm@vger.kernel.org,
 asias.hejun@gmail.com, gorcunov@gmail.com, prasadjoshi124@gmail.com
To: Sasha Levin
Return-path:
Received: from mx2.mail.elte.hu ([157.181.151.9]:37562 "EHLO mx2.mail.elte.hu"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1751600Ab1EZSF3 (ORCPT ); Thu, 26 May 2011 14:05:29 -0400
Content-Disposition: inline
In-Reply-To: <1306426743.3065.34.camel@lappy>
Sender: kvm-owner@vger.kernel.org
List-ID:

* Sasha Levin wrote:

> On Thu, 2011-05-26 at 19:02 +0300, Pekka Enberg wrote:
> > On Thu, 26 May 2011, Sasha Levin wrote:
> > > Adds an rwlock wrapper which, like the mutex wrapper, makes rwlock
> > > calls similar to their kernel counterparts.
> > >
> > > Signed-off-by: Sasha Levin
> >
> > There's no explanation of why a mutex isn't sufficient. The pthread
> > locking primitives aren't all that great in practice, so unless
> > you have some correctness issue that requires an rwlock, or some
> > numbers, I'd prefer you go for a mutex.
>
> I've added some rwlocks because of what Ingo said yesterday about
> adding/removing devices after the first initialization phase.
>
> Take the MMIO lock, for example: since we can now run SMP guests, we
> may have multiple MMIO exits (one from each VCPU thread). Each of
> those exits leads to searching the MMIO rbtree.
>
> We could use a mutex to lock it, but that would just mean those
> threads block there instead of searching the MMIO tree concurrently,
> which makes the search serial instead of parallel.
>
> It's hard to bring 'real' numbers at this stage because the only
> 'real' device we have which uses MMIO is the VESA driver, and we
> can't really simulate many VCPUs writing to it :)

I'd suggest keeping it simple first: rwlocks are nasty and will bounce
a cacheline just as much as a mutex does. If lookup scalability ever
becomes an issue, we can extend RCU to tools/kvm/.

Thanks,

	Ingo