From: Tim Deegan
Subject: Re: lock in vhpet
Date: Mon, 23 Apr 2012 10:14:45 +0100
Message-ID: <20120423091445.GA17920@ocelot.phlegethon.org>
References: <20120419082717.GA23663@ocelot.phlegethon.org>
To: "Zhang, Yang Z"
Cc: "xen-devel@lists.xensource.com", Keir Fraser, "andres@lagarcavilla.org"
List-Id: xen-devel@lists.xenproject.org

At 07:36 +0000 on 23 Apr (1335166577), Zhang, Yang Z wrote:
> The p2m lock in __get_gfn_type_access() is the culprit. Here is the
> profiling data over 10 seconds:
>
> (XEN) p2m_lock 1 lock:
> (XEN)   lock:  190733(00000000:14CE5726), block:  67159(00000007:6AAA53F3)
>
> That data was collected while a win8 guest (16 vcpus) was idle.  The 16
> VCPUs spent a combined 30 seconds blocked during the 10 seconds of
> profiling, which means 18% of CPU cycles are spent waiting for the p2m
> lock -- and that is with an idle guest.  The impact is more serious
> when a workload runs inside the guest.  I noticed that this change was
> added by cs 24770; before it, we did not take the p2m lock in
> __get_gfn_type_access().  So is this lock really necessary?

Ugh; that certainly is a regression.  We used to be lock-free on p2m
lookups and losing that will be bad for perf in lots of ways.

IIRC the original aim was to use fine-grained per-page locks for this --
there should be no need to hold a per-domain lock during a normal read.
Andres, what happened to that code?

Making it an rwlock would be tricky as this interface doesn't
differentiate readers from writers.  For the common case (no
sharing/paging/mem-access) it ought to be a win since there is hardly
any writing.  But it would be better to make this particular lock/unlock
go away.

Tim.
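
For reference, a minimal, generic sketch of the rwlock idea discussed
above (plain pthreads, not Xen's p2m code; the table layout and the
lookup_gfn/update_gfn names are invented purely for illustration).  It
shows why a read-mostly lookup path benefits from a reader/writer lock,
and why that only pays off if lookup callers can actually take the read
side -- which is exactly what the current interface cannot express.

/* Hypothetical sketch, NOT the Xen p2m implementation: a toy
 * gfn -> mfn table protected by a reader/writer lock. */
#include <pthread.h>
#include <stdint.h>

#define TABLE_SIZE 1024

struct toy_p2m {
    pthread_rwlock_t lock;            /* protects entries[] */
    uint64_t entries[TABLE_SIZE];     /* toy gfn -> mfn mapping */
};

static struct toy_p2m p2m = { .lock = PTHREAD_RWLOCK_INITIALIZER };

/* Read-side lookup: many vcpus can hold the read lock at once, so the
 * contention seen with a single exclusive per-domain lock disappears
 * for the common no-sharing/no-paging/no-mem-access case. */
static uint64_t lookup_gfn(unsigned long gfn)
{
    uint64_t mfn;

    pthread_rwlock_rdlock(&p2m.lock);
    mfn = p2m.entries[gfn % TABLE_SIZE];
    pthread_rwlock_unlock(&p2m.lock);
    return mfn;
}

/* Write-side update: rare, and takes the lock exclusively. */
static void update_gfn(unsigned long gfn, uint64_t mfn)
{
    pthread_rwlock_wrlock(&p2m.lock);
    p2m.entries[gfn % TABLE_SIZE] = mfn;
    pthread_rwlock_unlock(&p2m.lock);
}

If the lookup interface cannot tell readers from writers, every caller
has to take the write side to stay safe, and the rwlock degenerates
into the same exclusive lock that is being contended today -- hence the
point that it would be better for the lock/unlock on this path to go
away entirely.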