From: Keir Fraser
Subject: Re: Root cause of the issue that HVM guest boots slowly with pvops dom0
Date: Thu, 21 Jan 2010 08:44:33 +0000
In-Reply-To: <4B580D6A.4030802@intel.com>
To: "Yang, Xiaowei", "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

On 21/01/2010 08:16, "Yang, Xiaowei" wrote:

> - Limiting the vCPU# of dom0 is always the easiest one - you may call it a
> workaround rather than a solution:) It not only reduces the total # of
> resched IPIs ( = mlock# * (vCPU#-1)), but also reduces the cost of each
> handler, because of the spinlock.
> But the impact is still there, more or less, whenever vCPU# > 1.
>
> - To remove the mlock, another sharing method is needed between the dom0
> user-space app and the Xen HV.

A pre-mlock()ed memory page for small (sub-page) hypercalls? Protected by a
semaphore: failure to acquire the semaphore means taking the slow path. Have
all hypercallers in libxc launder their data buffers through a new interface
that tries to grab the semaphore and copy into the pre-allocated buffer.

 -- 
Keir
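
P.S. To make the laundering idea concrete, here is an untested sketch (all
names are illustrative only, not a worked-out libxc patch, and a pthread
mutex trylock stands in for the semaphore): one page is mlock()ed once at
start-of-day, and small hypercall buffers are bounced through it, falling
back to the existing per-call mlock() path on contention or overflow.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void *hcall_page;                      /* pre-mlock()ed bounce page */
static pthread_mutex_t hcall_lock = PTHREAD_MUTEX_INITIALIZER;

/* One-time setup: allocate and pin the bounce page. */
static int hcall_page_init(void)
{
    size_t sz = (size_t)sysconf(_SC_PAGESIZE);
    if (posix_memalign(&hcall_page, sz, sz))
        return -1;
    if (mlock(hcall_page, sz)) {              /* pin once, up front */
        free(hcall_page);
        hcall_page = NULL;
        return -1;
    }
    return 0;
}

/*
 * Launder a hypercall argument buffer: try to copy it into the
 * pre-locked page (fast path: no mlock()/munlock() per call, hence no
 * resched IPIs).  Returns the buffer the hypercall should use, or NULL
 * to tell the caller to fall back to the existing mlock() slow path.
 */
static void *hcall_buf_get(const void *data, size_t len)
{
    if (hcall_page == NULL || len > (size_t)sysconf(_SC_PAGESIZE))
        return NULL;                          /* too big for one page */
    if (pthread_mutex_trylock(&hcall_lock) != 0)
        return NULL;                          /* contended: slow path */
    memcpy(hcall_page, data, len);
    return hcall_page;
}

/* Copy any results back out and release the page for the next caller. */
static void hcall_buf_put(void *buf, void *data, size_t len)
{
    if (buf != hcall_page)
        return;                               /* was a slow-path buffer */
    memcpy(data, buf, len);
    pthread_mutex_unlock(&hcall_lock);
}

Each small hypercall in libxc would then do: buf = hcall_buf_get(arg, len);
if that returns NULL, mlock() the caller's buffer as today; issue the
hypercall on buf; finally hcall_buf_put(buf, arg, len). Using trylock rather
than blocking keeps the fast path free of waiting - under contention we
simply pay the old mlock() cost.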