From: "Yang, Xiaowei"
Subject: Re: Root cause of the issue that HVM guest boots slowly with pvops dom0
Date: Fri, 22 Jan 2010 16:07:41 +0800
Message-ID: <4B595CCD.3070509@intel.com>
To: Keir Fraser
Cc: "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

Keir Fraser wrote:
> On 21/01/2010 09:27, "Keir Fraser" wrote:
>
>>> A pre-mlock()ed memory page for small (sub-page) hypercalls? Protected with
>>> a semaphore: failure to acquire semaphore means take slow path. Have all
>>> hypercallers in libxc launder their data buffers through a new interface
>>> that tries to grab and copy into the pre-allocated buffer.
>>
>> I'll sort out a trial patch for this myself.
>
> How does the attached patch work for you? It ought to get you the same
> speedup as your hack.

The speed should be almost the same, despite the two extra memcpy's (into
and out of the bounce buffer).

Some comments on your trial patch:

1. The bounce page is allocated but never locked; it should be locked once,
right after allocation:

diff -r 6b61ef936e69 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c  Fri Jan 22 14:50:30 2010 +0800
+++ b/tools/libxc/xc_private.c  Fri Jan 22 15:32:48 2010 +0800
@@ -188,7 +188,10 @@
          ((hcall_buf = calloc(1, sizeof(*hcall_buf))) != NULL) )
         pthread_setspecific(hcall_buf_pkey, hcall_buf);

     if ( hcall_buf->buf == NULL )
+    {
         hcall_buf->buf = xc_memalign(PAGE_SIZE, PAGE_SIZE);
+        lock_pages(hcall_buf->buf, PAGE_SIZE);
+    }

     if ( (len < PAGE_SIZE) &&
          hcall_buf && hcall_buf->buf && !hcall_buf->oldbuf )

2. _xc_clean_hcall_buf needs a more careful NULL-pointer check.

3. The patch converts only 5 of the 73 hypercall call sites that invoke
mlock; the remaining ones could turn out to be bottlenecks later. :)

Thanks,
xiaowei
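To make the pattern under discussion concrete, below is a minimal, self-contained sketch of the per-thread pre-mlock()ed bounce buffer. It is not the actual libxc code: the names bounce_pre/bounce_post, BOUNCE_SIZE, and struct bounce_buf are hypothetical, and C11 _Thread_local stands in for the pthread_getspecific() key used in the real patch. It folds in the lock-at-allocation fix from point 1 and the NULL/ownership checks from point 2.

```c
/* Sketch (hypothetical names) of the per-thread bounce-buffer pattern:
 * small hypercall arguments are copied into one page that is mlock()ed
 * once, at allocation, so the hot path avoids an mlock/munlock per call.
 * libxc keys the buffer off pthread_getspecific(); C11 _Thread_local is
 * used here only to keep the sketch self-contained. */
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define BOUNCE_SIZE 4096

struct bounce_buf {
    void *buf;      /* pre-locked page, allocated lazily */
    void *oldbuf;   /* caller's buffer while the page is in use */
};

static _Thread_local struct bounce_buf tls_bounce;

/* Fast path: copy a small buffer into the locked page (first memcpy).
 * Fall back to the caller's own buffer when the page is unavailable;
 * the slow path then has to mlock() that buffer as before. */
static void *bounce_pre(void *data, size_t len)
{
    struct bounce_buf *b = &tls_bounce;

    if (b->buf == NULL) {
        if (posix_memalign(&b->buf, BOUNCE_SIZE, BOUNCE_SIZE) != 0)
            return data;                /* allocation failed: slow path */
        mlock(b->buf, BOUNCE_SIZE);     /* lock once, at allocation
                                         * (the fix from point 1) */
    }

    if (len > BOUNCE_SIZE || b->oldbuf != NULL)
        return data;                    /* too big or busy: slow path */

    memcpy(b->buf, data, len);
    b->oldbuf = data;
    return b->buf;
}

/* Copy results back to the caller (second memcpy) and release the page.
 * Note the NULL/ownership checks (point 2): only undo a real bounce. */
static void bounce_post(void *data, size_t len)
{
    struct bounce_buf *b = &tls_bounce;

    if (b->buf != NULL && data == b->buf && b->oldbuf != NULL) {
        memcpy(b->oldbuf, b->buf, len);
        b->oldbuf = NULL;
    }
}
```

A hypercall wrapper would bounce its argument with arg = bounce_pre(arg, len) before the ioctl and call bounce_post(arg, len) afterwards; for sub-page buffers the two memcpy's are cheap compared with the mlock/munlock pair they replace.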