From: Keir Fraser
Subject: Re: Root cause of the issue that HVM guest boots slowly with pvops dom0
Date: Fri, 22 Jan 2010 08:31:56 +0000
References: <4B595CCD.3070509@intel.com>
In-Reply-To: <4B595CCD.3070509@intel.com>
To: "Yang, Xiaowei"
Cc: "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

On 22/01/2010 08:07, "Yang, Xiaowei" wrote:
>> How does the attached patch work for you? It ought to get you the same
>> speedup as your hack.
>
> The speed should be almost the same, despite the two memcpys.

Did you actually try it out and confirm that?

> Some comments on your trial patch:
> 1.
> diff -r 6b61ef936e69 tools/libxc/xc_private.c
> --- a/tools/libxc/xc_private.c  Fri Jan 22 14:50:30 2010 +0800
> +++ b/tools/libxc/xc_private.c  Fri Jan 22 15:32:48 2010 +0800

Yes, missed that all-important bit!

> 2. _xc_clean_hcall_buf needs a more careful NULL pointer check.

Not really: free() accepts NULL. But I suppose it would be clearer to put the
free(hcall_buf) inside the if(hcall_buf) block.

> 3. It modifies only 5 of the 73 hypercalls that invoke mlock. Might other
> hypercalls turn out to be bottlenecks later? :)

The point of a new interface was to be able to convert the callers
incrementally. A bit of care is needed for each one, and most are not, and
probably never will be, bottlenecks.

-- Keir
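[Editor's note: the scheme under discussion, a pre-mlocked bounce buffer that replaces per-hypercall mlock/munlock calls, can be sketched roughly as below. This is an illustrative sketch only, not the actual libxc code; the buffer size, the helper hcall_buf_get(), and the single static hcall_buf are assumptions. Only the names hcall_buf and _xc_clean_hcall_buf come from the thread, as does the point that free(NULL) is legal.]

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Sketch of a bounce-buffer scheme for hypercall arguments: keep one
 * buffer locked in memory for the life of the process and memcpy
 * arguments through it, instead of calling mlock()/munlock() on the
 * caller's buffer for every hypercall.  The two memcpys (in and out)
 * are cheaper than the syscall pair, which is the speedup discussed. */

#define HCALL_BUF_SIZE 4096   /* assumed size, for illustration */

static void *hcall_buf;       /* the pre-locked bounce buffer */

/* Lazily allocate and lock the bounce buffer on first use. */
static void *hcall_buf_get(void)
{
    if (!hcall_buf) {
        hcall_buf = malloc(HCALL_BUF_SIZE);
        if (!hcall_buf)
            return NULL;
        if (mlock(hcall_buf, HCALL_BUF_SIZE)) {
            free(hcall_buf);
            hcall_buf = NULL;
            return NULL;
        }
    }
    return hcall_buf;
}

/* Cleanup in the style discussed above: free(NULL) is legal, so no
 * NULL check is strictly required for free() itself; putting free()
 * inside the if(hcall_buf) block merely makes the intent clearer,
 * and the check is genuinely needed for munlock(). */
static void _xc_clean_hcall_buf(void)
{
    if (hcall_buf) {
        munlock(hcall_buf, HCALL_BUF_SIZE);
        free(hcall_buf);
    }
    hcall_buf = NULL;
}
```

A caller would copy its argument struct into hcall_buf_get()'s buffer, issue the hypercall, and copy any results back out; converting call sites one at a time is exactly the incremental approach described in the reply.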