From mboxrd@z Thu Jan 1 00:00:00 1970 From: "Yang, Xiaowei" Subject: Re: Root cause of the issue that HVM guest boots slowly with pvops dom0 Date: Fri, 22 Jan 2010 16:48:49 +0800 Message-ID: <4B596671.2080805@intel.com> References: Mime-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Sender: xen-devel-bounces@lists.xensource.com Errors-To: xen-devel-bounces@lists.xensource.com To: Keir Fraser Cc: "xen-devel@lists.xensource.com" List-Id: xen-devel@lists.xenproject.org Keir Fraser wrote: > On 22/01/2010 08:07, "Yang, Xiaowei" wrote: > >>> How does the attached patch work for you? It ought to get you the same >>> speedup as your hack. >> The speed should be almost the same, regardless of twice memcpy. > > Did you actually try it out and confirm that? Yes, I tried it out. And there are no obvious speed difference comparing your patch (my comment 1 included) and the hack. > >> Some comments to your trial patch: >> 1. >> diff -r 6b61ef936e69 tools/libxc/xc_private.c >> --- a/tools/libxc/xc_private.c Fri Jan 22 14:50:30 2010 +0800 >> +++ b/tools/libxc/xc_private.c Fri Jan 22 15:32:48 2010 +0800 > > Yes, missed that all-important bit! > >> 2. _xc_clean_hcall_buf needs a more careful NULL pointer check. > > Not really: free() accepts NULL. But I suppose it would be clearer to put > the free(hcall_buf) inside the if(hcall_buf) block. > >> 3. It does modification to 5 out of 73 hypercalls invoking mlock. Other >> problem >> hypercalls could turn out to be the bottleneck later?:) > > The point of a new interface was to be able to do the callers incrementally. > A bit of care is needed on each one, and most are not and probably never > will be bottlenecks. Agree. Anyway when we meet other pvops performance issue later, let's go back and have a check at this aspect. Thanks, xiaowei