From mboxrd@z Thu Jan  1 00:00:00 1970
From: Juergen Gross
Subject: Re: [PATCH v3 10/10] libxl: fix caller of libxl_cpupool functions
Date: Thu, 16 Jul 2015 11:33:29 +0200
Message-ID: <55A77A69.8010109@suse.com>
References: <1436892073-14186-1-git-send-email-wei.liu2@citrix.com>
 <1436892073-14186-11-git-send-email-wei.liu2@citrix.com>
 <21925.18402.35455.636018@mariner.uk.xensource.com>
 <20150715171643.GI12455@zion.uk.xensource.com>
 <1437037044.13522.219.camel@citrix.com>
 <20150716090711.GM12455@zion.uk.xensource.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
 by lists.xen.org with esmtp (Exim 4.72)
 (envelope-from ) id 1ZFfXn-0002fk-Ov
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2015 09:33:31 +0000
In-Reply-To: <20150716090711.GM12455@zion.uk.xensource.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Wei Liu , Dario Faggioli
Cc: Xen-devel , Ian Jackson , Ian Campbell
List-Id: xen-devel@lists.xenproject.org

On 07/16/2015 11:07 AM, Wei Liu wrote:
> On Thu, Jul 16, 2015 at 10:57:24AM +0200, Dario Faggioli wrote:
>> [Adding Juergen]
>>
>> On Wed, 2015-07-15 at 18:16 +0100, Wei Liu wrote:
>>> On Tue, Jul 14, 2015 at 06:33:22PM +0100, Ian Jackson wrote:
>>>> Wei Liu writes ("[PATCH v3 10/10] libxl: fix caller of libxl_cpupool functions"):
>>
>>>>> --- a/tools/libxl/libxl.c
>>>>> +++ b/tools/libxl/libxl.c
>>>>> @@ -771,8 +771,11 @@ libxl_cpupoolinfo * libxl_list_cpupool(libxl_ctx *ctx, int *nb_pool_out)
>>>>>
>>>>>      poolid = 0;
>>>>>      for (i = 0;; i++) {
>>>>> -        if (cpupool_info(gc, &info, poolid, false))
>>>>> +        libxl_cpupoolinfo_init(&info);
>>>>> +        if (cpupool_info(gc, &info, poolid, false)) {
>>>>> +            libxl_cpupoolinfo_dispose(&info);
>>>>>              break;
>>>>> +        }
>>>>
>>>> I'm not convinced by this change.
>>>>
>>>> I think that this function has broken error handling: if cpupool_info
>>>> fails, it simply ignores the error.
>>>>
>>>
>>> I think the semantics of this function are to get back as much cpupool
>>> info as it can.
>>>
>> Yes, I also think this is the case.
>>
>>> Unfortunately, a failure of cpupool_info can mean either that the
>>> cpupool id does not exist or that some other internal error occurred.
>>> So not cleaning up is the right thing to do (from a semantics point of
>>> view).
>>>
>> Indeed.
>>
>>> I think I need to overhaul cpupool_info a bit if we want to make this
>>> API better.
>>>
>> Well, perhaps having cpupool_info() treat specially the situation
>> where the pool ID is not there may help... However, what would you do
>> once you have this additional piece of information available?
>>
>
> See below.
>
>> Maybe, depending on the error, we can clean up the whole array? Is
>> that what we are after?
>>
>>> I'm not sure if it is possible to interleave pool ids. If so, we need a
>>> way to get back the exact number of cpupools. Dario?
>>>
>> Sorry, what do you mean by 'interleave pool ids'?
>>
>
> For example, pools have ids 0, 1, 4 and 6. In that case, when should we
> stop querying the hypervisor about cpupools? The current model will stop
> at id 2.

No, it won't. cpupool_info(..., poolid, false) will return the next
cpupool with an id >= poolid. Only when the last parameter is "true" is
an exact match required.


Juergen