From: Wei Liu <wei.liu2@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: Wei Liu <wei.liu2@citrix.com>,
Dario Faggioli <dario.faggioli@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
George Dunlap <george.dunlap@citrix.com>,
xen-devel@lists.xenproject.org,
Anshul Makkar <anshul.makkar@citrix.com>
Subject: Re: [PATCH] libxl: avoid considering pCPUs outside of the cpupool during NUMA placement
Date: Fri, 21 Oct 2016 11:56:54 +0100
Message-ID: <20161021105654.GU2639@citrix.com>
In-Reply-To: <c882cc44-fab0-43e2-0e98-58c9999a35d9@suse.com>
On Fri, Oct 21, 2016 at 12:50:58PM +0200, Juergen Gross wrote:
> On 21/10/16 12:29, Wei Liu wrote:
> > On Fri, Oct 21, 2016 at 11:56:14AM +0200, Dario Faggioli wrote:
> >> During automatic NUMA placement, information about how many
> >> vCPUs can run on which NUMA nodes is used to spread the load
> >> as evenly as possible.
> >>
> >> Such information is derived from the vCPUs' hard and soft
> >> affinity, but that alone is not enough. In fact, affinity can
> >> be set to a superset of the pCPUs that belong to the cpupool
> >> the domain is in but, of course, the domain will never run on
> >> pCPUs outside of its cpupool.
> >>
> >> Take this into account in the placement algorithm.
> >>
> >> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> >> Reported-by: George Dunlap <george.dunlap@citrix.com>
> >> ---
> >> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> >> Cc: Wei Liu <wei.liu2@citrix.com>
> >> Cc: George Dunlap <george.dunlap@citrix.com>
> >> Cc: Juergen Gross <jgross@suse.com>
> >> Cc: Anshul Makkar <anshul.makkar@citrix.com>
> >> ---
> >> Wei, this is a bugfix, so I think it should go in 4.8.
> >>
> >
> > Yes. I agree.
> >
> >> Ian, this is a bugfix, so I think it is a backporting candidate.
> >>
> >> Also, note that this function does not respect the libxl coding style, as far
> >> as error handling is concerned. However, given that I'm asking for it to go in
> >> now and to be backported, I've tried to keep the changes to a minimum.
> >>
> >> I'm up for a follow-up patch for 4.9 to make the style compliant.
> >>
> >> Thanks, Dario
> >> ---
> >> tools/libxl/libxl_numa.c | 25 ++++++++++++++++++++++---
> >> 1 file changed, 22 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
> >> index 33289d5..f2a719d 100644
> >> --- a/tools/libxl/libxl_numa.c
> >> +++ b/tools/libxl/libxl_numa.c
> >> @@ -186,9 +186,12 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
> >>  {
> >>      libxl_dominfo *dinfo = NULL;
> >>      libxl_bitmap dom_nodemap, nodes_counted;
> >> +    libxl_cpupoolinfo cpupool_info;
> >>      int nr_doms, nr_cpus;
> >>      int i, j, k;
> >>
> >> +    libxl_cpupoolinfo_init(&cpupool_info);
> >> +
> >
> > Please move this into the loop below, see (*).
>
> Why? libxl_cpupoolinfo_dispose() will clear cpupool_info.
>
One _init should pair with one _dispose.
Even if _dispose is idempotent at the moment, it might not be so in the
future.
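
To make it concrete, this is roughly the shape I have in mind. It is only
a sketch of one loop iteration inside nr_vcpus_on_nodes() (CTX, dinfo and
the error handling come from the surrounding function), so the exact calls
and fields may differ from what the patch ends up doing:

    for (i = 0; i < nr_doms; i++) {
        libxl_cpupoolinfo cpupool_info;

        /* One _init per iteration... */
        libxl_cpupoolinfo_init(&cpupool_info);

        if (libxl_cpupool_info(CTX, &cpupool_info, dinfo[i].cpupool))
            goto next;

        /* ... use cpupool_info.cpumap to restrict the counting ... */

     next:
        /* ... paired with exactly one _dispose, even on the error path. */
        libxl_cpupoolinfo_dispose(&cpupool_info);
    }
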
> >
> >>      dinfo = libxl_list_domain(CTX, &nr_doms);
> >>      if (dinfo == NULL)
> >>          return ERROR_FAIL;
> >> @@ -205,12 +208,18 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
> >>      }
> >>
> >>      for (i = 0; i < nr_doms; i++) {
> >> -        libxl_vcpuinfo *vinfo;
> >> -        int nr_dom_vcpus;
> >> +        libxl_vcpuinfo *vinfo = NULL;
> >
> > This is not necessary because vinfo is written right away.
>
> No, the first "goto next" happens before vinfo is written.
>
Yes, this is necessary. Thanks for catching this.
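
For the record, here is roughly why it matters. Again only a sketch,
condensed from the loop body in nr_vcpus_on_nodes(); CTX, dinfo, nr_cpus
and the error handling come from the surrounding function and may not
match the patch exactly:

    libxl_vcpuinfo *vinfo = NULL;   /* must start out NULL */
    int nr_dom_vcpus = 0;

    if (libxl_cpupool_info(CTX, &cpupool_info, dinfo[i].cpupool))
        goto next;                  /* reached before vinfo is assigned */

    vinfo = libxl_list_vcpu(CTX, dinfo[i].domid, &nr_dom_vcpus, &nr_cpus);
    if (vinfo == NULL)
        goto next;

    /* ... count this domain's vCPUs on each node ... */

 next:
    /* With an uninitialized vinfo this would free a garbage pointer;
     * with vinfo == NULL and nr_dom_vcpus == 0 it is a no-op. */
    libxl_vcpuinfo_list_free(vinfo, nr_dom_vcpus);
    libxl_cpupoolinfo_dispose(&cpupool_info);
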
Wei.