From: Dario Faggioli <dfaggioli@suse.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH] tools: libxl/xl: run NUMA placement even when an hard-affinity is set
Date: Mon, 20 Aug 2018 16:22:43 +0200
Message-ID: <b346c13a4a05c75d531da1dcfa7ea684a0d5089e.camel@suse.com>
In-Reply-To: <20180820101428.djudte4z2wye3cz3@citrix.com>



On Mon, 2018-08-20 at 11:14 +0100, Wei Liu wrote:
> On Fri, Aug 17, 2018 at 07:03:03PM +0200, Dario Faggioli wrote:
> > @@ -27,6 +27,8 @@
> >  
> >  #include "_paths.h"
> >  
> > +//#define DEBUG 1
> > +
> 
> Stray changes here?
> 
> You can use NDEBUG instead.
> 
I found this in one other place in libxl (libxl_event.c), and figured I
could use it here too.

The reason is that this guards a (potentially) rather expensive check,
which I personally consider too costly even for a debug build. At the
same time, it could be handy for us developers to have it there, and to
be able to enable it when making changes to the affinity and/or NUMA
placement code.

But I don't have a very strong opinion; I'll change it to use NDEBUG if
that's what you prefer.

> > @@ -162,6 +165,38 @@ static int numa_place_domain(libxl__gc *gc, uint32_t domid,
> >      rc = libxl_cpupool_info(CTX, &cpupool_info, cpupool);
> >      if (rc)
> >          goto out;
> > +    map = &cpupool_info.cpumap;
> > +
> > +    /*
> > +     * If there's a well defined hard affinity mask (i.e., the same one for all
> > +     * the vcpus), we can try to run the placement considering only the pcpus
> > +     * within such mask.
> > +     */
> > +    if (info->num_vcpu_hard_affinity)
> > +    {
> 
> Placement of "{" is wrong.
> 
Yep, sorry.

> > +        int j;
> > +
> > +        for (j = 0; j < info->num_vcpu_hard_affinity; j++)
> > +            assert(libxl_bitmap_equal(&info->vcpu_hard_affinity[0],
> > +                                      &info->vcpu_hard_affinity[j], 0));
> > +#endif /* DEBUG */
> 
> But why should the above be debug only? The assumption doesn't seem to
> always hold.
> 
I'm not sure I'm getting your point. If we're here, we do indeed want
to be sure that we don't have a different hard-affinity mask for the
various vcpus... In fact, if we do, which one should we use for
placement?

So, yes, if we are here, I think the assert() should be true. I just
considered this check useful only to developers, and too expensive even
for debug builds. But, as said above, I can gate it with NDEBUG if you
prefer.

> > +        /*
> > +         * Hard affinity should _really_ contain cpus that are inside our
> > +         * cpupool. Anyway, if it does not, log a warning and only use the
> > +         * cpupool's cpus for placement.
> > +         */
> > +        if (!libxl_bitmap_is_empty(&cpumap))
> > +            map = &cpumap;
> > +        else
> > +            LOG(WARN, "Hard affinity completely outside of domain's cpupool?");
> 
> Should this be an error?
> 
> What is the expected interaction for hard affinity and cpupool?
> 
cpupools rulez. :-D

Basically, a domain can't possibly run on any cpu which is outside of
the pool where the domain itself lives, no matter what the affinity is.

BTW, I have tested this more thoroughly, and I believe there is no need
to add this WARN. In fact, if the domain is being created inside a pool,
setting its hard affinity to a mask which is completely disjoint from
the cpupool's one fails _before_ we get here.
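
FWIW, here is a minimal sketch of the kind of intersection being
discussed. It is not the code from the patch (the hunk above does not
show how cpumap gets computed), and the use of libxl_bitmap_and() plus
the error handling are assumptions on my side:

    /* Restrict placement to vcpu 0's hard affinity, intersected with
     * the cpus of the domain's cpupool. */
    libxl_bitmap cpumap;

    libxl_bitmap_init(&cpumap);
    rc = libxl_bitmap_and(CTX, &cpumap, &info->vcpu_hard_affinity[0],
                          &cpupool_info.cpumap);
    if (rc)
        goto out;

    /* If the intersection is empty, fall back to the cpupool alone:
     * a domain can never run outside of its own cpupool anyway. */
    if (!libxl_bitmap_is_empty(&cpumap))
        map = &cpumap;

    /* (cpumap would also need to be disposed of at the out label.) */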

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/



Thread overview: 6+ messages
2018-08-17 17:03 [PATCH] tools: libxl/xl: run NUMA placement even when an hard-affinity is set Dario Faggioli
2018-08-20 10:14 ` Wei Liu
2018-08-20 14:22   ` Dario Faggioli [this message]
2018-08-21 10:25     ` Ian Jackson
2018-08-21 15:14       ` Dario Faggioli
2018-08-21 15:31         ` Ian Jackson
