From: George Dunlap <george.dunlap@eu.citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH] xen: sched_credit: filter node-affinity mask against online cpus
Date: Mon, 16 Sep 2013 16:27:13 +0100
Message-ID: <52372351.1000603@eu.citrix.com>
In-Reply-To: <1379345002.6095.150.camel@Solace>

On 16/09/13 16:23, Dario Faggioli wrote:
> On Mon, 2013-09-16 at 15:00 +0100, George Dunlap wrote:
>> On 13/09/13 17:09, Dario Faggioli wrote:
>>
>>> diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
>>> index dbe6de6..d76cd88 100644
>>> --- a/xen/common/sched_credit.c
>>> +++ b/xen/common/sched_credit.c
>>> @@ -627,10 +638,18 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
>>>        int balance_step;
>>>    
>>>        online = cpupool_scheduler_cpumask(vc->domain->cpupool);
>>> +    cpumask_and(&cpus, vc->cpu_affinity, online);
>>> +
>>>        for_each_csched_balance_step( balance_step )
>>>        {
>>> +        /*
>>> +         * We filter the node-affinity mask against
>>> +         * [vc->cpu_affinity && online] here to avoid problems if all
>>> +         * the cpus in the node-affinity mask are offline (e.g.,
>>> +         * because the domain moved to a different cpupool).
>>> +         */
>> It's probably worth mentioning that the reason you do the cpumask_and
>> here and not at the other call site is that cpumask_cycle is called
>> after this one (and it will choke on an empty mask), but not at the
>> other call site.
>>
> Well, it is indeed what the comment says, or at least what I wanted it
> to say. :-)
>
> I tried to concentrate on the actual issue, which is to avoid picking an
> offline cpu, independently of which specific ASSERT is the one that
> triggers.

Ok, but the question is, why AND it with online cpus here, but not in 
the other place?  If the "actual issue" is that the mask may be empty, 
it should be done in both places.  If the "actual issue" is that 
cpumask_cycle() chokes on empty masks, then that's an important thing to 
point out, so poor programmers don't get confused as to why it's 
different. :-)
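
For illustration, here is a stand-alone toy model (not Xen's actual
cpumask implementation; the names, mask type, and NR_CPUS value are
made up) of what goes wrong when the mask ends up empty: the "next cpu"
search falls off the end and hands back an out-of-range index, which is
exactly the kind of thing the scheduler's ASSERTs are there to catch.

/*
 * Toy model (NOT Xen's cpumask code) of cycling over an empty mask:
 * with no bits set, the search returns NR_CPUS, which a caller would
 * then use as a bogus cpu index.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 8

typedef uint64_t cpumask_t;              /* one bit per cpu */

/* First set bit at or after 'start'; NR_CPUS if there is none. */
static int mask_next(int start, cpumask_t m)
{
    for ( int cpu = start; cpu < NR_CPUS; cpu++ )
        if ( m & (1ULL << cpu) )
            return cpu;
    return NR_CPUS;
}

/* Next set bit after 'n', wrapping around; NR_CPUS if mask is empty. */
static int mask_cycle(int n, cpumask_t m)
{
    int nxt = mask_next(n + 1, m);

    if ( nxt == NR_CPUS )
        nxt = mask_next(0, m);
    return nxt;
}

int main(void)
{
    cpumask_t affinity = 0x0c;           /* vcpu wants cpus 2-3 ...    */
    cpumask_t online   = 0x03;           /* ... but only 0-1 online    */
    cpumask_t cpus     = affinity & online;          /* empty!         */

    int cpu = mask_cycle(0, cpus);
    printf("picked cpu %d of %d\n", cpu, NR_CPUS);
    assert(cpu < NR_CPUS);               /* trips, as the ASSERT would */
    return 0;
}

Compiling and running this aborts on the assert with cpu == NR_CPUS,
which is the failure mode being discussed above.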

  -George


Thread overview: 5+ messages
2013-09-13 16:09 [PATCH] xen: sched_credit: filter node-affinity mask against online cpus Dario Faggioli
2013-09-16 14:00 ` George Dunlap
2013-09-16 15:23   ` Dario Faggioli
2013-09-16 15:27     ` George Dunlap [this message]
2013-09-17 15:14       ` Dario Faggioli
