From: George Dunlap <george.dunlap@eu.citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
xen-devel@lists.xen.org
Subject: Re: [PATCH v2] xen: sched_credit: filter node-affinity mask against online cpus
Date: Thu, 19 Sep 2013 11:46:51 +0100
Message-ID: <523AD61B.7000102@eu.citrix.com>
In-Reply-To: <20130917151644.2240.68085.stgit@hit-nxdomain.opendns.com>
On 17/09/13 16:16, Dario Faggioli wrote:
> Filter the node-affinity mask against the online cpus in
> _csched_cpu_pick(), as not doing so may result in the domain's
> node-affinity mask (as retrieved by csched_balance_cpumask()) and
> online mask (as retrieved by cpupool_scheduler_cpumask()) having an
> empty intersection.
>
> Therefore, when attempting a node-affinity load balancing step
> and running this:
>
> ...
> /* Pick an online CPU from the proper affinity mask */
> csched_balance_cpumask(vc, balance_step, &cpus);
> cpumask_and(&cpus, &cpus, online);
> ...
>
> we end up with an empty cpumask (in cpus). At this point, in
> the following code:
>
> ....
> /* If present, prefer vc's current processor */
> cpu = cpumask_test_cpu(vc->processor, &cpus)
> ? vc->processor
> : cpumask_cycle(vc->processor, &cpus);
> ....
>
> an ASSERT (from inside cpumask_cycle()) triggers like this:
>
> (XEN) Xen call trace:
> (XEN) [<ffff82d08011b124>] _csched_cpu_pick+0x1d2/0x652
> (XEN) [<ffff82d08011b5b2>] csched_cpu_pick+0xe/0x10
> (XEN) [<ffff82d0801232de>] vcpu_migrate+0x167/0x31e
> (XEN) [<ffff82d0801238cc>] cpu_disable_scheduler+0x1c8/0x287
> (XEN) [<ffff82d080101b3f>] cpupool_unassign_cpu_helper+0x20/0xb4
> (XEN) [<ffff82d08010544f>] continue_hypercall_tasklet_handler+0x4a/0xb1
> (XEN) [<ffff82d080127793>] do_tasklet_work+0x78/0xab
> (XEN) [<ffff82d080127a70>] do_tasklet+0x5f/0x8b
> (XEN) [<ffff82d080158985>] idle_loop+0x57/0x5e
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) Assertion 'cpu < nr_cpu_ids' failed at /home/dario/Sources/xen/xen/xen.git/xen/include/xe:16481
>
> It is, for example, sufficient to have a domain with node-affinity
> to NUMA node 1 running: issuing `xl cpupool-numa-split' then makes
> the above happen. That is because, by default, all the existing
> domains remain assigned to the first cpupool, which, after the
> cpupool-numa-split, only includes NUMA node 0.
>
> This change prevents that by generalizing the function used
> for figuring out whether a node-affinity load balancing step
> is legit or not. This way we can, in _csched_cpu_pick(),
> figure out early enough that the mask would end up empty,
> skip the step altogether, and avoid the splat.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
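
For anyone reading along: the ASSERT fires because, with an empty mask,
cpumask_cycle() finds no valid CPU and its 'cpu < nr_cpu_ids' check
trips. The early check the changelog describes could look roughly like
the sketch below. This is only an illustration of the idea (bail out of
the node-affinity balancing step as soon as we know its mask cannot
intersect the online mask); the helper name and its exact signature are
assumptions of mine, not the actual patch hunk.

    /*
     * Sketch only: report whether a balancing step is worth attempting,
     * i.e. whether the step's affinity mask intersects the online mask.
     */
    static int csched_balance_step_useful(const struct vcpu *vc,
                                          int balance_step,
                                          const cpumask_t *online)
    {
        cpumask_t mask;

        /* Build the mask for this step (node- or cpu-affinity). */
        csched_balance_cpumask(vc, balance_step, &mask);

        /*
         * If this mask and the online mask do not intersect, the later
         * cpumask_cycle() would hit the ASSERT shown in the changelog,
         * so tell the caller to skip this step.
         */
        return cpumask_intersects(&mask, online);
    }

_csched_cpu_pick() would then attempt the node-affinity step only when
such a check returns non-zero, which is what avoids the splat quoted
above.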