From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
xen-devel@lists.xenproject.org, Juergen Gross <JGross@suse.com>
Subject: Re: [PATCH 2/5] xen: credit2: never consider CPUs outside of our cpupool.
Date: Thu, 9 Feb 2017 11:32:53 +0100 [thread overview]
Message-ID: <1486636373.3042.28.camel@citrix.com> (raw)
In-Reply-To: <589C41BA020000780013816B@prv-mh.provo.novell.com>
[-- Attachment #1.1.1: Type: text/plain, Size: 589 bytes --]
On Thu, 2017-02-09 at 02:17 -0700, Jan Beulich wrote:
> > > > On 08.02.17 at 19:55, <dario.faggioli@citrix.com> wrote:
> I'm going to commit what I have later today, and
> I'll pull in the one extra backport next time round.
>
Ok, patch attached.
I've tested it on top of current tip of staging-4.7.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
[-- Attachment #1.1.2: xen-credit2-never-consider-cpus-outside-cpupool-4.7.patch --]
[-- Type: text/x-patch, Size: 7797 bytes --]
commit 09725f8af37415c30a1a53d4a34e67fabcba105d
Author: Dario Faggioli <dario.faggioli@citrix.com>
Date: Wed Feb 8 19:01:53 2017 +0100
xen: credit2: never consider CPUs outside of our cpupool.
In fact, relying on the mask of what pCPUs belong to
which Credit2 runqueue is not enough. If we only do that,
when Credit2 is the boot scheduler, we may ASSERT() or
panic when moving a pCPU from Pool-0 to another cpupool.
This is because pCPUs outside of any pool are considered
part of cpupool0. This puts us at risk of a crash when those
same pCPUs are added to another pool and something
other than the idle domain is found to be running
on them.
Note that, even if we prevent the above from happening
(which is the purpose of this patch), the situation is
still pretty bad. In fact, when we remove a pCPU from Pool-0:
- in Credit1, we do *not* update prv->ncpus and
prv->credit, which means we're considering the wrong
total credits when doing accounting;
- in Credit2, the pCPU remains part of one runqueue,
and is hence at least considered during load balancing,
even if no vCPU should really run there.
In Credit1, this "only" causes skewed accounting and
no crashes because there is a lot of `cpumask_and`ing
going on with the cpumask of the domains' cpupool
(which, BTW, comes at a price).
A quick and not too involved (and easily backportable)
solution for Credit2 is to do exactly the same.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 25b4c91..35dad15 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -331,19 +331,22 @@ static int csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc);
*/
static int get_fallback_cpu(struct csched2_vcpu *svc)
{
- int fallback_cpu, cpu = svc->vcpu->processor;
+ struct vcpu *v = svc->vcpu;
+ int cpu = v->processor;
- if ( likely(cpumask_test_cpu(cpu, svc->vcpu->cpu_hard_affinity)) )
- return cpu;
+ cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+ cpupool_domain_cpumask(v->domain));
- cpumask_and(cpumask_scratch_cpu(cpu), svc->vcpu->cpu_hard_affinity,
- &svc->rqd->active);
- fallback_cpu = cpumask_first(cpumask_scratch_cpu(cpu));
- if ( likely(fallback_cpu < nr_cpu_ids) )
- return fallback_cpu;
+ if ( likely(cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
+ return cpu;
- cpumask_and(cpumask_scratch, svc->vcpu->cpu_hard_affinity,
- cpupool_domain_cpumask(svc->vcpu->domain));
+ if ( likely(cpumask_intersects(cpumask_scratch_cpu(cpu),
+ &svc->rqd->active)) )
+ {
+ cpumask_and(cpumask_scratch_cpu(cpu), &svc->rqd->active,
+ cpumask_scratch_cpu(cpu));
+ return cpumask_first(cpumask_scratch_cpu(cpu));
+ }
ASSERT(!cpumask_empty(cpumask_scratch_cpu(cpu)));
@@ -582,9 +585,12 @@ runq_tickle(const struct scheduler *ops, unsigned int cpu, struct csched2_vcpu *
goto tickle;
}
+ cpumask_and(cpumask_scratch_cpu(cpu), new->vcpu->cpu_hard_affinity,
+ cpupool_domain_cpumask(new->vcpu->domain));
+
/* Get a mask of idle, but not tickled, that new is allowed to run on. */
cpumask_andnot(&mask, &rqd->idle, &rqd->tickled);
- cpumask_and(&mask, &mask, new->vcpu->cpu_hard_affinity);
+ cpumask_and(&mask, &mask, cpumask_scratch_cpu(cpu));
/* If it's not empty, choose one */
i = cpumask_cycle(cpu, &mask);
@@ -599,7 +605,7 @@ runq_tickle(const struct scheduler *ops, unsigned int cpu, struct csched2_vcpu *
* that new is allowed to run on. */
cpumask_andnot(&mask, &rqd->active, &rqd->idle);
cpumask_andnot(&mask, &mask, &rqd->tickled);
- cpumask_and(&mask, &mask, new->vcpu->cpu_hard_affinity);
+ cpumask_and(&mask, &mask, cpumask_scratch_cpu(cpu));
for_each_cpu(i, &mask)
{
@@ -1160,6 +1166,9 @@ choose_cpu(const struct scheduler *ops, struct vcpu *vc)
return get_fallback_cpu(svc);
}
+ cpumask_and(cpumask_scratch_cpu(cpu), vc->cpu_hard_affinity,
+ cpupool_domain_cpumask(vc->domain));
+
/* First check to see if we're here because someone else suggested a place
* for us to move. */
if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
@@ -1169,16 +1178,14 @@ choose_cpu(const struct scheduler *ops, struct vcpu *vc)
printk("%s: Runqueue migrate aborted because target runqueue disappeared!\n",
__func__);
}
- else
+ else if ( cpumask_intersects(cpumask_scratch_cpu(cpu),
+ &svc->migrate_rqd->active) )
{
- cpumask_and(cpumask_scratch_cpu(cpu), vc->cpu_hard_affinity,
+ cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
&svc->migrate_rqd->active);
new_cpu = cpumask_any(cpumask_scratch_cpu(cpu));
- if ( new_cpu < nr_cpu_ids )
- {
- d2printk("%pv +\n", svc->vcpu);
- goto out_up;
- }
+ d2printk("%pv +\n", svc->vcpu);
+ goto out_up;
}
/* Fall-through to normal cpu pick */
}
@@ -1208,12 +1215,12 @@ choose_cpu(const struct scheduler *ops, struct vcpu *vc)
*/
if ( rqd == svc->rqd )
{
- if ( cpumask_intersects(vc->cpu_hard_affinity, &rqd->active) )
+ if ( cpumask_intersects(cpumask_scratch_cpu(cpu), &rqd->active) )
rqd_avgload = rqd->b_avgload - svc->avgload;
}
else if ( spin_trylock(&rqd->lock) )
{
- if ( cpumask_intersects(vc->cpu_hard_affinity, &rqd->active) )
+ if ( cpumask_intersects(cpumask_scratch_cpu(cpu), &rqd->active) )
rqd_avgload = rqd->b_avgload;
spin_unlock(&rqd->lock);
@@ -1231,7 +1238,7 @@ choose_cpu(const struct scheduler *ops, struct vcpu *vc)
new_cpu = get_fallback_cpu(svc);
else
{
- cpumask_and(cpumask_scratch_cpu(cpu), vc->cpu_hard_affinity,
+ cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
&prv->rqd[min_rqi].active);
new_cpu = cpumask_any(cpumask_scratch_cpu(cpu));
BUG_ON(new_cpu >= nr_cpu_ids);
@@ -1318,6 +1325,8 @@ static void migrate(const struct scheduler *ops,
__runq_deassign(svc);
cpumask_and(cpumask_scratch_cpu(cpu), svc->vcpu->cpu_hard_affinity,
+ cpupool_domain_cpumask(svc->vcpu->domain));
+ cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
&trqd->active);
svc->vcpu->processor = cpumask_any(cpumask_scratch_cpu(cpu));
BUG_ON(svc->vcpu->processor >= nr_cpu_ids);
@@ -1343,8 +1352,14 @@ static void migrate(const struct scheduler *ops,
static bool_t vcpu_is_migrateable(struct csched2_vcpu *svc,
struct csched2_runqueue_data *rqd)
{
+ struct vcpu *v = svc->vcpu;
+ int cpu = svc->vcpu->processor;
+
+ cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
+ cpupool_domain_cpumask(v->domain));
+
return !(svc->flags & CSFLAG_runq_migrate_request) &&
- cpumask_intersects(svc->vcpu->cpu_hard_affinity, &rqd->active);
+ cpumask_intersects(cpumask_scratch_cpu(cpu), &rqd->active);
}
static void balance_load(const struct scheduler *ops, int cpu, s_time_t now)
[-- Attachment #2: Type: text/plain, Size: 127 bytes --]
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel