From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: [RFC PATCH v1 15/16] xen: Credit1: sched_smt_cosched support in __runq_tickle().
Date: Sat, 25 Aug 2018 01:36:57 +0200
Message-ID: <153515381769.8598.8048398737671442746.stgit@Istar.fritz.box>
In-Reply-To: <153515305655.8598.6054293649487840735.stgit@Istar.fritz.box>
If sched_smt_cosched is enabled, when tickling pcpus (upon vcpu
wakeups), take into account that only pcpus belonging to cores that
are either fully idle, or already running other vcpus of the same
domain, will actually be able to pick the waking vcpu up.
*NB* there are places where the behavior needs some more refinement,
in order to actually match the description above (see the 'TODO's in
the sources).
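The net effect on the tickle mask (only keep pcpus whose core is fully
idle, or whose core already belongs to new's domain) can be sketched
with a simplified, self-contained model. Note that `core_info`,
`filter_tickle_mask`, the integer-bitmap mask and the domain-id fields
below are illustrative stand-ins for this sketch, not Xen's actual
csched_pcpu/cpumask types:

```c
#include <assert.h>

#define NR_CPUS       8
#define CPUS_PER_CORE 2

/* Hypothetical stand-in for the per-core bookkeeping used by the patch. */
struct core_info {
    int sdom;  /* id of the domain owning the core, or -1 if the core is free */
};

/*
 * Clear from 'mask' (a bitmap of candidate pcpus to tickle) every pcpu
 * whose core is already running a different domain: with SMT domain
 * co-scheduling enabled, such pcpus cannot pick the waking vcpu up.
 */
static unsigned int filter_tickle_mask(unsigned int mask,
                                       const struct core_info *cores,
                                       int new_dom)
{
    for ( unsigned int c = 0; c < NR_CPUS; c++ )
    {
        const struct core_info *core = &cores[c / CPUS_PER_CORE];

        if ( (mask & (1u << c)) &&
             core->sdom != -1 && core->sdom != new_dom )
            mask &= ~(1u << c);
    }
    return mask;
}
```

For example, with four 2-thread cores owned by domains {free, 5, 7, 5},
a vcpu of domain 5 may still be tickled onto the free core and the two
cores running domain 5, but not onto the core running domain 7.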
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
---
xen/common/sched_credit.c | 60 ++++++++++++++++++++++++++++++++++++++-------
1 file changed, 51 insertions(+), 9 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 9d6071e229..aecb4e3e05 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -416,8 +416,10 @@ static inline void __runq_tickle(struct csched_vcpu *new)
/*
* If there are no idlers, and the new vcpu is a higher priority than
- * the old vcpu, run it here.
- *
+ * the old vcpu, run it here. If SMT domain co-scheduling is enabled,
+ * though, we must already be running another vcpu of the same domain
+ * on this core.
+ *
* If there are idle cpus, first try to find one suitable to run
* new, so we can avoid preempting cur. If we cannot find a
* suitable idler on which to run new, run it here, but try to
@@ -426,8 +428,13 @@ static inline void __runq_tickle(struct csched_vcpu *new)
if ( idlers_empty && new->pri > cur->pri )
{
ASSERT(cpumask_test_cpu(cpu, new->vcpu->cpu_hard_affinity));
- SCHED_STAT_CRANK(tickled_busy_cpu);
- __cpumask_set_cpu(cpu, &mask);
+ spin_lock(&spc->core->lock);
+ if ( !sched_smt_cosched || new->sdom == spc->core->sdom )
+ {
+ SCHED_STAT_CRANK(tickled_busy_cpu);
+ __cpumask_set_cpu(cpu, &mask);
+ }
+ spin_unlock(&spc->core->lock);
}
else if ( !idlers_empty )
{
@@ -473,6 +480,12 @@ static inline void __runq_tickle(struct csched_vcpu *new)
* leave cur alone (as it is running and is, likely, cache-hot)
* and wake some of them (which is waking up and so is, likely,
* cache cold anyway), and go for one of them.
+ *
+ * TODO: for SMT domain co-scheduling, we must also make sure that
+ * these idlers are on cores where new->domain is running already
+ * (they can't be on fully idle cores, or we would have found them
+ * in prv->smt_idle). That can be done with a loop, or by
+ * introducing new data structures...
*/
if ( !new_idlers_empty )
{
@@ -483,12 +496,24 @@ static inline void __runq_tickle(struct csched_vcpu *new)
/*
* If there are no suitable idlers for new, and it's higher
- * priority than cur, check whether we can migrate cur away.
- * We have to do it indirectly, via _VPF_migrating (instead
- * of just tickling any idler suitable for cur) because cur
- * is running.
+ * priority than cur, an option is to run it here, and migrate cur
+ * away. If domain co-scheduling is enabled, we can do that only if
+ * the core is idle, or we're running new->domain already.
+ *
+ * TODO: Similarly, when checking whether we can migrate cur away,
+ * we should not only check whether there are idlers suitable for
+ * cur, but also whether they are on fully idle cores, or on cores
+ * that are running cur->domain already. That can be done with a
+ * loop, or by introducing a new data structure...
+ *
+ * If we decide to migrate cur, we have to do it indirectly, via
+ * _VPF_migrating (instead of just tickling any suitable idler),
+ * as cur is running.
*/
- if ( new->pri > cur->pri )
+ spin_lock(&spc->core->lock);
+ if ( new->pri > cur->pri &&
+ (!sched_smt_cosched || spc->core->sdom == NULL ||
+ spc->core->sdom == new->sdom) )
{
if ( cpumask_intersects(cur->vcpu->cpu_hard_affinity,
&idle_mask) )
@@ -498,17 +523,34 @@ static inline void __runq_tickle(struct csched_vcpu *new)
SCHED_STAT_CRANK(migrate_kicked_away);
set_bit(_VPF_migrating, &cur->vcpu->pause_flags);
}
+ spin_unlock(&spc->core->lock);
/* Tickle cpu anyway, to let new preempt cur. */
SCHED_STAT_CRANK(tickled_busy_cpu);
__cpumask_set_cpu(cpu, &mask);
goto tickle;
}
+ spin_unlock(&spc->core->lock);
/* We get here only if we didn't find anyone. */
ASSERT(cpumask_empty(&mask));
}
}
+ /* Don't tickle cpus that won't be able to pick up new. */
+ if ( sched_smt_cosched )
+ {
+ unsigned int c;
+
+ for_each_cpu(c, &mask)
+ {
+ spc = CSCHED_PCPU(c);
+ spin_lock(&spc->core->lock);
+ if ( spc->core->sdom != NULL && spc->core->sdom != new->sdom )
+ cpumask_clear_cpu(c, &mask);
+ spin_unlock(&spc->core->lock);
+ }
+ }
+
if ( !cpumask_empty(&mask) )
{
/* Which of the idlers suitable for new shall we wake up? */