From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wei.liu2@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
Bhavesh Davda <bhavesh.davda@oracle.com>,
Jan Beulich <jbeulich@suse.com>
Subject: Re: [RFC PATCH v1 00/16] xen: sched: implement core-scheduling
Date: Thu, 11 Oct 2018 19:37:03 +0200
Message-ID: <3636eba03691a0331a6e82cb11a651fd41c475ec.camel@suse.com>
In-Reply-To: <7553b02f-514d-d577-a4ae-3478036f8f62@suse.com>
Hey,
Sorry it took me a while to reply. :-P
On Fri, 2018-09-07 at 18:00 +0200, Juergen Gross wrote:
> On 25/08/18 01:35, Dario Faggioli wrote:
> >
> > There are git branches here:
> > https://gitlab.com/dfaggioli/xen.git rel/sched/core-scheduling-
> > RFCv1
> > https://github.com/fdario/xen.git rel/sched/core-scheduling-RFCv1
> >
> > Any comment is more than welcome.
>
> Have you thought about a more generic approach?
>
I had. And I have thought about it more since you sent this email. :-)
> Instead of trying to schedule only vcpus of the same domain on a core
> I'd rather switch from vcpu scheduling to real core scheduling. The
> scheduler would see guest cores to be scheduled on physical cores. A
> guest core consists of "guest threads" being vcpus (vcpus are bound
> to their guest cores, so that part of the topology could even be used
> by the guest for performance tuning).
>
Right, so I think I get the big picture. As I said above, it is
something I have been thinking about too, and we also talked about
something similar with Andrew in Nanjing.
I'm still missing how something like this would work in detail,
perhaps because I'm so used to reasoning within the boundaries of the
model we currently have.
So, for example:
- domain A has vCore0 and vCore1
- each vCore has 2 threads ({vCore0.0, vCore0.1} and
{vCore1.0, vCore1.1})
- we're on a 2-way SMT host
- vCore1 is running on physical core 3 on the host
- more specifically, vCore1.0 is currently executing on thread 0, and
  vCore1.1 on thread 1, of physical core 3 of the host
- say that both vCore1.0 and vCore1.1 are in guest context
Now:
* vCore1.0 blocks. What happens?
* vCore1.0 makes a hypercall. What happens?
* vCore1.0 VMEXITs. What happens?
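For the blocking case, for instance, I can imagine something like the
check below (again just a sketch, building on the hypothetical struct
vcore above, and using Xen's existing vcpu_runnable() helper): as long
as at least one guest thread is runnable, the vcore as a whole stays
runnable, and the pcpu of the blocked thread would, I guess, just
idle. But is that actually what you have in mind?

  /* Hypothetical: a vcore can be descheduled only when none of its
   * guest threads can run. If just vCore1.0 blocks, the vcore stays
   * runnable, and thread 0 of physical core 3 would (I suppose)
   * simply idle until vCore1.0 wakes up again. */
  static bool vcore_runnable(struct vcore *vc)
  {
      unsigned int i;

      for ( i = 0; i < VCORE_NR_THREADS; i++ )
          if ( vcpu_runnable(vc->thread[i]) )
              return true;
      return false;
  }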
> The state machine determining the core state from its vcpus would be
> scheduler agnostic (schedule.c), same for switching guest cores on a
> physical core.
>
What do you mean by "same for switching guest cores on a physical
core"?
All in all, I like the idea: it introduces nice abstractions, it is
general, etc. But it does look like a major rework of the scheduler.
It's not that I am not up for major reworks, but I'd like to properly
understand what such a rework would buy us.
Note that, while this series, which implements core-scheduling for
Credit1, is rather long and messy, doing the same (and with a similar
approach) for Credit2 is a lot easier and nicer. I have that almost
ready, and will send it out soon.
> This scheme could even be expanded for socket scheduling.
>
Right. But again, in Credit2 I have been able to implement socket-wise
co-scheduling with this approach (I mean, an approach similar to the
one in this series, but adapted to Credit2).
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/