From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Wei Liu <wei.liu2@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
Bhavesh Davda <bhavesh.davda@oracle.com>,
Jan Beulich <jbeulich@suse.com>
Subject: Re: [RFC PATCH v1 00/16] xen: sched: implement core-scheduling
Date: Fri, 12 Oct 2018 09:49:27 +0200
Message-ID: <8224a31a6fd968344499b52e4bc77d78576c1a8e.camel@suse.com>
In-Reply-To: <6ffe5f58-8aec-c349-e08b-58dc1d3d5469@suse.com>
On Fri, 2018-10-12 at 07:15 +0200, Juergen Gross wrote:
> On 11/10/2018 19:37, Dario Faggioli wrote:
> >
> > So, for example:
> > - domain A has vCore0 and vCore1
> > - each vCore has 2 threads ({vCore0.0, vCore0.1} and
> > {vCore1.0, vCore1.1})
> > - we're on a 2-way SMT host
> > - vCore1 is running on physical core 3 on the host
> > - more specifically, vCore1.0 is currently executing on thread 0 of
> > physical core 3 of the host, and vCore1.1 is currently executing
> > on
> > thread 1 of core 3 of the host
> > - say that both vCore1.0 and vCore1.1 are in guest context
> >
> > Now:
> > * vCore1.0 blocks. What happens?
>
> It is going to vBlocked (the physical thread is sitting in the
> hypervisor waiting for either a (core-)scheduling event or for
> unblocking vCore1.0). vCore1.1 keeps running. Or, if vCore1.1
> is already vIdle/vBlocked, vCore1 is switching to blocked and the
> scheduler is looking for another vCore to schedule on the physical
> core.
>
Ok. And then we'll have one thread in guest context, and one thread in
Xen (albeit an idle one, in this case). In these other cases...
> > * vCore1.0 makes an hypercall. What happens?
>
> Same as today. The hypercall is being executed.
>
> > * vCore1.0 VMEXITs. What happens?
>
> Same as today. The VMEXIT is handled.
>
... we have one thread in guest context and one thread in Xen, and the
one in Xen is not just sitting idle: it's doing hypercall and VMEXIT
handling.
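Just to picture the state we end up in, here's a purely illustrative
sketch (made-up structures, nothing like this exists in the code):

  #include <stdbool.h>

  /* Illustrative only: physical core 3 in the example above. */
  struct vthread_state {
      const char *vthread;  /* which vThread of the vCore               */
      int hw_thread;        /* which hardware thread of physical core 3 */
      bool in_guest;        /* true = guest context, false = in Xen     */
  };

  struct vthread_state core3[] = {
      { "vCore1.0", 0, false },  /* in Xen, handling hypercall/VMEXIT */
      { "vCore1.1", 1, true  },  /* still executing guest code        */
  };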
> In case you referring to a potential rendezvous for e.g. L1TF
> mitigation: this would be handled scheduler agnostic.
>
Yes, that is what I was thinking of. I.e., in order to be able to use
core-scheduling as a _fully_ effective mitigation for stuff like L1TF,
we'd need something like that.
In fact, core-scheduling per se mitigates leaks among guests, but if
we want to fully prevent two threads from ever being in different
security contexts (like one in guest and one in Xen, to prevent Xen
data leaking to a guest), we do need some kind of synchronized Xen
entries/exits, AFAIUI.
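Just to make sure we mean the same thing, here's a very rough sketch
of the kind of rendezvous I have in mind (illustrative C only, all
names are made up, none of this is actual Xen code):

  #include <stdatomic.h>

  struct core_rendezvous {
      atomic_int in_xen;    /* sibling threads currently inside Xen     */
      atomic_int leaving;   /* sibling threads ready to return to guest */
  };

  /* Called when a thread enters Xen (hypercall or VMEXIT). */
  static void rendezvous_enter(struct core_rendezvous *r)
  {
      atomic_fetch_add(&r->in_xen, 1);
      /* A real implementation would also kick the sibling out of guest
       * context here (e.g. with an IPI), so both siblings end up in
       * Xen together. */
  }

  /* Called right before a thread returns to guest context. */
  static void rendezvous_exit(struct core_rendezvous *r)
  {
      atomic_fetch_add(&r->leaving, 1);

      /* Spin until every sibling that entered Xen is also ready to
       * leave, so we never have one thread in guest context while the
       * other is still in Xen. */
      while ( atomic_load(&r->leaving) < atomic_load(&r->in_xen) )
          ;

      atomic_fetch_sub(&r->leaving, 1);
      atomic_fetch_sub(&r->in_xen, 1);
  }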
What I'm trying to understand right now is whether implementing things
the way you're proposing would help achieve that. And what I've
understood so far is that, no, it doesn't.
The main difference between the two approaches would be that we'd
implement it once, in schedule.c, for all schedulers. But I see this as
something having both upsides and downsides (yeah, like everything on
Earth, I know! :-P). More on this later.
> > All in all, I like the idea, because it is about introducing nice
> > abstractions, it is general, etc., but it looks like a major rework
> > of
> > the scheduler.
>
> Correct. Finally something to do :-p
>
Indeed! :-)
> > Note that, while this series which tries to implement core-
> > scheduling
> > for Credit1 is rather long and messy, doing the same (and with a
> > similar approach) for Credit2 is a lot easier and nicer. I have it
> > almost ready, and will send it soon.
>
> Okay, but would it keep vThreads of the same vCore let always running
> together on the same physical core?
>
It doesn't right now, as we don't yet have a way to expose such
information to the guest. And since, without such a mechanism, the
guest can't take advantage of something like this (neither from a
performance nor from a vulnerability-mitigation point of view), I kept
that out.
But I can certainly see about making it do so (I was already planning
to).
> > Right. But again, in Credit2, I've been able to implement socket-
> > wise
> > coscheduling with this approach (I mean, an approach similar to the
> > one
> > in this series, but adapted to Credit2).
>
> And then there still is sched_rt.c
>
Ok, so I think this is the main benefit of this approach. We do the
thing once, and all schedulers get core-scheduling (or whatever
granularity of group scheduling we implement/allow).
But how easy is it to opt out, if one doesn't want it? E.g., in the
context of L1TF, what if I'm not affected, and hence not interested in
core-scheduling? What if I want a cpupool with core-scheduling and one
without?
I may be wrong, but off the top of my head it seems to me that doing
things in schedule.c makes this a lot harder, if possible at all.
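Something like the following (purely hypothetical, none of these names
exist anywhere) is what I'd imagine a per-cpupool knob would have to
look like:

  /* Purely hypothetical: a per-cpupool scheduling granularity,
   * rather than a single system-wide one. */
  enum sched_gran {
      SCHED_GRAN_CPU,     /* today's behaviour: schedule single vCPUs   */
      SCHED_GRAN_CORE,    /* keep sibling vThreads of a vCore together  */
      SCHED_GRAN_SOCKET,  /* whole-socket co-scheduling                 */
  };

  struct cpupool_sched_opts {
      enum sched_gran granularity;  /* chosen per cpupool, not globally */
  };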
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/