From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <jgross@suse.com>, Xen Devel <xen-devel@lists.xen.org>
Cc: Peng Fan <peng.fan@nxp.com>,
Stefano Stabellini <sstabellini@kernel.org>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
anastassios.nanos@onapp.com, Jan Beulich <jbeulich@suse.com>,
Peng Fan <van.freenix@gmail.com>
Subject: Re: [DOC RFC] Heterogeneous Multi Processing Support in Xen
Date: Thu, 8 Dec 2016 22:45:48 +0100
Message-ID: <1481233548.3445.175.camel@citrix.com>
In-Reply-To: <8ee2f981-fdad-63e2-5779-02fedc7d137d@suse.com>
On Thu, 2016-12-08 at 11:38 +0100, Juergen Gross wrote:
> On 08/12/16 11:27, Dario Faggioli wrote:
> > On Thu, 2016-12-08 at 07:12 +0100, Juergen Gross wrote:
> > > Any idea how to avoid problems in the schedulers related to vcpus
> > > with
> > > different weights?
> > >
> > Sure: use Credit2! :-P
> >
> > And I'm not joking (not entirely, at least), as the alternative is
> > to
> > re-engineer significantly the algorithm inside Credit, which I'm
> > not
> > sure is doable or worthwhile, especially considering we have
> > alternatives.
>
> So you really solved the following problem in credit2?
>
So, pinning will always _affect_ scheduling: that is actually its goal.
And in fact, it really should be used only when there is no alternative,
or when the scenario is understood well enough that its effects are
known (or at least known to be beneficial for the workload running on
the host).
In Credit2, weights are used to make a vCPU burn credits faster or
slower than the other vCPUs, while in Credit1 the algorithm is much
more complex. Also, in Credit2, everything is computed per-runqueue.
Pinning of course interferes, but it should really be less disruptive
than in Credit1.
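Just to illustrate the principle, here is a toy sketch in C, with
completely made-up names and numbers; this is not the actual Credit2
code, only the general idea:

    #include <stdint.h>

    /*
     * Toy model of weight-proportional credit burning: within a
     * runqueue, a vCPU with a higher weight burns credits more
     * slowly, so it ends up being allowed to run for a
     * proportionally larger share of the CPU time.
     */
    struct toy_vcpu {
        int64_t credit;      /* remaining credits */
        unsigned int weight; /* weight of the vCPU's domain */
    };

    #define TOY_WEIGHT_BASE 256 /* arbitrary reference weight */

    static void toy_burn_credits(struct toy_vcpu *v, int64_t ran_ns)
    {
        /*
         * Scale the time the vCPU just ran by the inverse of its
         * weight: weight == TOY_WEIGHT_BASE burns one credit per ns,
         * weight == 2 * TOY_WEIGHT_BASE burns half a credit per ns,
         * and so on.
         */
        v->credit -= ran_ns * TOY_WEIGHT_BASE / (int64_t)v->weight;
    }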
All this being said, I was not yet around when you came up with the
idea that pinning was disturbing weighted fairness, so I'm not sure
what the original argument was... I'll go back and check the email
conversation in the archive. And again, whenever one can use cpupools,
that should be the preferred solution, but there are situations where
that's just not suitable, and we need pinning.
This case is a little bit borderline. Sure, using pinning is not ideal,
and in fact it's only happening in the initial stages. When actually
modifying the scheduler, we will, in Credit2, do something like having
one runqueue per class (or more, but certainly not any runqueue that
"crosses" classes, as that would not work), which puts us in a pretty
decent situation, I think. For Credit, let's see, but I'm afraid we
won't be able to guarantee much more than technical correctness (i.e.,
not scheduling on forbidden classes).
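To be a bit more concrete about what I mean by "one runqueue per
class", here is a rough C sketch; all the names and data structures
are invented for the sake of the example, and not what the actual
code would look like:

    /*
     * Toy sketch: each class of pCPUs (e.g., big vs. LITTLE) gets its
     * own runqueue, and a pCPU is only ever attached to the runqueue
     * of its own class. Therefore, no runqueue ever contains vCPUs of
     * different classes, and the per-runqueue fairness logic only
     * compares like with like.
     */
    #define TOY_NR_PCPUS 8

    enum toy_cpu_class { TOY_CLASS_BIG, TOY_CLASS_LITTLE, TOY_NR_CLASSES };

    struct toy_runqueue {
        enum toy_cpu_class class;
        /* ... per-runqueue scheduling data ... */
    };

    static struct toy_runqueue toy_runqueues[TOY_NR_CLASSES];

    /* Filled at boot, from what firmware/DT tells us about each pCPU. */
    static enum toy_cpu_class toy_pcpu_class[TOY_NR_PCPUS];

    static struct toy_runqueue *toy_pcpu_runqueue(unsigned int cpu)
    {
        return &toy_runqueues[toy_pcpu_class[cpu]];
    }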
> You have three domains with 2 vcpus each and different weights. Run
> them
> on 3 physical cpus with following pinning:
>
> dom1: pcpu 1 and 2
> dom2: pcpu 2 and 3
> dom3: pcpu 1 and 3
>
> How do you decide which vcpu to run on which pcpu for how long?
>
Ok, it was a public holiday here today, so I did not really have time
to think about this example. And tomorrow I'm on PTO. I'll look at it
closely on Monday.
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)