From: George Dunlap <george.dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>,
Dario Faggioli <dario.faggioli@citrix.com>
Cc: MarcusGranado <Marcus.Granado@eu.citrix.com>,
Justin Weaver <jtweaver@hawaii.edu>,
Ian Campbell <Ian.Campbell@citrix.com>,
Li Yechen <lccycc123@gmail.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Juergen Gross <juergen.gross@ts.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Matt Wilson <msw@amazon.com>,
xen-devel <xen-devel@lists.xenproject.org>,
Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>,
Elena Ufimtseva <ufimtseva@gmail.com>
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Wed, 6 Nov 2013 16:12:26 +0000
Message-ID: <527A6A6A.6030604@eu.citrix.com>
In-Reply-To: <527A6AF2020000780010033E@nat28.tlf.novell.com>
On 06/11/13 15:14, Jan Beulich wrote:
>>>> On 06.11.13 at 15:56, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> On 06/11/13 14:26, Dario Faggioli wrote:
>>> On mer, 2013-11-06 at 11:44 +0000, George Dunlap wrote:
>>>> On 06/11/13 10:00, Dario Faggioli wrote:
>>>>> I see, and that sounds sensible to me... It's mostly a matter of
>>>>> deciding whether or not we want something like that, and, if yes,
>>>>> whether we want it based on hard or soft.
>>>>>
>>>>> Personally, I think I agree with you on having it based on hard
>>>>> affinities by default.
>>>>>
>>>>> Let's see if George gets to say something before I get to that part of
>>>>> the (re)implementation. :-)
>>>> I would probably have it based on soft affinities, since that's where we
>>>> expect to have the domain's vcpus actually running most of the time;
>>>>
>>> True. However, doing that would rule out cpupool and vcpu-pinning.
>> I guess I was assuming that a vcpu's soft affinity would always be
>> considered a subset of its hard affinity and the cpus in its cpupool.
> For CPU pools I agree, but didn't we mean hard affinity to control
> memory allocation, and soft affinity scheduling decisions (in which
> case there's no strict ordering between the two)? If not, I guess I'd
> need a brief but clear definition what "soft" and "hard" are supposed
> to represent...
We have two "affinities" at the moment:
* per-vcpu cpu affinity. This is a scheduling construct, and is used
to restrict vcpus to run on specific pcpus. This is what we would
call hard affinity -- "hard" because vcpus are *not allowed* to run on
cpus not in this mask (see the sketch after this list).
* Domain NUMA affinity. As of 4.3 this serves two functions: a memory
allocation function and a scheduling function. For the scheduling
function, it acts as a "soft affinity", but it's domain-wide, rather
than being per-vcpu. It's "soft" because the scheduler will *try* to
run the domain's vcpus on pcpus in that mask, but if it cannot, it
*will allow* them to run on pcpus not in that mask.
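To make the hard vs. soft semantics above concrete, here is a toy
sketch (plain C, not the actual Xen code; "cpumask" below is just a
64-bit stand-in for Xen's cpumask_t):

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t cpumask;  /* one bit per pcpu; stand-in for cpumask_t */

/* Hard affinity is a strict filter: a vcpu may never be placed on a
 * pcpu whose bit is clear in its hard mask. */
static bool hard_allows(cpumask hard, unsigned int pcpu)
{
    return (hard >> pcpu) & 1;
}

/* Soft affinity only expresses a preference: the scheduler tries these
 * pcpus first, but placement outside the mask is still allowed. */
static bool soft_prefers(cpumask soft, unsigned int pcpu)
{
    return (soft >> pcpu) & 1;
}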
At the moment you can set the domain NUMA affinity manually; if it's
not set, Xen will set it automatically based on the vcpu hard
affinities (and, I think, the cpupool the domain is in).
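Roughly like this, reusing the toy cpumask type from the sketch above
(the helper name is illustrative, and Xen would then map the resulting
cpus to NUMA nodes for memory allocation):

/* Derive a domain-wide affinity from the union of its vcpus' hard
 * affinities, clipped to the cpus of the domain's cpupool. */
static cpumask domain_auto_affinity(const cpumask *vcpu_hard,
                                    unsigned int nr_vcpus,
                                    cpumask cpupool_cpus)
{
    cpumask m = 0;
    unsigned int i;

    for ( i = 0; i < nr_vcpus; i++ )
        m |= vcpu_hard[i];

    return m & cpupool_cpus;
}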
What we're proposing is to remove the domain-wide soft affinity and
replace it with a per-vcpu soft affinity. That way each vcpu has some
pcpus that it prefers (in the soft affinity *and* the hard affinity),
some that it doesn't prefer but is OK with (hard affinity but not soft
affinity), and some that it cannot run on at all (not in the hard affinity).
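In scheduler terms the three tiers would look something like this
(again a toy sketch with illustrative names, not the credit scheduler
itself):

/* Placement preference, per vcpu:
 *   preferred: pcpus in both soft and hard affinity;
 *   tolerated: pcpus in hard affinity but not soft;
 *   forbidden: pcpus outside hard affinity (never used). */
static int pick_pcpu(cpumask soft, cpumask hard, cpumask idle_pcpus)
{
    cpumask preferred = soft & hard & idle_pcpus;
    cpumask tolerated = hard & idle_pcpus;

    if ( preferred )
        return __builtin_ctzll(preferred);  /* lowest preferred pcpu */
    if ( tolerated )
        return __builtin_ctzll(tolerated);  /* soft is only a preference */
    return -1;  /* nothing idle within hard affinity */
}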
The question Dario has is this: given that we now have per-vcpu hard and
soft scheduling affinity, how should we automatically construct the
per-domain memory allocation affinity, if at all? Should we construct
it from the "hard" scheduling affinities, or from the "soft" scheduling
affinities?
I said that I thought we should use the soft affinity; but I really
meant the "effective soft affinity" -- i.e., the intersection of the
soft affinity, the hard affinity, and the cpupool's cpus.
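In code, something like (illustrative helper; the empty-intersection
fallback is my assumption about what we'd want, not something decided
here):

/* "Effective soft affinity": soft, clipped to hard affinity and to
 * the cpupool's cpus.  Falling back to hard & pool when the
 * intersection is empty is an assumption, not settled in this thread. */
static cpumask effective_soft(cpumask soft, cpumask hard, cpumask pool)
{
    cpumask eff = soft & hard & pool;

    return eff ? eff : (hard & pool);
}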
-George