From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
Justin Weaver <jtweaver@hawaii.edu>,
Ian Campbell <Ian.Campbell@citrix.com>,
Li Yechen <lccycc123@gmail.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Juergen Gross <juergen.gross@ts.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Jan Beulich <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>,
Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>,
Matt Wilson <msw@amazon.com>,
Elena Ufimtseva <ufimtseva@gmail.com>
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Wed, 6 Nov 2013 17:48:31 +0100 [thread overview]
Message-ID: <1383756511.9207.165.camel@Solace> (raw)
In-Reply-To: <527A6A6A.6030604@eu.citrix.com>
On mer, 2013-11-06 at 16:12 +0000, George Dunlap wrote:
> We have two "affinities" at the moment:
> * per-vcpu cpu affinity. This is a scheduling construct, and is
> used to restrict vcpus to run on specific pcpus. This is what we would
> call hard affinity -- "hard" because vcpus are *not allowed* to run on
> cpus not in this mask.
> * Domain NUMA affinity. As of 4.3 this has two functions: both a memory
> allocation function, and a scheduling function. For the scheduling
> function, it acts as a "soft affinity", but it's domain-wide, rather
> than being per-vcpu. It's "soft" because the scheduler will *try* to
> run it on pcpus in that mask, but if it cannot, it *will allow* them to
> run on pcpus not in that mask.
>
> At the moment you can set this manually, or if it's not set, then Xen
> will set it automatically based on the vcpu hard affinities (and I think
> the cpupool that it's in).
>
> What we're proposing is to remove the domain-wide soft affinity and
> replace it with a per-vcpu soft affinity. That way each vcpu has some
> pcpus that it prefers (in the soft affinity *and* the hard affinity),
> some that it doesn't prefer but is OK with (hard affinity but not soft
> affinity), and some that it cannot run on at all (not in the hard affinity).
>
Great explanation... A bit longer but *much* better than mine. :-)
> The question Dario has is this: given that we now have per-vcpu hard and
> soft scheduling affinity, how should we automatically construct the
> per-domain memory allocation affinity, if at all? Should we construct
> it from the "hard" scheduling affinities, or from the "soft" scheduling
> affinities?
>
Exactly.
> I said that I thought we should use the soft affinity; but I really
> meant the "effective soft affinity" -- i.e., the union of soft, hard,
> and cpupools.
>
Aha... I see it now... Only one more thing: union or intersection? Union
would be nice, especially because there is very little risk of it being
empty, but it may be too broad (if one of the three is 'all cpus/nodes',
so will the memory affinity be). I'd go for the intersection, with some
extra care to avoid degeneration into the empty set.
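To make the two options concrete, here is a small illustrative sketch (plain
Python sets standing in for Xen's cpumasks; the function name and the
fallback policy are my own assumptions, not anything from the patch series):
the "effective" affinity is the intersection of soft, hard, and cpupool, and
if that comes out empty we fall back to hard-and-cpupool, which is the set
the vcpu can actually run on anyway.

```python
def effective_soft_affinity(soft, hard, cpupool):
    """Sketch: combine per-vcpu soft affinity, hard affinity and the
    cpupool's cpus by intersection. Hypothetical helper, not Xen code.

    If the intersection is empty (soft affinity excludes every pcpu
    the vcpu is allowed on), degrade gracefully to hard & cpupool,
    i.e. treat the soft affinity as if it were not set.
    """
    effective = soft & hard & cpupool
    if not effective:
        # Avoid degeneration into the empty set: ignore the soft
        # affinity and keep only the cpus the vcpu may run on.
        effective = hard & cpupool
    return effective

# Normal case: all three masks overlap on pcpu 1.
print(effective_soft_affinity({0, 1}, {1, 2}, {0, 1, 2, 3}))

# Degenerate case: soft affinity is disjoint from the allowed set,
# so we fall back to hard & cpupool.
print(effective_soft_affinity({4, 5}, {1, 2}, {0, 1, 2, 3}))
```

The union, by contrast, would return every cpu in any of the three masks, so
a single 'all cpus' mask (e.g. an unset soft affinity) would widen the memory
affinity to the whole machine, which is exactly the "too broad" problem above.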
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)