From: George Dunlap <george.dunlap@eu.citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
Jan Beulich <JBeulich@suse.com>
Cc: MarcusGranado <Marcus.Granado@eu.citrix.com>,
Justin Weaver <jtweaver@hawaii.edu>,
Ian Campbell <Ian.Campbell@citrix.com>,
Li Yechen <lccycc123@gmail.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Juergen Gross <juergen.gross@ts.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Matt Wilson <msw@amazon.com>,
xen-devel <xen-devel@lists.xenproject.org>,
Daniel De Graaf <dgdegra@tycho.nsa.gov>,
Keir Fraser <keir@xen.org>, Elena Ufimtseva <ufimtseva@gmail.com>
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Wed, 6 Nov 2013 11:44:40 +0000 [thread overview]
Message-ID: <527A2BA8.9060601@eu.citrix.com> (raw)
In-Reply-To: <1383732000.9207.99.camel@Solace>
On 06/11/13 10:00, Dario Faggioli wrote:
> On mer, 2013-11-06 at 09:46 +0000, Jan Beulich wrote:
>>>>> On 06.11.13 at 10:39, Dario Faggioli <dario.faggioli@citrix.com> wrote:
>>> Now, we're talking about killing vc->cpu_affinity and not introducing
>>> vc->node_affinity and, instead, introduce vc->cpu_hard_affinity and
>>> vc->cpu_soft_affinity and, more important, not to link any of the above
>>> to d->node_affinity. That means all the above operations _will_NOT_
>>> automatically affect d->node_affinity any longer, at least from the
>>> hypervisor (and, most likely, libxc) perspective. OTOH, I'm almost sure
>>> that I can force libxl (and xl) to retain the exact same behaviour it is
>>> exposing to the user (just by adding an extra call when needed).
>>>
>>> So, although all this won't be an issue for xl and libxl consumers (or,
>>> at least, that's my goal), it will change how the hypervisor used to
>>> behave in all those situations. This means that xl and libxl users will
>>> see no change, while folks issuing hypercalls and/or libxc calls will.
>>>
>>> Is that ok? I mean, I know there are no stability concerns for those
>>> APIs, but still, is that an acceptable change?
>> I would think that as long as d->node_affinity is empty, it should
>> still be set based on all vCPUs' hard affinities.
>>
> I see, and that sounds sensible to me... It's mostly a matter of
> deciding whether or not we want something like that, and, if yes,
> whether we want it based on hard or soft affinities.
>
> Personally, I think I agree with you on having it based on hard
> affinities by default.
>
> Let's see if George gets to say something before I get to that part of
> the (re)implementation. :-)
I would probably have it based on soft affinities, since that's where we
expect to have the domain's vcpus actually running most of the time; but
it's really a bike-shed issue, and something we can change / adjust in
the future.
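The derivation being discussed (filling d->node_affinity from the union of
the vcpus' per-vcpu affinity masks, whichever flavour we settle on) can be
sketched roughly as below. Note this is a toy model: the one-word cpumask,
the fixed 4-cpus-per-node topology, and domain_node_affinity() are invented
stand-ins for Xen's cpumask_t/nodemask_t machinery, not the real interfaces:

```c
#include <stddef.h>

/* Hypothetical, simplified model: cpus are bits in one word and each
 * node owns a fixed mask of cpus (4 cpus per node here).  Real Xen
 * uses cpumask_t/nodemask_t and the discovered topology, of course. */
#define NR_NODES 4
typedef unsigned long cpumask_t;

static const cpumask_t node_to_cpumask[NR_NODES] = {
    0x000f, 0x00f0, 0x0f00, 0xf000,
};

/* A node belongs to the domain's node-affinity iff at least one vcpu's
 * cpu-affinity (hard or soft, whichever we pick) intersects its cpus. */
unsigned long domain_node_affinity(const cpumask_t *vcpu_affinity,
                                   int nr_vcpus)
{
    cpumask_t all = 0;
    unsigned long nodes = 0;

    for (int v = 0; v < nr_vcpus; v++)
        all |= vcpu_affinity[v];
    for (int n = 0; n < NR_NODES; n++)
        if (all & node_to_cpumask[n])
            nodes |= 1UL << n;
    return nodes;
}
```

E.g. two vcpus pinned to cpus {0,1} and {8,9} would yield nodes {0,2};
switching the input from hard to soft masks is the whole bike-shed.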
(Although I suppose ideal behavior would be for the allocator to have
three levels of preference instead of just two: allocate from soft
affinity first; if that's not available, allocate from hard affinity;
and finally allocate wherever you can find ram. But that's probably
more work than it's worth at this point.)
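The three-level preference just described would amount to something like the
following. Everything here is invented for illustration (alloc_page_pref(),
the free_on_node[] toy state, the single-word node masks) and bears no
relation to Xen's actual heap allocator:

```c
/* Hypothetical three-level allocation fallback: try the soft-affinity
 * nodes first, then the hard-affinity nodes, then anywhere with free
 * memory.  free_on_node[] is toy per-node free-page accounting. */
typedef unsigned long nodemask_t;

/* Returns the node allocated from, or -1 if no node has memory left. */
int alloc_page_pref(nodemask_t soft, nodemask_t hard, int nr_nodes,
                    int free_on_node[])
{
    nodemask_t tries[3] = { soft, hard, ~0UL };

    for (int level = 0; level < 3; level++)
        for (int n = 0; n < nr_nodes; n++)
            if ((tries[level] & (1UL << n)) && free_on_node[n] > 0) {
                free_on_node[n]--;
                return n;
            }
    return -1;
}
```

So with soft = {node 0}, hard = {nodes 0,1} and node 0 exhausted, the
allocation would fall through to node 1, and only then to any other node.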
So what's the plan now, Dario? You're going to re-spin the patch series
to just do hard and soft affinities at the HV level, plumbing the
results through the toolstack?
I think for now I might advise putting off doing a NUMA interface at the
libxl level, and do a full vNUMA interface in another series (perhaps
for 4.5, depending on the timing).
-George
Thread overview: 71+ messages
2013-11-05 14:33 [PATCH RESEND 00/12] Implement per-vcpu NUMA node-affinity for credit1 Dario Faggioli
2013-11-05 14:34 ` [PATCH RESEND 01/12] xen: numa-sched: leave node-affinity alone if not in "auto" mode Dario Faggioli
2013-11-05 14:43 ` George Dunlap
2013-11-05 14:34 ` [PATCH RESEND 02/12] xl: allow for node-wise specification of vcpu pinning Dario Faggioli
2013-11-05 14:50 ` George Dunlap
2013-11-06 8:48 ` Dario Faggioli
2013-11-07 18:17 ` Ian Jackson
2013-11-08 9:24 ` Dario Faggioli
2013-11-08 15:20 ` Ian Jackson
2013-11-05 14:34 ` [PATCH RESEND 03/12] xl: implement and enable dryrun mode for `xl vcpu-pin' Dario Faggioli
2013-11-05 14:34 ` [PATCH RESEND 04/12] xl: test script for the cpumap parser (for vCPU pinning) Dario Faggioli
2013-11-05 14:35 ` [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity Dario Faggioli
2013-11-05 14:52 ` Jan Beulich
2013-11-05 15:03 ` George Dunlap
2013-11-05 15:11 ` Jan Beulich
2013-11-05 15:24 ` George Dunlap
2013-11-05 22:15 ` Dario Faggioli
2013-11-05 15:11 ` George Dunlap
2013-11-05 15:23 ` Jan Beulich
2013-11-05 15:39 ` George Dunlap
2013-11-05 16:56 ` George Dunlap
2013-11-05 17:16 ` George Dunlap
2013-11-05 17:30 ` Jan Beulich
2013-11-05 23:12 ` Dario Faggioli
2013-11-05 23:01 ` Dario Faggioli
2013-11-06 9:39 ` Dario Faggioli
2013-11-06 9:46 ` Jan Beulich
2013-11-06 10:00 ` Dario Faggioli
2013-11-06 11:44 ` George Dunlap [this message]
2013-11-06 14:26 ` Dario Faggioli
2013-11-06 14:56 ` George Dunlap
2013-11-06 15:14 ` Jan Beulich
2013-11-06 16:12 ` George Dunlap
2013-11-06 16:22 ` Jan Beulich
2013-11-06 16:48 ` Dario Faggioli
2013-11-06 16:20 ` Dario Faggioli
2013-11-06 16:23 ` Dario Faggioli
2013-11-05 17:24 ` Jan Beulich
2013-11-05 17:31 ` George Dunlap
2013-11-05 23:08 ` Dario Faggioli
2013-11-05 22:54 ` Dario Faggioli
2013-11-05 22:22 ` Dario Faggioli
2013-11-06 11:41 ` Dario Faggioli
2013-11-06 14:47 ` George Dunlap
2013-11-06 16:53 ` Dario Faggioli
2013-11-05 14:35 ` [PATCH RESEND 06/12] xen: numa-sched: domain node-affinity always comes from vcpu node-affinity Dario Faggioli
2013-11-05 14:35 ` [PATCH RESEND 07/12] xen: numa-sched: use per-vcpu node-affinity for actual scheduling Dario Faggioli
2013-11-05 16:20 ` George Dunlap
2013-11-06 9:15 ` Dario Faggioli
2013-11-05 14:35 ` [PATCH RESEND 08/12] xen: numa-sched: enable getting/specifying per-vcpu node-affinity Dario Faggioli
2013-11-05 14:35 ` [PATCH RESEND 09/12] libxc: " Dario Faggioli
2013-11-07 18:27 ` Ian Jackson
2013-11-12 16:01 ` Konrad Rzeszutek Wilk
2013-11-12 16:43 ` George Dunlap
2013-11-12 16:55 ` Konrad Rzeszutek Wilk
2013-11-12 18:40 ` Dario Faggioli
2013-11-12 19:13 ` Konrad Rzeszutek Wilk
2013-11-12 21:36 ` Dario Faggioli
2013-11-13 10:57 ` Dario Faggioli
2013-11-05 14:35 ` [PATCH RESEND 10/12] libxl: " Dario Faggioli
2013-11-07 18:29 ` Ian Jackson
2013-11-08 9:18 ` Dario Faggioli
2013-11-08 15:07 ` Ian Jackson
2013-11-05 14:36 ` [PATCH RESEND 11/12] xl: " Dario Faggioli
2013-11-07 18:33 ` Ian Jackson
2013-11-08 9:33 ` Dario Faggioli
2013-11-08 15:18 ` Ian Jackson
2013-11-05 14:36 ` [PATCH RESEND 12/12] xl: numa-sched: enable specifying node-affinity in VM config file Dario Faggioli
2013-11-07 18:35 ` Ian Jackson
2013-11-08 9:49 ` Dario Faggioli
2013-11-08 15:22 ` Ian Jackson