From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
Justin Weaver <jtweaver@hawaii.edu>,
Ian Campbell <Ian.Campbell@citrix.com>,
Li Yechen <lccycc123@gmail.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Juergen Gross <juergen.gross@ts.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
Jan Beulich <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>,
Daniel De Graaf <dgdegra@tycho.nsa.gov>,
Keir Fraser <keir@xen.org>, Matt Wilson <msw@amazon.com>,
Elena Ufimtseva <ufimtseva@gmail.com>
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Wed, 6 Nov 2013 15:26:03 +0100
Message-ID: <1383747963.9207.134.camel@Solace>
In-Reply-To: <527A2BA8.9060601@eu.citrix.com>
On Wed, 2013-11-06 at 11:44 +0000, George Dunlap wrote:
> On 06/11/13 10:00, Dario Faggioli wrote:
> > I see, and that sounds sensible to me... It's mostly a matter of
> > deciding whether or not we want something like that and, if yes,
> > whether we want it based on hard or soft.
> >
> > Personally, I think I agree with you on having it based on hard
> > affinities by default.
> >
> > Let's see if George gets to say something before I get to that part of
> > the (re)implementation. :-)
>
> I would probably have it based on soft affinities, since that's where we
> expect to have the domain's vcpus actually running most of the time;
>
True. However, doing that would rule out cpupools and vcpu pinning. That
is to say, if you create a domain in a pool or with its vcpus pinned (by
specifying "pool=" or "cpus=" in the config file), it'll get its memory
striped over all the nodes, since in neither of these cases is there any
soft affinity involved. This is bad, because people may already be used
to creating a domain with "cpus=" and having the memory allocated from
the relevant nodes.
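E.g., just to make the scenario concrete, consider a guest config
fragment like this (hypothetical numbers: "8-15" is assumed to be the
set of pcpus belonging to NUMA node 1 of some imaginary 2-node host):

    # Hypothetical example, not taken from any real host.
    vcpus  = 4
    memory = 4096
    cpus   = "8-15"   # pin all the vcpus to the pcpus of node 1

With node-affinity derived from hard affinity, such a domain gets its
memory from node 1, which is what whoever wrote that config expects;
derived from the (here non-existing) soft affinity, the memory would be
striped over both nodes instead.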
OTOH, there is no similar thing (i.e., no pre-existing user
interface/API) for soft affinities, and the way I was using
node-affinity at the xl and libxl level for the sake of NUMA performance
is something I can easily reimplement there on top of soft affinities.
So, in summary, I'd have liked to have it based on soft affinity too,
but I think that would break more things than having it based on hard
ones.
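Just for illustration, and definitely not actual Xen code (plain
bitmasks and a made-up cpu_to_node() helper stand in for the real
cpumask/nodemask machinery), what "based on hard ones" boils down to is
taking the union of the nodes spanned by each vcpu's hard affinity:

    #include <stdint.h>

    #define NR_CPUS 64

    /* Made-up topology helper: the NUMA node pcpu 'cpu' belongs to. */
    unsigned int cpu_to_node(unsigned int cpu);

    /*
     * vcpu_hard_aff[v] has one bit set for each pcpu vcpu v is allowed
     * to run on; the result has one bit set for each node the domain's
     * memory should then be allocated from.
     */
    uint64_t node_affinity_from_hard(const uint64_t *vcpu_hard_aff,
                                     unsigned int nr_vcpus)
    {
        uint64_t nodes = 0;
        unsigned int v, cpu;

        for ( v = 0; v < nr_vcpus; v++ )
            for ( cpu = 0; cpu < NR_CPUS; cpu++ )
                if ( vcpu_hard_aff[v] & (1ULL << cpu) )
                    nodes |= 1ULL << cpu_to_node(cpu);

        return nodes;
    }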
> but
> it's really a bike-shed issue, and something we can change / adjust in
> the future.
>
That is also true, indeed.
> (Although I suppose ideal behavior would be for the allocator to have
> three levels of preference instead of just two: allocate from soft
> affinity first; if that's not available, allocate from hard affinity;
> and finally allocate wherever you can find ram. But that's probably
> more work than it's worth at this point.)
>
I like this too, but that's definitely something longer term than this
or next week.
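FWIW, just to write down what I understood, the fallback chain would
look something like the following. This is purely illustrative (plain
bitmasks and a made-up try_alloc_from() stand in for the real nodemask
handling and page allocator), not a proposal for the actual code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef uint64_t nodemask;   /* one bit per NUMA node */

    /* Made-up helper: try to grab a free page from any node in 'nodes',
     * storing it in *page and returning true on success. */
    bool try_alloc_from(nodemask nodes, void **page);

    void *numa_alloc_page(nodemask soft, nodemask hard, nodemask online)
    {
        void *page;

        /* 1st choice: nodes from the vcpus' soft affinities. */
        if ( try_alloc_from(soft, &page) )
            return page;

        /* 2nd choice: nodes from the vcpus' hard affinities. */
        if ( try_alloc_from(hard, &page) )
            return page;

        /* Last resort: any online node at all. */
        if ( try_alloc_from(online, &page) )
            return page;

        return NULL;   /* genuinely out of memory */
    }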
> So what's the plan now Dario? You're going to re-spin the patch series
> to just do hard and soft affinities at the HV level, plumbing the
> results through the toolstack?
>
Yes, that is what I had in mind, and what I have already started working
on (see the other mail I sent before lunch about (re)naming). I should
be able to craft and send something either today or tomorrow.
> I think for now I might advise putting off doing a NUMA interface at the
> libxl level, and doing a full vNUMA interface in another series (perhaps
> for 4.5, depending on the timing).
>
Well, I agree that all this is of very little use without vNUMA but, at
the same time, it's not necessarily only useful for it. Also, whether we
call it vcpu-node-affinity or soft-affinity, if it is not wired up
properly to the higher layers, there's very little point in having the
HV part only... So my idea was to redo and resend everything, including
the libxl and xl bits.
Of course, that doesn't mean we must necessarily have this for 4.4
(although I think it would still be feasible), just that we should
either check in, or wait for, both the implementation and the interface.
Again, how's the updated release schedule?
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)