From: Dario Faggioli <raistlin@linux.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH] xl: introduce specific VCPU to PCPU mapping in config file
Date: Mon, 14 May 2012 19:03:37 +0200
Message-ID: <1337015017.28326.50.camel@Solace>
In-Reply-To: <1336983554.31817.8.camel@zakaz.uk.xensource.com>
On Mon, 2012-05-14 at 09:19 +0100, Ian Campbell wrote:
> So effectively we create the domain pinned to the right nodes and then
> rebind all the CPUS later to be mapped to specific phys-processors?
>
Yep. Although that map might be considered build info, it feels more like a
low-level toolstack thing to me. Besides, we already have a cpumap there,
and this one is a fu*$ing array of ints as big as the number of vcpus...
For all these reasons I really wanted to avoid putting it in build_info! :-)
> This
> means that the memory which is allocated is from the correct nodes, even
> though we appear to do the pinning later. Clever.
>
Thanks. All the above being true, I also didn't want to lose the
memory/node affinity side effect that having the correct info in
b_info->cpumap brings.
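Just to spell the two phases out, the flow is roughly this (an illustrative
sketch, not the actual patch: do_create() is a hypothetical stand-in for the
real xl creation path, and I'm assuming b_info->cpumap was already allocated
while parsing the config, as xl does):

    #include <stdint.h>
    #include <libxl.h>

    /* Hypothetical stand-in for the real creation path (what ends up
     * calling libxl_domain_create_new()); declared here only so the
     * sketch is complete. */
    static int do_create(libxl_ctx *ctx, libxl_domain_config *d_config,
                         uint32_t *domid);

    static int build_and_pin(libxl_ctx *ctx, libxl_domain_config *d_config,
                             const int *vcpu_to_pcpu, uint32_t *domid)
    {
        libxl_domain_build_info *b_info = &d_config->b_info;
        libxl_cpumap vcpu_cpumap;
        int i, rc;

        /* Phase 1 (before creation): b_info->cpumap becomes the union of
         * all the requested pcpus, so the domain's memory is allocated
         * from the nodes those pcpus belong to. */
        libxl_cpumap_set_none(&b_info->cpumap);
        for (i = 0; i < b_info->max_vcpus; i++)
            if (vcpu_to_pcpu[i] != -1)
                libxl_cpumap_set(&b_info->cpumap, vcpu_to_pcpu[i]);

        rc = do_create(ctx, d_config, domid);
        if (rc) return rc;

        /* Phase 2 (after creation): narrow each vcpu down to its own
         * pcpu, or let it float on any pcpu if no mapping was given. */
        if (libxl_cpumap_alloc(ctx, &vcpu_cpumap))
            return ERROR_FAIL;
        for (i = 0; i < b_info->max_vcpus; i++) {
            if (vcpu_to_pcpu[i] != -1) {
                libxl_cpumap_set_none(&vcpu_cpumap);
                libxl_cpumap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
            } else {
                libxl_cpumap_set_any(&vcpu_cpumap);
            }
            if (libxl_set_vcpuaffinity(ctx, *domid, i, &vcpu_cpumap)) {
                libxl_cpumap_dispose(&vcpu_cpumap);
                return ERROR_FAIL;
            }
        }
        libxl_cpumap_dispose(&vcpu_cpumap);
        return 0;
    }
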
> > + /* If single vcpu to pcpu mapping was requested, honour it */
> > + if (vcpu_to_pcpu) {
>
> This is always allocated above, isn't it? I'm concerned that this might
> break the non-1-1 case.
>
Not really: it's only allocated if the list form of the `cpus=' specifier
is found (i.e., something like `cpus=[2, 3]'), so the non-1-1 case is
unaffected.
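For clarity, what triggers it is the list form in the config file, e.g.
(hypothetical example):

    cpus = [2, 3]    # vcpu 0 --> pcpu 2, vcpu 1 --> pcpu 3

while the pre-existing forms of `cpus=' keep pinning all the vcpus to the
same set of pcpus and don't allocate vcpu_to_pcpu at all.
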
> > + libxl_cpumap vcpu_cpumap;
> > +
> > + libxl_cpumap_alloc(ctx, &vcpu_cpumap);
> > + for (i = 0; i < d_config.b_info.max_vcpus; i++) {
> > +
> > + if (vcpu_to_pcpu[i] != -1) {
> > + libxl_cpumap_set_none(&vcpu_cpumap);
> > + libxl_cpumap_set(&vcpu_cpumap, vcpu_to_pcpu[i]);
> > + } else {
> > + libxl_cpumap_set_any(&vcpu_cpumap);
> > + }
> > + if (libxl_set_vcpuaffinity(ctx, domid, i, &vcpu_cpumap)) {
> > + fprintf(stderr, "setting affinity failed on vcpu `%d'.\n", i);
> > + libxl_cpumap_dispose(&vcpu_cpumap);
> > + free(vcpu_to_pcpu);
> > + ret = ERROR_FAIL;
> > + goto error_out;
> > + }
> > + }
> > + libxl_cpumap_dispose(&vcpu_cpumap);
> > + free(vcpu_to_pcpu);
>
> vcpu_to_pcpu = NULL, in case you go back around again...
>
Yep. This is needed, thanks!
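I.e., the end of that block will become something like:

    libxl_cpumap_dispose(&vcpu_cpumap);
    free(vcpu_to_pcpu);
    vcpu_to_pcpu = NULL;  /* don't reuse or re-free it if we parse the
                           * config again (restore, migration, ...) */
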
> For that reason it might be preferable to put vcpu_to_pcpu struct
> dom_info and pass that to parse_config -- I think Goncalo is doing
> something similar for the vncviewer option.
>
Adding the proper =NULL above makes it work for all the cases, namely
config-update, save/restore and migration. Nevertheless, I looked into
moving the array into `struct domain_create', but it doesn't fit there, at
least according to my personal taste. It would require adding a parameter
to parse_config_data() that, in many cases, would have to be created on
purpose with only that one field meaningful (or just be NULL), which ends
up really counter-intuitive and hard to read and maintain (again, according
to my personal taste :-)). Also, this is somewhat different from
"autoconnect" or "vncviewer", which fit perfectly there, as they're
parameters to a specific create-ish command that result in some actual
action (i.e., popping up the console or the VNC window!).
So, I'm resending with the array still as a global variable, but of course
I'm open to reworking the patch if you and/or the other maintainers would
rather have it the other way around (within `struct domain_create').
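Just to show what I mean, the alternative would look more or less like this
(hypothetical sketch, with made-up comments; not what I'm sending):

    /* in xl_cmdimpl.c */
    struct domain_create {
        /* ... all the existing fields ... */
        int *vcpu_to_pcpu;   /* would be filled in by parse_config_data() */
    };

with parse_config_data() growing a `struct domain_create *' parameter that
most of its callers would only pass as NULL (or as a dummy with just that
one field meaningful).
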
Thanks a lot for looking into this.
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)