From: Ian Campbell <ian.campbell@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Marcus Granado <Marcus.Granado@eu.citrix.com>,
Keir Fraser <keir@xen.org>, Matt Wilson <msw@amazon.com>,
Li Yechen <lccycc123@gmail.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Juergen Gross <juergen.gross@ts.fujitsu.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>,
xen-devel@lists.xen.org, Jan Beulich <JBeulich@suse.com>,
Justin Weaver <jtweaver@hawaii.edu>,
Elena Ufimtseva <ufimtseva@gmail.com>
Subject: Re: [PATCH v3 10/14] libxc: get and set soft and hard affinity
Date: Tue, 19 Nov 2013 17:08:58 +0000 [thread overview]
Message-ID: <1384880938.16252.37.camel@hastur.hellion.org.uk> (raw)
In-Reply-To: <20131118181805.31002.28692.stgit@Solace>
On Mon, 2013-11-18 at 19:18 +0100, Dario Faggioli wrote:
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
There are a few preexisting issues with the setaffinity function, but
this just duplicates them into the new cpumap, so I don't see any point
in holding up the series for them. Perhaps you could put them on your
todo list?
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index f9ae4bf..bddf4e0 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -192,44 +192,52 @@ int xc_domain_node_getaffinity(xc_interface *xch,
> int xc_vcpu_setaffinity(xc_interface *xch,
> uint32_t domid,
> int vcpu,
> - xc_cpumap_t cpumap)
> + xc_cpumap_t cpumap,
> + uint32_t flags,
> + xc_cpumap_t ecpumap_out)
> {
> DECLARE_DOMCTL;
> - DECLARE_HYPERCALL_BUFFER(uint8_t, local);
> + DECLARE_HYPERCALL_BUFFER(uint8_t, cpumap_local);
> + DECLARE_HYPERCALL_BUFFER(uint8_t, ecpumap_local);
> int ret = -1;
> int cpusize;
>
> cpusize = xc_get_cpumap_size(xch);
> - if (!cpusize)
> + if ( !cpusize )
> {
> PERROR("Could not get number of cpus");
> - goto out;
> + return -1;;
Double ";;"?
> }
>
> - local = xc_hypercall_buffer_alloc(xch, local, cpusize);
> - if ( local == NULL )
> + cpumap_local = xc_hypercall_buffer_alloc(xch, cpumap_local, cpusize);
> + ecpumap_local = xc_hypercall_buffer_alloc(xch, ecpumap_local, cpusize);
> + if ( cpumap_local == NULL || cpumap_local == NULL)
> {
> - PERROR("Could not allocate memory for setvcpuaffinity domctl hypercall");
> + PERROR("Could not allocate hcall buffers for DOMCTL_setvcpuaffinity");
> goto out;
> }
>
> domctl.cmd = XEN_DOMCTL_setvcpuaffinity;
> domctl.domain = (domid_t)domid;
> domctl.u.vcpuaffinity.vcpu = vcpu;
> - /* Soft affinity is there, but not used anywhere for now, so... */
> - domctl.u.vcpuaffinity.flags = XEN_VCPUAFFINITY_HARD;
> -
> - memcpy(local, cpumap, cpusize);
> -
> - set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, local);
> + domctl.u.vcpuaffinity.flags = flags;
>
> + memcpy(cpumap_local, cpumap, cpusize);
This risks running off the end of the supplied cpumap, if it is smaller
than cpusize.
But more importantly why is this not using the hypercall buffer bounce
mechanism?
> + set_xen_guest_handle(domctl.u.vcpuaffinity.cpumap.bitmap, cpumap_local);
> domctl.u.vcpuaffinity.cpumap.nr_bits = cpusize * 8;
>
> + set_xen_guest_handle(domctl.u.vcpuaffinity.eff_cpumap.bitmap,
> + ecpumap_local);
> + domctl.u.vcpuaffinity.eff_cpumap.nr_bits = cpusize * 8;
> +
> ret = do_domctl(xch, &domctl);
>
> - xc_hypercall_buffer_free(xch, local);
> + if ( ecpumap_out != NULL )
> + memcpy(ecpumap_out, ecpumap_local, cpusize);
Likewise this risks overrunning ecpumap_out, doesn't it?
> out:
> + xc_hypercall_buffer_free(xch, cpumap_local);
> + xc_hypercall_buffer_free(xch, ecpumap_local);
> return ret;
> }
>