From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, george.dunlap@eu.citrix.com,
	msw@linux.com, dario.faggioli@citrix.com, lccycc123@gmail.com,
	ian.jackson@eu.citrix.com, xen-devel@lists.xen.org,
	JBeulich@suse.com
Subject: Re: [PATCH v8 4/9] libxc: Introduce xc_domain_setvnuma to set vNUMA
Date: Wed, 27 Aug 2014 16:31:15 -0400	[thread overview]
Message-ID: <20140827203115.GD10321@laptop.dumpdata.com> (raw)
In-Reply-To: <1409039106-955-2-git-send-email-ufimtseva@gmail.com>

On Tue, Aug 26, 2014 at 03:45:01AM -0400, Elena Ufimtseva wrote:
> With the introduction of XEN_DOMCTL_setvnumainfo
> in the patch titled "xen: vnuma topology and subop hypercalls",
> this patch adds the toolstack plumbing to use it. The caller
> is allowed to invoke it multiple times if they wish to do so.
> It will error out if nr_vnodes or nr_vcpus is zero.
> 
> Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> ---
>  tools/libxc/xc_domain.c |   65 +++++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxc/xenctrl.h   |   10 ++++++++
>  2 files changed, 75 insertions(+)
> 
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c67ac9a..1708766 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -2124,6 +2124,71 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
>      return do_domctl(xch, &domctl);
>  }
>  
> +/* Plumbing Xen with vNUMA topology */
> +int xc_domain_setvnuma(xc_interface *xch,
> +                       uint32_t domid,
> +                       uint32_t nr_vnodes,
> +                       uint32_t nr_regions,
> +                       uint32_t nr_vcpus,
> +                       vmemrange_t *vmemrange,
> +                       unsigned int *vdistance,
> +                       unsigned int *vcpu_to_vnode,
> +                       unsigned int *vnode_to_pnode)
> +{
> +    int rc;
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BOUNCE(vmemrange, sizeof(*vmemrange) * nr_regions,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vdistance, sizeof(*vdistance) *
> +                             nr_vnodes * nr_vnodes,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vcpu_to_vnode, sizeof(*vcpu_to_vnode) * nr_vcpus,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vnode_to_pnode, sizeof(*vnode_to_pnode) *
> +                             nr_vnodes,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    errno = EINVAL;
> +
> +    if ( nr_vnodes == 0 || nr_regions == 0 || nr_regions < nr_vnodes )
> +        return -1;
> +
> +    if ( !vdistance || !vcpu_to_vnode || !vmemrange || !vnode_to_pnode )
> +    {
> +        PERROR("%s: Can't set vNUMA without initializing topology", __func__);
> +        return -1;
> +    }
> +
> +    if ( xc_hypercall_bounce_pre(xch, vmemrange)      ||
> +         xc_hypercall_bounce_pre(xch, vdistance)      ||
> +         xc_hypercall_bounce_pre(xch, vcpu_to_vnode)  ||
> +         xc_hypercall_bounce_pre(xch, vnode_to_pnode) )
> +    {
> +        rc = -1;
> +        goto vnumaset_fail;
> +    }
> +
> +    set_xen_guest_handle(domctl.u.vnuma.vmemrange, vmemrange);
> +    set_xen_guest_handle(domctl.u.vnuma.vdistance, vdistance);
> +    set_xen_guest_handle(domctl.u.vnuma.vcpu_to_vnode, vcpu_to_vnode);
> +    set_xen_guest_handle(domctl.u.vnuma.vnode_to_pnode, vnode_to_pnode);
> +
> +    domctl.cmd = XEN_DOMCTL_setvnumainfo;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.vnuma.nr_vnodes = nr_vnodes;
> +    domctl.u.vnuma.nr_regions = nr_regions;
> +
> +    rc = do_domctl(xch, &domctl);
> +
> + vnumaset_fail:
> +    xc_hypercall_bounce_post(xch, vmemrange);
> +    xc_hypercall_bounce_post(xch, vdistance);
> +    xc_hypercall_bounce_post(xch, vcpu_to_vnode);
> +    xc_hypercall_bounce_post(xch, vnode_to_pnode);
> +
> +    return rc;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 1c5d0db..1c8aa42 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -1245,6 +1245,16 @@ int xc_domain_set_memmap_limit(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long map_limitkb);
>  
> +int xc_domain_setvnuma(xc_interface *xch,
> +                        uint32_t domid,
> +                        uint32_t nr_vnodes,
> +                        uint32_t nr_regions,
> +                        uint32_t nr_vcpus,
> +                        vmemrange_t *vmemrange,
> +                        unsigned int *vdistance,
> +                        unsigned int *vcpu_to_vnode,
> +                        unsigned int *vnode_to_pnode);
> +
>  #if defined(__i386__) || defined(__x86_64__)
>  /*
>   * PC BIOS standard E820 types and structure.
> -- 
> 1.7.10.4
> 
