From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, george.dunlap@eu.citrix.com,
	msw@linux.com, dario.faggioli@citrix.com, lccycc123@gmail.com,
	ian.jackson@eu.citrix.com, xen-devel@lists.xen.org,
	JBeulich@suse.com
Subject: Re: [PATCH v4 2/7] libxc: Plumb Xen with vNUMA topology for domain
Date: Mon, 16 Dec 2013 14:16:46 -0500	[thread overview]
Message-ID: <20131216191646.GA25548@phenom.dumpdata.com> (raw)
In-Reply-To: <1386136035-19544-3-git-send-email-ufimtseva@gmail.com>

On Wed, Dec 04, 2013 at 12:47:10AM -0500, Elena Ufimtseva wrote:
> Per-domain vNUMA topology initialization.
> A domctl hypercall is used to set the vNUMA topology
> of a domU at domain build time.
> 
> Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
> ---
>  tools/libxc/xc_domain.c |   64 +++++++++++++++++++++++++++++++++++++++++++++++
>  tools/libxc/xenctrl.h   |    9 +++++++
>  2 files changed, 73 insertions(+)
> 
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index 1ccafc5..a436a3a 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -1776,6 +1776,70 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
>      return do_domctl(xch, &domctl);
>  }
>  
> +/* Plumbs Xen with vNUMA topology */
> +int xc_domain_setvnuma(xc_interface *xch,
> +                        uint32_t domid,
> +                        uint16_t nr_vnodes,
> +                        uint16_t nr_vcpus,
> +                        vmemrange_t *vmemrange,
> +                        unsigned int *vdistance,
> +                        unsigned int *vcpu_to_vnode,
> +                        unsigned int *vnode_to_pnode)
> +{
> +    int rc;
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BOUNCE(vmemrange, sizeof(*vmemrange) * nr_vnodes,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vdistance, sizeof(*vdistance) *
> +                                    nr_vnodes * nr_vnodes,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vcpu_to_vnode, sizeof(*vcpu_to_vnode) * nr_vcpus,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    DECLARE_HYPERCALL_BOUNCE(vnode_to_pnode, sizeof(*vnode_to_pnode) *
> +                                    nr_vnodes,
> +                                    XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> +    if ( nr_vnodes == 0 ) {
> +        errno = EINVAL;
> +        return -1;
> +    }
> +
> +    if ( vdistance == NULL || vcpu_to_vnode == NULL ||
> +         vmemrange == NULL || vnode_to_pnode == NULL ) {

You could also write this as:

	if ( !vdistance || !vcpu_to_vnode || !...
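
which, spelled out for all four pointers checked in the patch above, would presumably read:

	if ( !vdistance || !vcpu_to_vnode || !vmemrange || !vnode_to_pnode )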

> +        PERROR("Incorrect parameters for XEN_DOMCTL_setvnumainfo.\n");
> +        errno = EINVAL;
> +        return -1;
> +    }
> +
> +    if ( xc_hypercall_bounce_pre(xch, vmemrange)      ||
> +         xc_hypercall_bounce_pre(xch, vdistance)      ||
> +         xc_hypercall_bounce_pre(xch, vcpu_to_vnode)  ||
> +         xc_hypercall_bounce_pre(xch, vnode_to_pnode) ) {
> +        PERROR("Could not bounce buffer for xc_domain_setvnuma.\n");
> +        return -1;
> +    }
> +
> +    set_xen_guest_handle(domctl.u.vnuma.vmemrange, vmemrange);
> +    set_xen_guest_handle(domctl.u.vnuma.vdistance, vdistance);
> +    set_xen_guest_handle(domctl.u.vnuma.vcpu_to_vnode, vcpu_to_vnode);
> +    set_xen_guest_handle(domctl.u.vnuma.vnode_to_pnode, vnode_to_pnode);
> +
> +    domctl.cmd = XEN_DOMCTL_setvnumainfo;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.vnuma.nr_vnodes = nr_vnodes;
> +    domctl.u.vnuma.__pad = 0;

I don't think you need to set the __pad to anything.


Otherwise 'Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>'
> +
> +    rc = do_domctl(xch, &domctl);
> +
> +    xc_hypercall_bounce_post(xch, vmemrange);
> +    xc_hypercall_bounce_post(xch, vdistance);
> +    xc_hypercall_bounce_post(xch, vcpu_to_vnode);
> +    xc_hypercall_bounce_post(xch, vnode_to_pnode);
> +
> +    if ( rc )
> +        errno = EFAULT;
> +    return rc;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 4ac6b8a..f360726 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -1136,6 +1136,15 @@ int xc_domain_set_memmap_limit(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long map_limitkb);
>  
> +int xc_domain_setvnuma(xc_interface *xch,
> +                        uint32_t domid,
> +                        uint16_t nr_vnodes,
> +                        uint16_t nr_vcpus,
> +                        vmemrange_t *vmemrange,
> +                        unsigned int *vdistance,
> +                        unsigned int *vcpu_to_vnode,
> +                        unsigned int *vnode_to_pnode);
> +
>  #if defined(__i386__) || defined(__x86_64__)
>  /*
>   * PC BIOS standard E820 types and structure.
> -- 
> 1.7.10.4
> 
> 
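For context, a caller in the toolstack might use the new xc_domain_setvnuma() roughly as follows. This is only a sketch under my own assumptions (two vnodes, four vcpus, an identity vnode-to-pnode map, and a helper name invented for illustration); the vmemrange entries are deliberately left zeroed, since their layout comes from patch 1/7 of this series, not from this patch:

    #include <string.h>
    #include <xenctrl.h>    /* assumes this series is applied */

    /* Hypothetical caller: describe a guest with two virtual NUMA nodes
     * and four vcpus, split evenly across the nodes. */
    static int set_example_vnuma(xc_interface *xch, uint32_t domid)
    {
        unsigned int vdistance[2 * 2]  = { 10, 20,
                                           20, 10 };    /* 2x2 distance table */
        unsigned int vcpu_to_vnode[4]  = { 0, 0, 1, 1 }; /* vcpus 0-1 -> vnode 0, 2-3 -> vnode 1 */
        unsigned int vnode_to_pnode[2] = { 0, 1 };       /* vnode -> physical node */
        vmemrange_t vmemrange[2];                        /* one memory range per vnode */

        /* The per-node memory ranges would be filled in here from the
         * guest's memory layout (see the vmemrange definition in patch 1/7). */
        memset(vmemrange, 0, sizeof(vmemrange));

        return xc_domain_setvnuma(xch, domid, 2 /* nr_vnodes */,
                                  4 /* nr_vcpus */, vmemrange, vdistance,
                                  vcpu_to_vnode, vnode_to_pnode);
    }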

Thread overview: 15+ messages
2013-12-04  5:47 [PATCH v4 0/7] vNUMA introduction Elena Ufimtseva
2013-12-04  5:47 ` [PATCH v4 1/7] xen: vNUMA support for PV guests Elena Ufimtseva
2013-12-04 11:34   ` Jan Beulich
2013-12-04 18:02     ` Elena Ufimtseva
2013-12-04  5:47 ` [PATCH v4 2/7] libxc: Plumb Xen with vNUMA topology for domain Elena Ufimtseva
2013-12-16 19:16   ` Konrad Rzeszutek Wilk [this message]
2013-12-04  5:47 ` [PATCH v4 3/7] xl: vnuma memory parsing and supplement functions Elena Ufimtseva
2013-12-16 19:57   ` Konrad Rzeszutek Wilk
2013-12-04  5:47 ` [PATCH v4 4/7] xl: vnuma distance, vcpu and pnode masks parser Elena Ufimtseva
2013-12-04  5:47 ` [PATCH v4 5/7] libxc: vnuma memory domain allocation Elena Ufimtseva
2013-12-04  5:47 ` [PATCH v4 6/7] libxl: vNUMA supporting interface Elena Ufimtseva
2013-12-04  5:47 ` [PATCH v4 7/7] xen: adds vNUMA info debug-key u Elena Ufimtseva
2013-12-04 11:23   ` Jan Beulich
2014-02-13 12:49 ` [PATCH v4 0/7] vNUMA introduction Li Yechen
2014-02-13 16:26   ` Elena Ufimtseva
