From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "Tim (Xen.org)" <tim@xen.org>,
Dan Magenheimer <dan.magenheimer@oracle.com>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
"Keir (Xen.org)" <keir@xen.org>,
"konrad@kernel.org" <konrad@kernel.org>
Subject: Re: [PATCH 2/6] xc: use XENMEM_claim_pages during guest creation.
Date: Wed, 13 Mar 2013 10:42:30 -0400
Message-ID: <20130313144230.GF25782@phenom.dumpdata.com>
In-Reply-To: <1363170195.32410.124.camel@zakaz.uk.xensource.com>
On Wed, Mar 13, 2013 at 10:23:15AM +0000, Ian Campbell wrote:
> On Mon, 2013-03-11 at 14:20 +0000, Konrad Rzeszutek Wilk wrote:
> > From: Dan Magenheimer <dan.magenheimer@oracle.com>
> >
> > We add an extra parameter to the structures passed to the
> > PV routine (arch_setup_meminit) and HVM routine (setup_guest)
> > that determines whether the claim hypercall is to be done.
> >
> > The 'claim_mode' field is defined as a 'uint8_t' in case
> > the hypercall expands in the future with extra flags (for
> > example, per-NUMA allocation). For now the valid values
> > are: 0 to disable it or 1 to enable it.
>
> At the hypercall layer mem_flags is 32 bits, while this is only 8? But
> also it seems that claim_mode at the libxc layer doesn't really
> correspond directly to mem_flags anyway, which this commentary seems to
> suggest it somehow does.
>
> I think it would be less confusing to just have "int claim_enabled" for
> now and add a claim_flags if and when they come into being.
Yes! Thanks for spotting that.
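Will switch it to 'int claim_enabled' - something like the below in
tools/libxc/xc_dom.h (a sketch, with xc_hvm_build_args getting the same
treatment):

     domid_t guest_domid;
     int8_t vhpt_size_log2; /* for IA64 */
     int8_t superpages;
-    uint8_t claim_mode; /* 0 by default disables it, 1 enables it */
+    int claim_enabled; /* nonzero: stake claim via XENMEM_claim_pages */
     int shadow_enabled;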
>
> >
> > Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> > tools/libxc/xc_dom.h | 1 +
> > tools/libxc/xc_dom_x86.c | 12 ++++++++++++
> > tools/libxc/xc_domain.c | 24 ++++++++++++++++++++++++
> > tools/libxc/xc_hvm_build_x86.c | 17 +++++++++++++++++
> > tools/libxc/xenctrl.h | 6 ++++++
> > tools/libxc/xenguest.h | 2 ++
> > 6 files changed, 62 insertions(+)
> >
> > diff --git a/tools/libxc/xc_dom.h b/tools/libxc/xc_dom.h
> > index 779b9d4..34a6e38 100644
> > --- a/tools/libxc/xc_dom.h
> > +++ b/tools/libxc/xc_dom.h
> > @@ -135,6 +135,7 @@ struct xc_dom_image {
> > domid_t guest_domid;
> > int8_t vhpt_size_log2; /* for IA64 */
> > int8_t superpages;
> > + uint8_t claim_mode; /* 0 by default disables it, 1 enables it */
> > int shadow_enabled;
> >
> > int xen_version;
> > diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> > index eb9ac07..c1b7905 100644
> > --- a/tools/libxc/xc_dom_x86.c
> > +++ b/tools/libxc/xc_dom_x86.c
> > @@ -706,6 +706,13 @@ int arch_setup_meminit(struct xc_dom_image *dom)
> > }
> > else
> > {
> > + /* try to claim pages for early warning of insufficient memory avail */
> > + if ( dom->claim_mode ) {
> > + rc = xc_domain_claim_pages(dom->xch, dom->guest_domid,
> > + dom->total_pages);
> > + if ( rc )
> > + return rc;
> > + }
> > /* setup initial p2m */
> > for ( pfn = 0; pfn < dom->total_pages; pfn++ )
> > dom->p2m_host[pfn] = pfn;
> > @@ -722,6 +729,11 @@ int arch_setup_meminit(struct xc_dom_image *dom)
> > dom->xch, dom->guest_domid, allocsz,
> > 0, 0, &dom->p2m_host[i]);
> > }
> > +
> > + /* Ensure no unclaimed pages are left unused.
> > + * OK to call even if the earlier claim call wasn't made. */
> > + xc_domain_claim_pages(dom->xch, dom->guest_domid,
> > + 0 /* cancels the claim */);
> > }
> >
> > return rc;
> > diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> > index 480ce91..b8a345c 100644
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -775,6 +775,30 @@ int xc_domain_add_to_physmap(xc_interface *xch,
> > return do_memory_op(xch, XENMEM_add_to_physmap, &xatp, sizeof(xatp));
> > }
> >
> > +int xc_domain_claim_pages(xc_interface *xch,
> > + uint32_t domid,
> > + unsigned long nr_pages)
> > +{
> > + int err;
> > + xen_pfn_t *extent_start = NULL;
> > + DECLARE_HYPERCALL_BOUNCE(extent_start, 0, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
> > + struct xen_memory_reservation reservation = {
> > + .nr_extents = nr_pages,
> > + .extent_order = 0,
> > + .mem_flags = 0, /* no flags */
> > + .domid = domid
> > + };
> > +
> > + set_xen_guest_handle(reservation.extent_start, extent_start);
>
> This is unused? I think you just need:
> set_xen_guest_handle(reservation.extent_start, HYPERCALL_BUFFER_NULL);
> and drop the declaration of the bounce above.
>
> (personally I think a new arg struct for this subop would have been more
> obvious than forcing it into the reservation struct, but what's done is
> done)
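Ack, will use HYPERCALL_BUFFER_NULL and drop the bounce declaration. The
function then collapses to roughly this (an untested sketch of your
suggestion):

    int xc_domain_claim_pages(xc_interface *xch, uint32_t domid,
                              unsigned long nr_pages)
    {
        struct xen_memory_reservation reservation = {
            .nr_extents   = nr_pages,
            .extent_order = 0,
            .mem_flags    = 0, /* no flags */
            .domid        = domid
        };

        /* No extents are passed; the subop only looks at the counts. */
        set_xen_guest_handle(reservation.extent_start, HYPERCALL_BUFFER_NULL);

        return do_memory_op(xch, XENMEM_claim_pages, &reservation,
                            sizeof(reservation));
    }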
>
> > + err = do_memory_op(xch, XENMEM_claim_pages, &reservation, sizeof(reservation));
> > + return err;
> > +}
> > +unsigned long xc_domain_get_outstanding_pages(xc_interface *xch)
> > +{
> > + return do_memory_op(xch, XENMEM_get_outstanding_pages, NULL, 0);
> > +}
> > +
> > int xc_domain_populate_physmap(xc_interface *xch,
> > uint32_t domid,
> > unsigned long nr_extents,
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 3b5d777..b6b56b3 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -252,6 +252,7 @@ static int setup_guest(xc_interface *xch,
> > unsigned long stat_normal_pages = 0, stat_2mb_pages = 0,
> > stat_1gb_pages = 0;
> > int pod_mode = 0;
> > + unsigned int claim_mode = args->claim_mode;
> >
> > if ( nr_pages > target_pages )
> > pod_mode = XENMEMF_populate_on_demand;
> > @@ -329,6 +330,16 @@ static int setup_guest(xc_interface *xch,
> > xch, dom, 0xa0, 0, pod_mode, &page_array[0x00]);
> > cur_pages = 0xc0;
> > stat_normal_pages = 0xc0;
> > +
> > + /* try to claim pages for early warning of insufficient memory available */
> > + if ( claim_mode ) {
> > + rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > + if ( rc != 0 )
> > + {
> > + PERROR("Could not allocate memory for HVM guest.");
> > + goto error_out;
> > + }
> > + }
> > while ( (rc == 0) && (nr_pages > cur_pages) )
> > {
> > /* Clip count to maximum 1GB extent. */
> > @@ -506,10 +517,16 @@ static int setup_guest(xc_interface *xch,
> > munmap(page0, PAGE_SIZE);
> > }
> >
> > + /* ensure no unclaimed pages are left unused */
> > + xc_domain_claim_pages(xch, dom, 0 /* cancels the claim */);
>
> This cannot fail?
It can (say we use an older hypervisor that does not have this subop), in
which case the call returns -ENOSYS - which is OK. But if the hypervisor
does implement the subop then - yes - this call should not fail.
Though I should probably redo the first call to look more like this
(filling in the elided bits from the patch above):

    if ( claim_mode ) {
        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
        if ( rc == -ENOSYS )
            rc = 0; /* Whatever, hypervisor is out of sync. */
        if ( rc != 0 )
        {
            PERROR("Could not allocate memory for HVM guest.");
            goto error_out;
        }
    }

so that guest creation still works against a hypervisor without the subop.
>
> > +
> > free(page_array);
> > return 0;
> >
> > error_out:
> > + /* ensure no unclaimed pages are left unused */
> > + xc_domain_claim_pages(xch, dom, 0 /* cancels the claim */);
> > +
> > free(page_array);
> > return -1;
>
> Could we consolidate the error/success paths by returning rc?
Of course.
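E.g. something like this at the bottom of setup_guest (a sketch of the
consolidated exit path):

        rc = 0;
     error_out:
        /* Ensure no unclaimed pages are left unused; OK to call even
         * if no claim was staked. */
        xc_domain_claim_pages(xch, dom, 0 /* cancels the claim */);
        free(page_array);
        return rc;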
>
> > }
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > index 32122fd..e695456 100644
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -1129,6 +1129,12 @@ int xc_domain_populate_physmap_exact(xc_interface *xch,
> > unsigned int mem_flags,
> > xen_pfn_t *extent_start);
> >
> > +int xc_domain_claim_pages(xc_interface *xch,
> > + uint32_t domid,
> > + unsigned long nr_pages);
> > +
> > +unsigned long xc_domain_get_outstanding_pages(xc_interface *xch);
> > +
> > int xc_domain_memory_exchange_pages(xc_interface *xch,
> > int domid,
> > unsigned long nr_in_extents,
> > diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
> > index 7d4ac33..e3c879b 100644
> > --- a/tools/libxc/xenguest.h
> > +++ b/tools/libxc/xenguest.h
> > @@ -231,6 +231,8 @@ struct xc_hvm_build_args {
> >
> > /* Extra SMBIOS structures passed to HVMLOADER */
> > struct xc_hvm_firmware_module smbios_module;
> > + /* Whether to use claim hypercall (1 - enable, 0 - disable). */
> > + uint8_t claim_mode;
> > };
> >
> > /**
>
>