From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [PATCH] libxc: fix claim mode when creating HVM guest
Date: Wed, 29 Jan 2014 13:33:49 -0500	[thread overview]
Message-ID: <20140129183349.GA14312@phenom.dumpdata.com> (raw)
In-Reply-To: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>

On Mon, Jan 27, 2014 at 05:53:38PM +0000, Wei Liu wrote:
> The original code is wrong because:
> * claim mode wants to know the total number of pages needed while
>   original code provides the additional number of pages needed.
> * if pod is enabled memory will already be allocated by the time we try
>   to claim memory.
> 
> So the fix would be:
> * move claim mode before actual memory allocation.
> * pass the right number of pages to hypervisor.
> 
> The "right number of pages" should be number of pages of target memory
> minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
> 
> This fixes bug #32.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Konrad Wilk <konrad.wilk@oracle.com>

And also 'Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>'

Thank you!
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> WRT 4.4 release: this patch should be accepted, otherwise PoD + claim
> mode is completely broken. If this patch is deemed too complicated, we
> should flip the switch to disable claim mode by default for 4.4.
> ---
>  tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
>  1 file changed, 23 insertions(+), 13 deletions(-)
> 
> diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> index 77bd365..dd3b522 100644
> --- a/tools/libxc/xc_hvm_build_x86.c
> +++ b/tools/libxc/xc_hvm_build_x86.c
> @@ -49,6 +49,8 @@
>  #define NR_SPECIAL_PAGES     8
>  #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
>  
> +#define VGA_HOLE_SIZE (0x20)
> +
>  static int modules_init(struct xc_hvm_build_args *args,
>                          uint64_t vend, struct elf_binary *elf,
>                          uint64_t *mstart_out, uint64_t *mend_out)
> @@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
>      for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
>          page_array[i] += mmio_size >> PAGE_SHIFT;
>  
> +    /*
> +     * Try to claim pages for early warning of insufficient memory available.
> +     * This should go before xc_domain_set_pod_target, because that function
> +     * actually allocates memory for the guest. Claiming after memory has been
> +     * allocated is pointless.
> +     */
> +    if ( claim_enabled ) {
> +        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> +            goto error_out;
> +        }
> +    }
> +
>      if ( pod_mode )
>      {
>          /*
> -         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
> -         * adjust the PoD cache size so that domain tot_pages will be
> -         * target_pages - 0x20 after this call.
> +         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
> +         * "hole".  Xen will adjust the PoD cache size so that domain
> +         * tot_pages will be target_pages - VGA_HOLE_SIZE after
> +         * this call.
>           */
> -        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
> +        rc = xc_domain_set_pod_target(xch, dom,
> +                                      target_pages - VGA_HOLE_SIZE,
>                                        NULL, NULL, NULL);
>          if ( rc != 0 )
>          {
> @@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
>      cur_pages = 0xc0;
>      stat_normal_pages = 0xc0;
>  
> -    /* try to claim pages for early warning of insufficient memory available */
> -    if ( claim_enabled ) {
> -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> -        if ( rc != 0 )
> -        {
> -            PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");
> -            goto error_out;
> -        }
> -    }
>      while ( (rc == 0) && (nr_pages > cur_pages) )
>      {
>          /* Clip count to maximum 1GB extent. */
> -- 
> 1.7.10.4
> 

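For readers skimming the thread, here is a minimal standalone sketch of the
ordering the patch establishes: claim the full target (minus the VGA hole)
before xc_domain_set_pod_target() allocates guest memory, so the claim covers
the total number of pages rather than the remaining increment. The helper name
claim_then_set_pod and the simplified error handling are illustrative only;
the real code lives in setup_guest() in tools/libxc/xc_hvm_build_x86.c.

    #include <xenctrl.h>

    #define VGA_HOLE_SIZE 0x20  /* pages, as defined by the patch */

    static int claim_then_set_pod(xc_interface *xch, uint32_t dom,
                                  unsigned long target_pages,
                                  int claim_enabled, int pod_mode)
    {
        int rc;

        if ( claim_enabled )
        {
            /* Claim the total pages needed, not the increment left to allocate. */
            rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
            if ( rc != 0 )
                return rc;
        }

        if ( pod_mode )
        {
            /* This call already allocates memory for the guest, hence claim first. */
            rc = xc_domain_set_pod_target(xch, dom,
                                          target_pages - VGA_HOLE_SIZE,
                                          NULL, NULL, NULL);
            if ( rc != 0 )
                return rc;
        }

        return 0;
    }
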
Thread overview: 9+ messages
2014-01-27 17:53 [PATCH] libxc: fix claim mode when creating HVM guest Wei Liu
2014-01-27 19:09 ` George Dunlap
2014-01-27 19:15   ` Konrad Rzeszutek Wilk
2014-01-28 11:28   ` Ian Campbell
2014-01-29 18:33 ` Konrad Rzeszutek Wilk [this message]
2014-01-30 14:38   ` George Dunlap
2014-02-04 15:50     ` Ian Campbell
2014-02-04 15:52       ` Ian Campbell
2014-02-04 16:00         ` Processed: " xen
