From: George Dunlap <george.dunlap@eu.citrix.com>
To: xen-devel@lists.xensource.com
Cc: george.dunlap@eu.citrix.com
Subject: [PATCH] PoD: Fix domain build populate-on-demand cache allocation
Date: Mon, 9 Aug 2010 14:18:10 +0100
Message-ID: <c453b237894e0aee585b.1281359890@gdunlap-desktop>

Rather than trying to count the number of PoD entries we're putting in,
simply pass the target number of pages minus the VGA hole and let the
hypervisor do the calculation.
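
For illustration only (not part of the patch), a minimal sketch of the
resulting call pattern in the domain builder; VGA_HOLE_PAGES is a name
introduced here for clarity, the patch itself uses the literal 0x20:

    /* Sketch only -- VGA_HOLE_PAGES is a hypothetical name for the
     * literal 0x20 used in the patch below. */
    #define VGA_HOLE_PAGES 0x20

    if ( pod_mode )
        /* Xen sizes the PoD cache so that the domain's tot_pages ends
         * up at target_pages - VGA_HOLE_PAGES after this call. */
        rc = xc_domain_memory_set_pod_target(xch, dom,
                                             target_pages - VGA_HOLE_PAGES,
                                             NULL, NULL, NULL);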

Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>

diff -r fe930e1b2ce8 -r c453b237894e tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c	Fri Aug 06 18:35:02 2010 +0100
+++ b/tools/libxc/xc_hvm_build.c	Mon Aug 09 14:18:02 2010 +0100
@@ -123,7 +123,6 @@
     xen_pfn_t *page_array = NULL;
     unsigned long i, nr_pages = (unsigned long)memsize << (20 - PAGE_SHIFT);
     unsigned long target_pages = (unsigned long)target << (20 - PAGE_SHIFT);
-    unsigned long pod_pages = 0;
     unsigned long entry_eip, cur_pages;
     void *hvm_info_page;
     uint32_t *ident_pt;
@@ -237,11 +236,6 @@
             {
                 stat_1gb_pages += done;
                 done <<= SUPERPAGE_1GB_SHIFT;
-                if ( pod_mode && target_pages > cur_pages )
-                {
-                    int d = target_pages - cur_pages;
-                    pod_pages += ( done < d ) ? done : d;
-                }
                 cur_pages += done;
                 count -= done;
             }
@@ -284,11 +278,6 @@
                 {
                     stat_2mb_pages += done;
                     done <<= SUPERPAGE_2MB_SHIFT;
-                    if ( pod_mode && target_pages > cur_pages )
-                    {
-                        int d = target_pages - cur_pages;
-                        pod_pages += ( done < d ) ? done : d;
-                    }
                     cur_pages += done;
                     count -= done;
                 }
@@ -302,15 +291,16 @@
                 xch, dom, count, 0, 0, &page_array[cur_pages]);
             cur_pages += count;
             stat_normal_pages += count;
-            if ( pod_mode )
-                pod_pages -= count;
         }
     }
 
+    /* Subtract 0x20 from target_pages for the VGA "hole".  Xen will
+     * adjust the PoD cache size so that domain tot_pages will be
+     * target_pages - 0x20 after this call. */
     if ( pod_mode )
         rc = xc_domain_memory_set_pod_target(xch,
                                              dom,
-                                             pod_pages,
+                                             target_pages - 0x20,
                                              NULL, NULL, NULL);
 
     if ( rc != 0 )
