qemu-devel.nongnu.org archive mirror
From: Anthony PERARD <anthony.perard@citrix.com>
To: QEMU-devel <qemu-devel@nongnu.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Xen Devel <xen-devel@lists.xensource.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Qemu-devel] [PATCH 4/5] xen: Change memory access behavior during migration.
Date: Thu, 24 Nov 2011 16:08:12 +0000	[thread overview]
Message-ID: <1322150893-887-5-git-send-email-anthony.perard@citrix.com> (raw)
In-Reply-To: <1322150893-887-1-git-send-email-anthony.perard@citrix.com>

Do not allocate RAM during the pre-migration runstate; Xen has already populated it.
Do not actually perform set_memory during incoming migration; just record the mapping in the physmap list.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen-all.c |   13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 40e8869..279651a 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -191,6 +191,11 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size)
 
     trace_xen_ram_alloc(ram_addr, size);
 
+    if (runstate_check(RUN_STATE_PREMIGRATE)) {
+        /* RAM already populated in Xen */
+        return;
+    }
+
     nr_pfn = size >> TARGET_PAGE_BITS;
     pfn_list = g_malloc(sizeof (*pfn_list) * nr_pfn);
 
@@ -271,6 +276,13 @@ go_physmap:
     DPRINTF("mapping vram to %llx - %llx, from %llx\n",
             start_addr, start_addr + size, phys_offset);
 
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        /* The mapping has already been established and cannot be done a
+         * second time, so just add it to the physmap list instead.
+         */
+        goto done;
+    }
+
     pfn = phys_offset >> TARGET_PAGE_BITS;
     start_gpfn = start_addr >> TARGET_PAGE_BITS;
     for (i = 0; i < size >> TARGET_PAGE_BITS; i++) {
@@ -285,6 +297,7 @@ go_physmap:
         }
     }
 
+done:
     physmap = g_malloc(sizeof (XenPhysmap));
 
     physmap->start_addr = start_addr;
-- 
Anthony PERARD

Thread overview: 14+ messages
2011-11-24 16:08 [Qemu-devel] [PATCH 0/5] Have a working migration with Xen Anthony PERARD
2011-11-24 16:08 ` [Qemu-devel] [PATCH 1/5] vl.c: Do not save RAM state when Xen is used Anthony PERARD
2011-11-24 17:23   ` Stefano Stabellini
2011-11-24 18:06     ` Anthony PERARD
2011-11-24 16:08 ` [Qemu-devel] [PATCH 2/5] xen mapcache: Check if a memory space has moved Anthony PERARD
2011-11-24 17:40   ` Stefano Stabellini
2011-11-24 17:57   ` Stefano Stabellini
2011-11-24 16:08 ` [Qemu-devel] [PATCH 3/5] Introduce premigrate RunState Anthony PERARD
2011-11-24 16:08 ` Anthony PERARD [this message]
2011-11-24 16:08 ` [Qemu-devel] [PATCH 5/5] vga-cirrus: Workaround during restore when using Xen Anthony PERARD
2011-11-24 18:30   ` Stefano Stabellini
2011-11-24 18:49     ` Anthony PERARD
2011-11-25 11:51       ` Stefano Stabellini
2011-11-25 12:33         ` Anthony PERARD
