From: Huang Ying <ying.huang@intel.com>
To: Avi Kivity <avi@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	Anthony Liguori <aliguori@linux.vnet.ibm.com>
Cc: Dean Nelson <dnelson@redhat.com>,
	Andi Kleen <andi@firstfloor.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: [Qemu-devel] [PATCH uq/master 2/2] MCE, unpoison memory address across reboot
Date: Thu, 13 Jan 2011 16:34:45 +0800
Message-ID: <1294907685.4596.44.camel@yhuang-dev>

In the Linux kernel's HWPoison handling, the virtual addresses in
processes that map the faulty physical memory page are marked as
HWPoison, so any further access to those addresses kills the
corresponding processes with SIGBUS.
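
As background (a minimal sketch, not part of this patch): a process
can observe such an error by installing a SIGBUS handler with
SA_SIGINFO; for hardware memory errors the kernel sets si_code to
BUS_MCEERR_AR (synchronous access) or BUS_MCEERR_AO (asynchronous
notification) and si_addr to the affected virtual address.  Something
like:

    /* Sketch: observing an HWPoison SIGBUS in an ordinary process. */
    #define _GNU_SOURCE         /* for BUS_MCEERR_AR / BUS_MCEERR_AO */
    #include <signal.h>
    #include <string.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
    {
        if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO) {
            /* si->si_addr is the poisoned virtual address. */
            static const char msg[] = "hardware memory error\n";
            write(STDERR_FILENO, msg, sizeof(msg) - 1);
        }
        _exit(EXIT_FAILURE);
    }

    int main(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = sigbus_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGBUS, &sa, NULL);
        /* ... a later access to a poisoned page now raises SIGBUS ... */
        return 0;
    }

This is the signal QEMU itself receives in kvm_on_sigbus() below.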

If the faulty physical memory page is used by a KVM guest, the SIGBUS
is delivered to QEMU, and QEMU simulates an MCE to report the memory
error to the guest OS.  If the guest OS cannot recover from the error
(for example, because the page is accessed by kernel code), it reboots
the system.  But because the host virtual address backing the guest
physical memory is still poisoned, any guest access to the
corresponding guest physical memory after the reboot raises SIGBUS in
QEMU again and another MCE is simulated.  That is, the guest cannot
recover by rebooting.

In fact, the contents of a guest physical memory page need not be
preserved across a reboot.  We can allocate a new host physical page
to back the corresponding guest physical address.
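
As an illustration (an assumption-laden sketch, not the patch itself,
using a made-up helper name and assuming the guest RAM block is an
anonymous private mapping): mapping fresh anonymous memory over the
poisoned range with MAP_FIXED drops that range's reference to the bad
page, so the next access is backed by a newly allocated zero page.
This is the effect the qemu_ram_remap() call below is intended to
achieve for the common anonymous-memory case.

    /* Sketch: replacing a poisoned anonymous page with a fresh one. */
    #include <sys/mman.h>
    #include <stdio.h>

    static int remap_fresh_page(void *host_addr, size_t page_size)
    {
        /* MAP_FIXED atomically replaces the old mapping, so this range
         * no longer references the poisoned physical page; the next
         * access faults in a new zero-filled page. */
        void *p = mmap(host_addr, page_size, PROT_READ | PROT_WRITE,
                       MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return -1;
        }
        return 0;
    }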

This patch fixes the issue in QEMU by calling qemu_ram_remap() on
reset to clear the corresponding page table entries, so that a fresh
page can be allocated and the guest can recover.

Signed-off-by: Huang Ying <ying.huang@intel.com>
---
 kvm.h             |    2 ++
 target-i386/kvm.c |   39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -580,6 +580,42 @@ static int kvm_get_supported_msrs(void)
     return ret;
 }
 
+struct HWPoisonPage;
+typedef struct HWPoisonPage HWPoisonPage;
+struct HWPoisonPage
+{
+    ram_addr_t ram_addr;
+    QLIST_ENTRY(HWPoisonPage) list;
+};
+
+static QLIST_HEAD(hwpoison_page_list, HWPoisonPage) hwpoison_page_list =
+    QLIST_HEAD_INITIALIZER(hwpoison_page_list);
+
+void kvm_unpoison_all(void *param)
+{
+    HWPoisonPage *page, *next_page;
+
+    QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
+        QLIST_REMOVE(page, list);
+        qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
+        qemu_free(page);
+    }
+}
+
+static void kvm_hwpoison_page_add(ram_addr_t ram_addr)
+{
+    HWPoisonPage *page;
+
+    QLIST_FOREACH(page, &hwpoison_page_list, list) {
+        if (page->ram_addr == ram_addr)
+            return;
+    }
+
+    page = qemu_malloc(sizeof(HWPoisonPage));
+    page->ram_addr = ram_addr;
+    QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
+}
+
 int kvm_arch_init(void)
 {
     uint64_t identity_base = 0xfffbc000;
@@ -632,6 +668,7 @@ int kvm_arch_init(void)
         fprintf(stderr, "e820_add_entry() table is full\n");
         return ret;
     }
+    qemu_register_reset(kvm_unpoison_all, NULL);
 
     return 0;
 }
@@ -1940,6 +1977,7 @@ int kvm_on_sigbus_vcpu(CPUState *env, in
                 hardware_memory_error();
             }
         }
+        kvm_hwpoison_page_add(ram_addr);
 
         if (code == BUS_MCEERR_AR) {
             /* Fake an Intel architectural Data Load SRAR UCR */
@@ -1984,6 +2022,7 @@ int kvm_on_sigbus(int code, void *addr)
                     "QEMU itself instead of guest system!: %p\n", addr);
             return 0;
         }
+        kvm_hwpoison_page_add(ram_addr);
         kvm_mce_inj_srao_memscrub2(first_cpu, paddr);
     } else
 #endif
--- a/kvm.h
+++ b/kvm.h
@@ -188,6 +188,8 @@ int kvm_physical_memory_addr_from_ram(ra
                                       target_phys_addr_t *phys_addr);
 #endif
 
+void kvm_unpoison_all(void *param);
+
 #endif
 int kvm_set_ioeventfd_mmio_long(int fd, uint32_t adr, uint32_t val, bool assign);
 
