From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
To: xen-devel@lists.xensource.com
Cc: olaf@aepfle.de, ian.campbell@citrix.com, andres@gridcentric.ca,
	tim@xen.org, keir.xen@gmail.com, ian.jackson@citrix.com,
	adin@gridcentric.ca
Subject: [PATCH 6 of 7] x86/mm: Clean up mem event structures on domain destruction
Date: Wed, 29 Feb 2012 21:43:51 -0500	[thread overview]
Message-ID: <26ee64c0034ebfebaf80.1330569831@xdev.gridcentric.ca> (raw)
In-Reply-To: <patchbomb.1330569825@xdev.gridcentric.ca>

 xen/arch/x86/mm/mem_event.c |  11 +++++++++++
 xen/common/domain.c         |   3 +++
 xen/include/asm-arm/mm.h    |   3 ++-
 xen/include/asm-ia64/mm.h   |   3 ++-
 xen/include/asm-x86/mm.h    |   2 ++
 5 files changed, 20 insertions(+), 2 deletions(-)


Without this cleanup we wind up with zombie domains that still hold references
to the mem event ring pages.

Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Acked-by: Tim Deegan <tim@xen.org>

diff -r d3d824ae525c -r 26ee64c0034e xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -487,6 +487,17 @@ int do_mem_event_op(int op, uint32_t dom
     return ret;
 }
 
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d)
+{
+    if ( d->mem_event->paging.ring_page )
+        (void)mem_event_disable(d, &d->mem_event->paging);
+    if ( d->mem_event->access.ring_page )
+        (void)mem_event_disable(d, &d->mem_event->access);
+    if ( d->mem_event->share.ring_page )
+        (void)mem_event_disable(d, &d->mem_event->share);
+}
+
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                      XEN_GUEST_HANDLE(void) u_domctl)
 {
diff -r d3d824ae525c -r 26ee64c0034e xen/common/domain.c
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -480,6 +480,9 @@ int domain_kill(struct domain *d)
             break;
         }
         d->is_dying = DOMDYING_dead;
+        /* Mem event cleanup has to go here because the rings 
+         * have to be put before we call put_domain. */
+        mem_event_cleanup(d);
         put_domain(d);
         send_global_virq(VIRQ_DOM_EXC);
         /* fallthrough */
diff -r d3d824ae525c -r 26ee64c0034e xen/include/asm-arm/mm.h
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -266,7 +266,8 @@ int  get_page(struct page_info *page, st
         machine_to_phys_mapping[(mfn)] = (pfn);                \
     })
 
-#define put_gfn(d, g)   ((void)0)
+#define put_gfn(d, g)           ((void)0)
+#define mem_event_cleanup(d)    ((void)0)
 
 #define INVALID_MFN             (~0UL)
 
diff -r d3d824ae525c -r 26ee64c0034e xen/include/asm-ia64/mm.h
--- a/xen/include/asm-ia64/mm.h
+++ b/xen/include/asm-ia64/mm.h
@@ -551,7 +551,8 @@ extern u64 translate_domain_pte(u64 ptev
     gmfn_to_mfn_foreign((_d), (gpfn))
 
 #define get_gfn_untyped(d, gpfn) gmfn_to_mfn(d, gpfn)
-#define put_gfn(d, g)   ((void)0)
+#define put_gfn(d, g)           ((void)0)
+#define mem_event_cleanup(d)    ((void)0)
 
 #define __gpfn_invalid(_d, gpfn)			\
 	(lookup_domain_mpa((_d), ((gpfn)<<PAGE_SHIFT), NULL) == INVALID_MFN)
diff -r d3d824ae525c -r 26ee64c0034e xen/include/asm-x86/mm.h
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -634,6 +634,8 @@ unsigned int domain_clamp_alloc_bitsize(
 
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
+void mem_event_cleanup(struct domain *d);
+
 extern struct domain *dom_xen, *dom_io, *dom_cow;	/* for vmcoreinfo */
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */

Thread overview: 11+ messages
2012-03-01  2:43 [PATCH 0 of 7] Mem event ring interface setup update, V2 Andres Lagar-Cavilla
2012-03-01  2:43 ` [PATCH 1 of 7] Tools: Sanitize mem_event/access/paging interfaces Andres Lagar-Cavilla
2012-03-01  2:43 ` [PATCH 2 of 7] x86/hvm: refactor calls to prepare and tear down a helper ring Andres Lagar-Cavilla
2012-03-01  2:43 ` [PATCH 3 of 7] Use a reserved pfn in the guest address space to store mem event rings Andres Lagar-Cavilla
2012-03-01  2:43 ` [PATCH 4 of 7] x86/mm: wire up sharing ring Andres Lagar-Cavilla
2012-03-01  2:43 ` [PATCH 5 of 7] Tools: libxc side for setting up the mem " Andres Lagar-Cavilla
2012-03-01  2:43 ` Andres Lagar-Cavilla [this message]
2012-03-01  6:54   ` [PATCH 6 of 7] x86/mm: Clean up mem event structures on domain destruction Olaf Hering
2012-03-01  2:43 ` [PATCH 7 of 7] x86/mm: Fix mem event error message typos Andres Lagar-Cavilla
  -- strict thread matches above, loose matches on Subject: below --
2012-02-23  6:05 [PATCH 0 of 7] Mem event ring setup interface update Andres Lagar-Cavilla
2012-02-23  6:05 ` [PATCH 6 of 7] x86/mm: Clean up mem event structures on domain destruction Andres Lagar-Cavilla
2012-02-23 14:32   ` Olaf Hering
