From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
To: xen-devel@lists.xensource.com
Cc: ian.campbell@citrix.com, andres@gridcentric.ca, tim@xen.org,
JBeulich@suse.com, ian.jackson@citrix.com, adin@gridcentric.ca
Subject: [PATCH 6 of 8] x86/mm: Clean up mem event structures on domain destruction
Date: Tue, 06 Mar 2012 18:50:28 -0500
Message-ID: <a1ad4c2eccce658288b3.1331077828@xdev.gridcentric.ca>
In-Reply-To: <patchbomb.1331077822@xdev.gridcentric.ca>
 xen/arch/x86/mm/mem_event.c |  11 +++++++++++
 xen/common/domain.c         |   3 +++
 xen/include/asm-arm/mm.h    |   3 ++-
 xen/include/asm-ia64/mm.h   |   3 ++-
 xen/include/asm-x86/mm.h    |   6 ++++++
 5 files changed, 24 insertions(+), 2 deletions(-)
Otherwise we wind up with zombie domains, still holding onto refs to the mem
event ring pages.
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Acked-by: Tim Deegan <tim@xen.org>
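For context, the failure mode can be illustrated with a small toy model.
This is not Xen code and every name in it is hypothetical; it only models
the point above: enabling a ring takes a reference on the ring page, and
unless something drops that reference again the domain can never be fully
freed and lingers as a zombie.

    #include <stdio.h>

    /* Toy model only -- hypothetical names, not the Xen implementation. */
    struct toy_page   { int refcount; };
    struct toy_domain { struct toy_page ring_page; int dying; };

    /* Enabling a ring takes a reference on the ring page... */
    static void toy_ring_enable(struct toy_domain *d)  { d->ring_page.refcount++; }

    /* ...and disabling it drops that reference again. */
    static void toy_ring_disable(struct toy_domain *d) { d->ring_page.refcount--; }

    /* The domain can only be freed once it is dying and no refs remain. */
    static int toy_domain_freeable(const struct toy_domain *d)
    {
        return d->dying && d->ring_page.refcount == 0;
    }

    int main(void)
    {
        struct toy_domain d = { { 0 }, 0 };

        toy_ring_enable(&d);
        d.dying = 1;

        /* Without an explicit cleanup step the ref is never dropped: zombie. */
        printf("before cleanup: freeable=%d\n", toy_domain_freeable(&d));

        toy_ring_disable(&d);   /* the role mem_event_cleanup() plays below */
        printf("after cleanup:  freeable=%d\n", toy_domain_freeable(&d));
        return 0;
    }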
diff -r 0b28eaa6422f -r a1ad4c2eccce xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -487,6 +487,17 @@ int do_mem_event_op(int op, uint32_t dom
     return ret;
 }
 
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d)
+{
+    if ( d->mem_event->paging.ring_page )
+        (void)mem_event_disable(d, &d->mem_event->paging);
+    if ( d->mem_event->access.ring_page )
+        (void)mem_event_disable(d, &d->mem_event->access);
+    if ( d->mem_event->share.ring_page )
+        (void)mem_event_disable(d, &d->mem_event->share);
+}
+
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                      XEN_GUEST_HANDLE(void) u_domctl)
 {
diff -r 0b28eaa6422f -r a1ad4c2eccce xen/common/domain.c
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -480,6 +480,9 @@ int domain_kill(struct domain *d)
             break;
         }
         d->is_dying = DOMDYING_dead;
+        /* Mem event cleanup has to go here because the rings
+         * have to be put before we call put_domain. */
+        mem_event_cleanup(d);
         put_domain(d);
         send_global_virq(VIRQ_DOM_EXC);
         /* fallthrough */
diff -r 0b28eaa6422f -r a1ad4c2eccce xen/include/asm-arm/mm.h
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -266,7 +266,8 @@ int get_page(struct page_info *page, st
     machine_to_phys_mapping[(mfn)] = (pfn);    \
     })
 
-#define put_gfn(d, g) ((void)0)
+static inline void put_gfn(struct domain *d, unsigned long gfn) {}
+static inline void mem_event_cleanup(struct domain *d) {}
 
 #define INVALID_MFN (~0UL)
diff -r 0b28eaa6422f -r a1ad4c2eccce xen/include/asm-ia64/mm.h
--- a/xen/include/asm-ia64/mm.h
+++ b/xen/include/asm-ia64/mm.h
@@ -551,7 +551,8 @@ extern u64 translate_domain_pte(u64 ptev
     gmfn_to_mfn_foreign((_d), (gpfn))
 
 #define get_gfn_untyped(d, gpfn) gmfn_to_mfn(d, gpfn)
-#define put_gfn(d, g) ((void)0)
+static inline void put_gfn(struct domain *d, unsigned long gfn) {}
+static inline void mem_event_cleanup(struct domain *d) {}
 
 #define __gpfn_invalid(_d, gpfn)                        \
     (lookup_domain_mpa((_d), ((gpfn)<<PAGE_SHIFT), NULL) == INVALID_MFN)
diff -r 0b28eaa6422f -r a1ad4c2eccce xen/include/asm-x86/mm.h
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -634,6 +634,12 @@ unsigned int domain_clamp_alloc_bitsize(
 unsigned long domain_get_maximum_gpfn(struct domain *d);
 
+#ifdef CONFIG_X86_64
+void mem_event_cleanup(struct domain *d);
+#else
+static inline void mem_event_cleanup(struct domain *d) {}
+#endif
+
 extern struct domain *dom_xen, *dom_io, *dom_cow; /* for vmcoreinfo */
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
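Two notes on the hunks above. First, a simplified skeleton of the ordering
in domain_kill() -- this is illustrative only, not the real function
(hypervisor-internal context assumed), and it relies on the point made in
the comment in the hunk: put_domain() may drop the final reference on the
domain, so the ring references must be gone by then.

    /* Illustrative skeleton only; the real code is in xen/common/domain.c. */
    static void kill_path_sketch(struct domain *d)
    {
        /* ... resources relinquished, d->is_dying set to DOMDYING_dead ... */
        mem_event_cleanup(d);           /* put the mem event ring pages first */
        put_domain(d);                  /* may drop the last domain reference */
        send_global_virq(VIRQ_DOM_EXC);
    }

Second, the empty static inline stubs added to the asm-arm, asm-ia64 and
32-bit asm-x86 headers let this common-code call site stay free of #ifdefs:
only x86-64 builds provide a real mem_event_cleanup().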