From: Andres Lagar-Cavilla <andres@lagarcavilla.org>
To: xen-devel@lists.xensource.com
Cc: ian.campbell@citrix.com, andres@gridcentric.ca, tim@xen.org,
JBeulich@suse.com, ian.jackson@citrix.com, adin@gridcentric.ca
Subject: [PATCH 2 of 8] x86/hvm: refactor calls to prepare and tear down a helper ring
Date: Tue, 06 Mar 2012 18:50:24 -0500 [thread overview]
Message-ID: <ede46d85792a85eda5e7.1331077824@xdev.gridcentric.ca> (raw)
In-Reply-To: <patchbomb.1331077822@xdev.gridcentric.ca>
xen/arch/x86/hvm/hvm.c | 48 ++++++++++++++++++++++++++++++++----------
xen/include/asm-x86/hvm/hvm.h | 7 ++++++
2 files changed, 43 insertions(+), 12 deletions(-)
These are currently used for the rings connecting Xen with qemu. Refactor them
so that the same code can later be used for mem event rings.
Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Acked-by: Tim Deegan <tim@xen.org>
diff -r e40fd5b939a8 -r ede46d85792a xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -333,6 +333,19 @@ static void hvm_init_ioreq_page(
domain_pause(d);
}
+void destroy_ring_for_helper(
+ void **_va, struct page_info *page)
+{
+ void *va = *_va;
+
+ if ( va != NULL )
+ {
+ unmap_domain_page_global(va);
+ put_page_and_type(page);
+ *_va = NULL;
+ }
+}
+
static void hvm_destroy_ioreq_page(
struct domain *d, struct hvm_ioreq_page *iorp)
{
@@ -340,18 +353,14 @@ static void hvm_destroy_ioreq_page(
ASSERT(d->is_dying);
- if ( iorp->va != NULL )
- {
- unmap_domain_page_global(iorp->va);
- put_page_and_type(iorp->page);
- iorp->va = NULL;
- }
+ destroy_ring_for_helper(&iorp->va, iorp->page);
spin_unlock(&iorp->lock);
}
-static int hvm_set_ioreq_page(
- struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
+int prepare_ring_for_helper(
+ struct domain *d, unsigned long gmfn, struct page_info **_page,
+ void **_va)
{
struct page_info *page;
p2m_type_t p2mt;
@@ -392,14 +401,30 @@ static int hvm_set_ioreq_page(
return -ENOMEM;
}
+ *_va = va;
+ *_page = page;
+
+ put_gfn(d, gmfn);
+
+ return 0;
+}
+
+static int hvm_set_ioreq_page(
+ struct domain *d, struct hvm_ioreq_page *iorp, unsigned long gmfn)
+{
+ struct page_info *page;
+ void *va;
+ int rc;
+
+ if ( (rc = prepare_ring_for_helper(d, gmfn, &page, &va)) )
+ return rc;
+
spin_lock(&iorp->lock);
if ( (iorp->va != NULL) || d->is_dying )
{
+ destroy_ring_for_helper(&iorp->va, iorp->page);
spin_unlock(&iorp->lock);
- unmap_domain_page_global(va);
- put_page_and_type(mfn_to_page(mfn));
- put_gfn(d, gmfn);
return -EINVAL;
}
@@ -407,7 +432,6 @@ static int hvm_set_ioreq_page(
iorp->page = page;
spin_unlock(&iorp->lock);
- put_gfn(d, gmfn);
domain_unpause(d);
diff -r e40fd5b939a8 -r ede46d85792a xen/include/asm-x86/hvm/hvm.h
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -26,6 +26,7 @@
#include <asm/hvm/asid.h>
#include <public/domctl.h>
#include <public/hvm/save.h>
+#include <asm/mm.h>
/* Interrupt acknowledgement sources. */
enum hvm_intsrc {
@@ -194,6 +195,12 @@ int hvm_vcpu_cacheattr_init(struct vcpu
void hvm_vcpu_cacheattr_destroy(struct vcpu *v);
void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip);
+/* Prepare/destroy a ring for a dom0 helper. The helper will talk
+ * with Xen on behalf of this hvm domain. */
+int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
+ struct page_info **_page, void **_va);
+void destroy_ring_for_helper(void **_va, struct page_info *page);
+
bool_t hvm_send_assist_req(struct vcpu *v);
void hvm_set_guest_tsc(struct vcpu *v, u64 guest_tsc);