* [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable
@ 2011-09-08  9:57 Olaf Hering
  2011-09-08  9:57 ` [PATCH 1 of 2] mem_event: pass mem_event_domain pointer to mem_event functions Olaf Hering
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Olaf Hering @ 2011-09-08  9:57 UTC (permalink / raw)
  To: xen-devel


The following two patches allow the parallel use of memsharing, xenpaging and
xen-access by using an independent ring buffer for each feature.
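
As an illustration of the intended usage after this series, each helper
now enables only its own ring (a minimal sketch, not part of the
patches; the page setup done by xenpaging/xen-access and all error
handling are elided, and alloc_event_pages() is a hypothetical helper):

    #include <xenctrl.h>

    /* Sketch: two helpers attach to the same domain, each with its
     * own ring, using the per-feature enable calls added below. */
    static void attach_helpers(domid_t domain_id)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        void *pg_shared, *pg_ring, *ac_shared, *ac_ring;

        alloc_event_pages(&pg_shared, &pg_ring);  /* hypothetical */
        alloc_event_pages(&ac_shared, &ac_ring);  /* hypothetical */

        /* Independent rings: one crash no longer stalls the other. */
        xc_mem_paging_enable(xch, domain_id, pg_shared, pg_ring);
        xc_mem_access_enable(xch, domain_id, ac_shared, ac_ring);
    }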

Please review.

v2:
 - update mem_event_check_ring arguments, check domain rather than domain_id
 - check ring_full first because its value was just evaluated
 - check if ring buffer is initialized before calling
   mem_access_domctl/mem_paging_domctl

Olaf

 tools/libxc/Makefile                |    2 
 tools/libxc/xc_mem_access.c         |   21 ++-
 tools/libxc/xc_mem_event.c          |   15 --
 tools/libxc/xc_mem_paging.c         |   33 +++-
 tools/libxc/xc_memshr.c             |   16 +-
 tools/libxc/xenctrl.h               |    9 -
 tools/tests/xen-access/xen-access.c |    4 
 tools/xenpaging/xenpaging.c         |    4 
 xen/arch/ia64/xen/dom0_ops.c        |    2 
 xen/arch/x86/hvm/hvm.c              |    4 
 xen/arch/x86/mm/mem_event.c         |  242 +++++++++++++++++++-----------------
 xen/arch/x86/mm/mem_paging.c        |    4 
 xen/arch/x86/mm/mem_sharing.c       |   22 +--
 xen/arch/x86/mm/p2m.c               |   18 +-
 xen/include/asm-x86/mem_event.h     |    8 -
 xen/include/public/domctl.h         |   43 +++---
 xen/include/xen/sched.h             |    6 
 17 files changed, 250 insertions(+), 203 deletions(-)


* [PATCH 1 of 2] mem_event: pass mem_event_domain pointer to mem_event functions
  2011-09-08  9:57 [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable Olaf Hering
@ 2011-09-08  9:57 ` Olaf Hering
  2011-09-08  9:57 ` [PATCH 2 of 2] mem_event: use different ringbuffers for share, paging and access Olaf Hering
  2011-09-12 13:57 ` [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable Tim Deegan
  2 siblings, 0 replies; 6+ messages in thread
From: Olaf Hering @ 2011-09-08  9:57 UTC (permalink / raw)
  To: xen-devel

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1315475746 -7200
# Node ID 95b5a9b92e1634cec7ade3ca2a894037181e8193
# Parent  0268e73809532a4a3ca18a075efcee3c62caf458
mem_event: pass mem_event_domain pointer to mem_event functions

Pass a struct mem_event_domain pointer to the various mem_event
functions.  This will be used in a subsequent patch which creates
different ring buffers for the memshare, xenpaging and memaccess
functionality.

Remove the struct domain argument from some functions.
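
For reference, a sketch of the struct mem_event_domain fields these
functions now receive directly (field types inferred from their usage
in this patch; the authoritative definition is in
xen/include/xen/sched.h):

    struct mem_event_domain
    {
        spinlock_t ring_lock;              /* protects ring state below */
        void *shared_page;                 /* cast to mem_event_shared_page_t * */
        void *ring_page;                   /* backing page of the ring */
        mem_event_front_ring_t front_ring; /* producer view of the ring */
        int req_producers;                 /* outstanding request reservations */
        int xen_port;                      /* event channel to the helper */
    };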

v2:
  - pass struct domain instead of domain_id to mem_event_check_ring()
  - check ring_full first because its value was just evaluated

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 0268e7380953 -r 95b5a9b92e16 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4025,7 +4025,7 @@ static int hvm_memory_event_traps(long p
     if ( (p & HVMPME_onchangeonly) && (value == old) )
         return 1;
     
-    rc = mem_event_check_ring(d);
+    rc = mem_event_check_ring(d, &d->mem_event);
     if ( rc )
         return rc;
     
@@ -4048,7 +4048,7 @@ static int hvm_memory_event_traps(long p
         req.gla_valid = 1;
     }
     
-    mem_event_put_request(d, &req);      
+    mem_event_put_request(d, &d->mem_event, &req);
     
     return 1;
 }
diff -r 0268e7380953 -r 95b5a9b92e16 xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -33,21 +33,21 @@
 #define xen_rmb()  rmb()
 #define xen_wmb()  wmb()
 
-#define mem_event_ring_lock_init(_d)  spin_lock_init(&(_d)->mem_event.ring_lock)
-#define mem_event_ring_lock(_d)       spin_lock(&(_d)->mem_event.ring_lock)
-#define mem_event_ring_unlock(_d)     spin_unlock(&(_d)->mem_event.ring_lock)
+#define mem_event_ring_lock_init(_med)  spin_lock_init(&(_med)->ring_lock)
+#define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
+#define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
 
-static int mem_event_enable(struct domain *d, mfn_t ring_mfn, mfn_t shared_mfn)
+static int mem_event_enable(struct domain *d, struct mem_event_domain *med, mfn_t ring_mfn, mfn_t shared_mfn)
 {
     int rc;
 
     /* Map ring and shared pages */
-    d->mem_event.ring_page = map_domain_page(mfn_x(ring_mfn));
-    if ( d->mem_event.ring_page == NULL )
+    med->ring_page = map_domain_page(mfn_x(ring_mfn));
+    if ( med->ring_page == NULL )
         goto err;
 
-    d->mem_event.shared_page = map_domain_page(mfn_x(shared_mfn));
-    if ( d->mem_event.shared_page == NULL )
+    med->shared_page = map_domain_page(mfn_x(shared_mfn));
+    if ( med->shared_page == NULL )
         goto err_ring;
 
     /* Allocate event channel */
@@ -56,15 +56,15 @@ static int mem_event_enable(struct domai
     if ( rc < 0 )
         goto err_shared;
 
-    ((mem_event_shared_page_t *)d->mem_event.shared_page)->port = rc;
-    d->mem_event.xen_port = rc;
+    ((mem_event_shared_page_t *)med->shared_page)->port = rc;
+    med->xen_port = rc;
 
     /* Prepare ring buffer */
-    FRONT_RING_INIT(&d->mem_event.front_ring,
-                    (mem_event_sring_t *)d->mem_event.ring_page,
+    FRONT_RING_INIT(&med->front_ring,
+                    (mem_event_sring_t *)med->ring_page,
                     PAGE_SIZE);
 
-    mem_event_ring_lock_init(d);
+    mem_event_ring_lock_init(med);
 
     /* Wake any VCPUs paused for memory events */
     mem_event_unpause_vcpus(d);
@@ -72,34 +72,34 @@ static int mem_event_enable(struct domai
     return 0;
 
  err_shared:
-    unmap_domain_page(d->mem_event.shared_page);
-    d->mem_event.shared_page = NULL;
+    unmap_domain_page(med->shared_page);
+    med->shared_page = NULL;
  err_ring:
-    unmap_domain_page(d->mem_event.ring_page);
-    d->mem_event.ring_page = NULL;
+    unmap_domain_page(med->ring_page);
+    med->ring_page = NULL;
  err:
     return 1;
 }
 
-static int mem_event_disable(struct domain *d)
+static int mem_event_disable(struct mem_event_domain *med)
 {
-    unmap_domain_page(d->mem_event.ring_page);
-    d->mem_event.ring_page = NULL;
+    unmap_domain_page(med->ring_page);
+    med->ring_page = NULL;
 
-    unmap_domain_page(d->mem_event.shared_page);
-    d->mem_event.shared_page = NULL;
+    unmap_domain_page(med->shared_page);
+    med->shared_page = NULL;
 
     return 0;
 }
 
-void mem_event_put_request(struct domain *d, mem_event_request_t *req)
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
 {
     mem_event_front_ring_t *front_ring;
     RING_IDX req_prod;
 
-    mem_event_ring_lock(d);
+    mem_event_ring_lock(med);
 
-    front_ring = &d->mem_event.front_ring;
+    front_ring = &med->front_ring;
     req_prod = front_ring->req_prod_pvt;
 
     /* Copy request */
@@ -107,23 +107,23 @@ void mem_event_put_request(struct domain
     req_prod++;
 
     /* Update ring */
-    d->mem_event.req_producers--;
+    med->req_producers--;
     front_ring->req_prod_pvt = req_prod;
     RING_PUSH_REQUESTS(front_ring);
 
-    mem_event_ring_unlock(d);
+    mem_event_ring_unlock(med);
 
-    notify_via_xen_event_channel(d, d->mem_event.xen_port);
+    notify_via_xen_event_channel(d, med->xen_port);
 }
 
-void mem_event_get_response(struct domain *d, mem_event_response_t *rsp)
+void mem_event_get_response(struct mem_event_domain *med, mem_event_response_t *rsp)
 {
     mem_event_front_ring_t *front_ring;
     RING_IDX rsp_cons;
 
-    mem_event_ring_lock(d);
+    mem_event_ring_lock(med);
 
-    front_ring = &d->mem_event.front_ring;
+    front_ring = &med->front_ring;
     rsp_cons = front_ring->rsp_cons;
 
     /* Copy response */
@@ -134,7 +134,7 @@ void mem_event_get_response(struct domai
     front_ring->rsp_cons = rsp_cons;
     front_ring->sring->rsp_event = rsp_cons + 1;
 
-    mem_event_ring_unlock(d);
+    mem_event_ring_unlock(med);
 }
 
 void mem_event_unpause_vcpus(struct domain *d)
@@ -152,35 +152,35 @@ void mem_event_mark_and_pause(struct vcp
     vcpu_sleep_nosync(v);
 }
 
-void mem_event_put_req_producers(struct domain *d)
+void mem_event_put_req_producers(struct mem_event_domain *med)
 {
-    mem_event_ring_lock(d);
-    d->mem_event.req_producers--;
-    mem_event_ring_unlock(d);
+    mem_event_ring_lock(med);
+    med->req_producers--;
+    mem_event_ring_unlock(med);
 }
 
-int mem_event_check_ring(struct domain *d)
+int mem_event_check_ring(struct domain *d, struct mem_event_domain *med)
 {
     struct vcpu *curr = current;
     int free_requests;
     int ring_full = 1;
 
-    if ( !d->mem_event.ring_page )
+    if ( !med->ring_page )
         return -1;
 
-    mem_event_ring_lock(d);
+    mem_event_ring_lock(med);
 
-    free_requests = RING_FREE_REQUESTS(&d->mem_event.front_ring);
-    if ( d->mem_event.req_producers < free_requests )
+    free_requests = RING_FREE_REQUESTS(&med->front_ring);
+    if ( med->req_producers < free_requests )
     {
-        d->mem_event.req_producers++;
+        med->req_producers++;
         ring_full = 0;
     }
 
-    if ( (curr->domain->domain_id == d->domain_id) && ring_full )
+    if ( ring_full && (curr->domain == d) )
         mem_event_mark_and_pause(curr);
 
-    mem_event_ring_unlock(d);
+    mem_event_ring_unlock(med);
 
     return ring_full;
 }
@@ -230,6 +230,7 @@ int mem_event_domctl(struct domain *d, x
         {
             struct domain *dom_mem_event = current->domain;
             struct vcpu *v = current;
+            struct mem_event_domain *med = &d->mem_event;
             unsigned long ring_addr = mec->ring_addr;
             unsigned long shared_addr = mec->shared_addr;
             l1_pgentry_t l1e;
@@ -242,7 +243,7 @@ int mem_event_domctl(struct domain *d, x
              * the cache is in an undefined state and so is the guest
              */
             rc = -EBUSY;
-            if ( d->mem_event.ring_page )
+            if ( med->ring_page )
                 break;
 
             /* Currently only EPT is supported */
@@ -270,7 +271,7 @@ int mem_event_domctl(struct domain *d, x
                 break;
 
             rc = -EINVAL;
-            if ( mem_event_enable(d, ring_mfn, shared_mfn) != 0 )
+            if ( mem_event_enable(d, med, ring_mfn, shared_mfn) != 0 )
                 break;
 
             rc = 0;
@@ -279,7 +280,7 @@ int mem_event_domctl(struct domain *d, x
 
         case XEN_DOMCTL_MEM_EVENT_OP_DISABLE:
         {
-            rc = mem_event_disable(d);
+            rc = mem_event_disable(&d->mem_event);
         }
         break;
 
diff -r 0268e7380953 -r 95b5a9b92e16 xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -281,12 +281,12 @@ static struct page_info* mem_sharing_all
     vcpu_pause_nosync(v);
     req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
 
-    if(mem_event_check_ring(d)) return page;
+    if(mem_event_check_ring(d, &d->mem_event)) return page;
 
     req.gfn = gfn;
     req.p2mt = p2m_ram_shared;
     req.vcpu_id = v->vcpu_id;
-    mem_event_put_request(d, &req);
+    mem_event_put_request(d, &d->mem_event, &req);
 
     return page;
 }
@@ -301,7 +301,7 @@ int mem_sharing_sharing_resume(struct do
     mem_event_response_t rsp;
 
     /* Get request off the ring */
-    mem_event_get_response(d, &rsp);
+    mem_event_get_response(&d->mem_event, &rsp);
 
     /* Unpause domain/vcpu */
     if( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
diff -r 0268e7380953 -r 95b5a9b92e16 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -755,7 +755,7 @@ void p2m_mem_paging_drop_page(struct dom
     mem_event_request_t req;
 
     /* Check that there's space on the ring for this request */
-    if ( mem_event_check_ring(d) == 0)
+    if ( mem_event_check_ring(d, &d->mem_event) == 0)
     {
         /* Send release notification to pager */
         memset(&req, 0, sizeof(req));
@@ -763,7 +763,7 @@ void p2m_mem_paging_drop_page(struct dom
         req.gfn = gfn;
         req.vcpu_id = v->vcpu_id;
 
-        mem_event_put_request(d, &req);
+        mem_event_put_request(d, &d->mem_event, &req);
     }
 }
 
@@ -775,7 +775,7 @@ void p2m_mem_paging_populate(struct doma
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* Check that there's space on the ring for this request */
-    if ( mem_event_check_ring(d) )
+    if ( mem_event_check_ring(d, &d->mem_event) )
         return;
 
     memset(&req, 0, sizeof(req));
@@ -803,7 +803,7 @@ void p2m_mem_paging_populate(struct doma
     else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
     {
         /* gfn is already on its way back and vcpu is not paused */
-        mem_event_put_req_producers(d);
+        mem_event_put_req_producers(&d->mem_event);
         return;
     }
 
@@ -812,7 +812,7 @@ void p2m_mem_paging_populate(struct doma
     req.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
-    mem_event_put_request(d, &req);
+    mem_event_put_request(d, &d->mem_event, &req);
 }
 
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn)
@@ -842,7 +842,7 @@ void p2m_mem_paging_resume(struct domain
     mfn_t mfn;
 
     /* Pull the response off the ring */
-    mem_event_get_response(d, &rsp);
+    mem_event_get_response(&d->mem_event, &rsp);
 
     /* Fix p2m entry if the page was not dropped */
     if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
@@ -889,7 +889,7 @@ void p2m_mem_access_check(unsigned long 
     p2m_unlock(p2m);
 
     /* Otherwise, check if there is a memory event listener, and send the message along */
-    res = mem_event_check_ring(d);
+    res = mem_event_check_ring(d, &d->mem_event);
     if ( res < 0 ) 
     {
         /* No listener */
@@ -933,7 +933,7 @@ void p2m_mem_access_check(unsigned long 
     
     req.vcpu_id = v->vcpu_id;
 
-    mem_event_put_request(d, &req);   
+    mem_event_put_request(d, &d->mem_event, &req);
 
     /* VCPU paused, mem event request sent */
 }
@@ -943,7 +943,7 @@ void p2m_mem_access_resume(struct p2m_do
     struct domain *d = p2m->domain;
     mem_event_response_t rsp;
 
-    mem_event_get_response(d, &rsp);
+    mem_event_get_response(&d->mem_event, &rsp);
 
     /* Unpause domain */
     if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
diff -r 0268e7380953 -r 95b5a9b92e16 xen/include/asm-x86/mem_event.h
--- a/xen/include/asm-x86/mem_event.h
+++ b/xen/include/asm-x86/mem_event.h
@@ -26,10 +26,10 @@
 
 /* Pauses VCPU while marking pause flag for mem event */
 void mem_event_mark_and_pause(struct vcpu *v);
-int mem_event_check_ring(struct domain *d);
-void mem_event_put_req_producers(struct domain *d);
-void mem_event_put_request(struct domain *d, mem_event_request_t *req);
-void mem_event_get_response(struct domain *d, mem_event_response_t *rsp);
+int mem_event_check_ring(struct domain *d, struct mem_event_domain *med);
+void mem_event_put_req_producers(struct mem_event_domain *med);
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req);
+void mem_event_get_response(struct mem_event_domain *med, mem_event_response_t *rsp);
 void mem_event_unpause_vcpus(struct domain *d);
 
 int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,


* [PATCH 2 of 2] mem_event: use different ringbuffers for share, paging and access
  2011-09-08  9:57 [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable Olaf Hering
  2011-09-08  9:57 ` [PATCH 1 of 2] mem_event: pass mem_event_domain pointer to mem_event functions Olaf Hering
@ 2011-09-08  9:57 ` Olaf Hering
  2011-09-12 13:57 ` [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable Tim Deegan
  2 siblings, 0 replies; 6+ messages in thread
From: Olaf Hering @ 2011-09-08  9:57 UTC (permalink / raw)
  To: xen-devel

# HG changeset patch
# User Olaf Hering <olaf@aepfle.de>
# Date 1315475748 -7200
# Node ID ce8a9eaca04aad17dacd5a8850f4ed88f92bd37a
# Parent  95b5a9b92e1634cec7ade3ca2a894037181e8193
mem_event: use different ringbuffers for share, paging and access

Up to now a single ring buffer was used for mem_share, xenpaging and
xen-access.  Each helper would have had to cooperate and pull only its
own requests from the ring.  Unfortunately this was never implemented,
and even if it had been, the whole concept would be fragile: a crash or
early exit of one helper would stall the others.

What happened up to now is that running xenpaging and memory sharing
together would push memsharing requests into the buffer, which
xenpaging is not prepared to handle.

This patch creates an independent ring buffer for each of mem_share,
xenpaging and xen-access, and also adds new functions to enable
xenpaging and xen-access.  The xc_mem_event_enable/xc_mem_event_disable
functions are removed, and the various XEN_DOMCTL_MEM_EVENT_* macros
are cleaned up.  Since this removal changes the libxc API, the SONAME
will be changed too.
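
After the renumbering, the 'mode' field of the domctl selects the
feature (and thus the ring) and the 'op' field is scoped to that mode.
A minimal sketch using the constants from the domctl.h hunk below
(equivalent to what the new xc_mem_paging_enable() wrapper does):

    /* mode picks the paging ring; op 0 within that mode is ENABLE */
    xc_mem_event_control(xch, domain_id,
                         XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE, /* op */
                         XEN_DOMCTL_MEM_EVENT_OP_PAGING,        /* mode */
                         shared_page, ring_page, INVALID_MFN);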

v2:
 - check if ring buffer is initialized before calling
   mem_access_domctl/mem_paging_domctl

Signed-off-by: Olaf Hering <olaf@aepfle.de>

diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/libxc/Makefile
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR    = 4.0
+MAJOR    = 4.2
 MINOR    = 0
 
 CTRL_SRCS-y       :=
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/libxc/xc_mem_access.c
--- a/tools/libxc/xc_mem_access.c
+++ b/tools/libxc/xc_mem_access.c
@@ -24,12 +24,29 @@
 #include "xc_private.h"
 
 
+int xc_mem_access_enable(xc_interface *xch, domid_t domain_id,
+                        void *shared_page, void *ring_page)
+{
+    return xc_mem_event_control(xch, domain_id,
+                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE,
+                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS,
+                                shared_page, ring_page, INVALID_MFN);
+}
+
+int xc_mem_access_disable(xc_interface *xch, domid_t domain_id)
+{
+    return xc_mem_event_control(xch, domain_id,
+                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE,
+                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS,
+                                NULL, NULL, INVALID_MFN);
+}
+
 int xc_mem_access_resume(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
     return xc_mem_event_control(xch, domain_id,
                                 XEN_DOMCTL_MEM_EVENT_OP_ACCESS_RESUME,
-                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS, NULL, NULL,
-                                gfn);
+                                XEN_DOMCTL_MEM_EVENT_OP_ACCESS,
+                                NULL, NULL, gfn);
 }
 
 /*
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/libxc/xc_mem_event.c
--- a/tools/libxc/xc_mem_event.c
+++ b/tools/libxc/xc_mem_event.c
@@ -42,18 +42,3 @@ int xc_mem_event_control(xc_interface *x
     return do_domctl(xch, &domctl);
 }
 
-int xc_mem_event_enable(xc_interface *xch, domid_t domain_id,
-                        void *shared_page, void *ring_page)
-{
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_ENABLE, 0,
-                                shared_page, ring_page, INVALID_MFN);
-}
-
-int xc_mem_event_disable(xc_interface *xch, domid_t domain_id)
-{
-    return xc_mem_event_control(xch, domain_id,
-                                XEN_DOMCTL_MEM_EVENT_OP_DISABLE, 0,
-                                NULL, NULL, INVALID_MFN);
-}
-
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/libxc/xc_mem_paging.c
--- a/tools/libxc/xc_mem_paging.c
+++ b/tools/libxc/xc_mem_paging.c
@@ -24,36 +24,53 @@
 #include "xc_private.h"
 
 
+int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
+                        void *shared_page, void *ring_page)
+{
+    return xc_mem_event_control(xch, domain_id,
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE,
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+                                shared_page, ring_page, INVALID_MFN);
+}
+
+int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id)
+{
+    return xc_mem_event_control(xch, domain_id,
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE,
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+                                NULL, NULL, INVALID_MFN);
+}
+
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
     return xc_mem_event_control(xch, domain_id,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING_NOMINATE,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING, NULL, NULL,
-                                gfn);
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+                                NULL, NULL, gfn);
 }
 
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
     return xc_mem_event_control(xch, domain_id,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING, NULL, NULL,
-                                gfn);
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+                                NULL, NULL, gfn);
 }
 
 int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
     return xc_mem_event_control(xch, domain_id,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING, NULL, NULL,
-                                gfn);
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+                                NULL, NULL, gfn);
 }
 
 int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id, unsigned long gfn)
 {
     return xc_mem_event_control(xch, domain_id,
                                 XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME,
-                                XEN_DOMCTL_MEM_EVENT_OP_PAGING, NULL, NULL,
-                                gfn);
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING,
+                                NULL, NULL, gfn);
 }
 
 
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/libxc/xc_memshr.c
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -36,7 +36,7 @@ int xc_memshr_control(xc_interface *xch,
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_CONTROL;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_CONTROL;
     op->u.enable = enable;
 
     return do_domctl(xch, &domctl);
@@ -55,7 +55,7 @@ int xc_memshr_nominate_gfn(xc_interface 
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_NOMINATE_GFN;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_NOMINATE_GFN;
     op->u.nominate.u.gfn = gfn;
 
     ret = do_domctl(xch, &domctl);
@@ -77,7 +77,7 @@ int xc_memshr_nominate_gref(xc_interface
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_NOMINATE_GREF;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_NOMINATE_GREF;
     op->u.nominate.u.grant_ref = gref;
 
     ret = do_domctl(xch, &domctl);
@@ -97,7 +97,7 @@ int xc_memshr_share(xc_interface *xch,
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = 0;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_SHARE;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_SHARE;
     op->u.share.source_handle = source_handle;
     op->u.share.client_handle = client_handle;
 
@@ -114,7 +114,7 @@ int xc_memshr_domain_resume(xc_interface
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_RESUME;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_RESUME;
 
     return do_domctl(xch, &domctl);
 }
@@ -130,7 +130,7 @@ int xc_memshr_debug_gfn(xc_interface *xc
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_DEBUG_GFN;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_GFN;
     op->u.debug.u.gfn = gfn;
 
     return do_domctl(xch, &domctl);
@@ -147,7 +147,7 @@ int xc_memshr_debug_mfn(xc_interface *xc
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_DEBUG_MFN;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_MFN;
     op->u.debug.u.mfn = mfn;
 
     return do_domctl(xch, &domctl);
@@ -164,7 +164,7 @@ int xc_memshr_debug_gref(xc_interface *x
     domctl.interface_version = XEN_DOMCTL_INTERFACE_VERSION;
     domctl.domain = (domid_t)domid;
     op = &(domctl.u.mem_sharing_op);
-    op->op = XEN_DOMCTL_MEM_SHARING_OP_DEBUG_GREF;
+    op->op = XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_GREF;
     op->u.debug.u.gref = gref;
 
     return do_domctl(xch, &domctl);
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1758,16 +1758,19 @@ int xc_mem_event_control(xc_interface *x
                          unsigned int mode, void *shared_page,
                           void *ring_page, unsigned long gfn);
 
-int xc_mem_event_enable(xc_interface *xch, domid_t domain_id,
+int xc_mem_paging_enable(xc_interface *xch, domid_t domain_id,
                         void *shared_page, void *ring_page);
-int xc_mem_event_disable(xc_interface *xch, domid_t domain_id);
-
+int xc_mem_paging_disable(xc_interface *xch, domid_t domain_id);
 int xc_mem_paging_nominate(xc_interface *xch, domid_t domain_id,
                            unsigned long gfn);
 int xc_mem_paging_evict(xc_interface *xch, domid_t domain_id, unsigned long gfn);
 int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
 int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id,
                          unsigned long gfn);
+
+int xc_mem_access_enable(xc_interface *xch, domid_t domain_id,
+                        void *shared_page, void *ring_page);
+int xc_mem_access_disable(xc_interface *xch, domid_t domain_id);
 int xc_mem_access_resume(xc_interface *xch, domid_t domain_id,
                          unsigned long gfn);
 
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/tests/xen-access/xen-access.c
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -241,7 +241,7 @@ xenaccess_t *xenaccess_init(xc_interface
     mem_event_ring_lock_init(&xenaccess->mem_event);
 
     /* Initialise Xen */
-    rc = xc_mem_event_enable(xenaccess->xc_handle, xenaccess->mem_event.domain_id,
+    rc = xc_mem_access_enable(xenaccess->xc_handle, xenaccess->mem_event.domain_id,
                              xenaccess->mem_event.shared_page,
                              xenaccess->mem_event.ring_page);
     if ( rc != 0 )
@@ -351,7 +351,7 @@ int xenaccess_teardown(xc_interface *xch
         return 0;
 
     /* Tear down domain xenaccess in Xen */
-    rc = xc_mem_event_disable(xenaccess->xc_handle, xenaccess->mem_event.domain_id);
+    rc = xc_mem_access_disable(xenaccess->xc_handle, xenaccess->mem_event.domain_id);
     if ( rc != 0 )
     {
         ERROR("Error tearing down domain xenaccess in xen");
diff -r 95b5a9b92e16 -r ce8a9eaca04a tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -234,7 +234,7 @@ static xenpaging_t *xenpaging_init(domid
                    PAGE_SIZE);
     
     /* Initialise Xen */
-    rc = xc_mem_event_enable(xch, paging->mem_event.domain_id,
+    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
                              paging->mem_event.shared_page,
                              paging->mem_event.ring_page);
     if ( rc != 0 )
@@ -353,7 +353,7 @@ static int xenpaging_teardown(xenpaging_
     xch = paging->xc_handle;
     paging->xc_handle = NULL;
     /* Tear down domain paging in Xen */
-    rc = xc_mem_event_disable(xch, paging->mem_event.domain_id);
+    rc = xc_mem_paging_disable(xch, paging->mem_event.domain_id);
     if ( rc != 0 )
     {
         ERROR("Error tearing down domain paging in xen");
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/arch/ia64/xen/dom0_ops.c
--- a/xen/arch/ia64/xen/dom0_ops.c
+++ b/xen/arch/ia64/xen/dom0_ops.c
@@ -688,7 +688,7 @@ long arch_do_domctl(xen_domctl_t *op, XE
 
         switch(mec->op)
         {
-            case XEN_DOMCTL_MEM_SHARING_OP_CONTROL:
+            case XEN_DOMCTL_MEM_EVENT_OP_SHARING_CONTROL:
             {
                 if (mec->u.enable) {
                     ret = -EINVAL; /* not implemented */
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4025,7 +4025,7 @@ static int hvm_memory_event_traps(long p
     if ( (p & HVMPME_onchangeonly) && (value == old) )
         return 1;
     
-    rc = mem_event_check_ring(d, &d->mem_event);
+    rc = mem_event_check_ring(d, &d->mem_access);
     if ( rc )
         return rc;
     
@@ -4048,7 +4048,7 @@ static int hvm_memory_event_traps(long p
         req.gla_valid = 1;
     }
     
-    mem_event_put_request(d, &d->mem_event, &req);
+    mem_event_put_request(d, &d->mem_access, &req);
     
     return 1;
 }
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/arch/x86/mm/mem_event.c
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -37,24 +37,52 @@
 #define mem_event_ring_lock(_med)       spin_lock(&(_med)->ring_lock)
 #define mem_event_ring_unlock(_med)     spin_unlock(&(_med)->ring_lock)
 
-static int mem_event_enable(struct domain *d, struct mem_event_domain *med, mfn_t ring_mfn, mfn_t shared_mfn)
+static int mem_event_enable(struct domain *d,
+                            xen_domctl_mem_event_op_t *mec,
+                            struct mem_event_domain *med)
 {
     int rc;
+    struct domain *dom_mem_event = current->domain;
+    struct vcpu *v = current;
+    unsigned long ring_addr = mec->ring_addr;
+    unsigned long shared_addr = mec->shared_addr;
+    l1_pgentry_t l1e;
+    unsigned long gfn;
+    p2m_type_t p2mt;
+    mfn_t ring_mfn;
+    mfn_t shared_mfn;
+
+    /* Only one helper at a time. If the helper crashed,
+     * the ring is in an undefined state and so is the guest.
+     */
+    if ( med->ring_page )
+        return -EBUSY;
+
+    /* Get MFN of ring page */
+    guest_get_eff_l1e(v, ring_addr, &l1e);
+    gfn = l1e_get_pfn(l1e);
+    ring_mfn = gfn_to_mfn(dom_mem_event, gfn, &p2mt);
+
+    if ( unlikely(!mfn_valid(mfn_x(ring_mfn))) )
+        return -EINVAL;
+
+    /* Get MFN of shared page */
+    guest_get_eff_l1e(v, shared_addr, &l1e);
+    gfn = l1e_get_pfn(l1e);
+    shared_mfn = gfn_to_mfn(dom_mem_event, gfn, &p2mt);
+
+    if ( unlikely(!mfn_valid(mfn_x(shared_mfn))) )
+        return -EINVAL;
 
     /* Map ring and shared pages */
     med->ring_page = map_domain_page(mfn_x(ring_mfn));
-    if ( med->ring_page == NULL )
-        goto err;
-
     med->shared_page = map_domain_page(mfn_x(shared_mfn));
-    if ( med->shared_page == NULL )
-        goto err_ring;
 
     /* Allocate event channel */
     rc = alloc_unbound_xen_event_channel(d->vcpu[0],
                                          current->domain->domain_id);
     if ( rc < 0 )
-        goto err_shared;
+        goto err;
 
     ((mem_event_shared_page_t *)med->shared_page)->port = rc;
     med->xen_port = rc;
@@ -71,14 +99,14 @@ static int mem_event_enable(struct domai
 
     return 0;
 
- err_shared:
+ err:
     unmap_domain_page(med->shared_page);
     med->shared_page = NULL;
- err_ring:
+
     unmap_domain_page(med->ring_page);
     med->ring_page = NULL;
- err:
-    return 1;
+
+    return rc;
 }
 
 static int mem_event_disable(struct mem_event_domain *med)
@@ -220,86 +248,79 @@ int mem_event_domctl(struct domain *d, x
 
     rc = -ENOSYS;
 
-    switch ( mec-> mode ) 
+    switch ( mec->mode )
     {
-    case 0:
+    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
     {
+        struct mem_event_domain *med = &d->mem_paging;
+        rc = -ENODEV;
+        /* Only HAP is supported */
+        if ( !hap_enabled(d) )
+            break;
+
+        /* Currently only EPT is supported */
+        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
+            break;
+
         switch( mec->op )
         {
-        case XEN_DOMCTL_MEM_EVENT_OP_ENABLE:
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
         {
-            struct domain *dom_mem_event = current->domain;
-            struct vcpu *v = current;
-            struct mem_event_domain *med = &d->mem_event;
-            unsigned long ring_addr = mec->ring_addr;
-            unsigned long shared_addr = mec->shared_addr;
-            l1_pgentry_t l1e;
-            unsigned long gfn;
-            p2m_type_t p2mt;
-            mfn_t ring_mfn;
-            mfn_t shared_mfn;
-
-            /* Only one xenpaging at a time. If xenpaging crashed,
-             * the cache is in an undefined state and so is the guest
-             */
-            rc = -EBUSY;
-            if ( med->ring_page )
-                break;
-
-            /* Currently only EPT is supported */
-            rc = -ENODEV;
-            if ( !(hap_enabled(d) &&
-                  (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)) )
-                break;
-
-            /* Get MFN of ring page */
-            guest_get_eff_l1e(v, ring_addr, &l1e);
-            gfn = l1e_get_pfn(l1e);
-            ring_mfn = gfn_to_mfn(dom_mem_event, gfn, &p2mt);
-
-            rc = -EINVAL;
-            if ( unlikely(!mfn_valid(mfn_x(ring_mfn))) )
-                break;
-
-            /* Get MFN of shared page */
-            guest_get_eff_l1e(v, shared_addr, &l1e);
-            gfn = l1e_get_pfn(l1e);
-            shared_mfn = gfn_to_mfn(dom_mem_event, gfn, &p2mt);
-
-            rc = -EINVAL;
-            if ( unlikely(!mfn_valid(mfn_x(shared_mfn))) )
-                break;
-
-            rc = -EINVAL;
-            if ( mem_event_enable(d, med, ring_mfn, shared_mfn) != 0 )
-                break;
-
-            rc = 0;
+            rc = mem_event_enable(d, mec, med);
         }
         break;
 
-        case XEN_DOMCTL_MEM_EVENT_OP_DISABLE:
+        case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
         {
-            rc = mem_event_disable(&d->mem_event);
+            rc = mem_event_disable(med);
         }
         break;
 
         default:
-            rc = -ENOSYS;
-            break;
+        {
+            if ( med->ring_page )
+                rc = mem_paging_domctl(d, mec, u_domctl);
         }
         break;
+        }
     }
-    case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
-    {
-        rc = mem_paging_domctl(d, mec, u_domctl);
-        break;
-    }
+    break;
+
     case XEN_DOMCTL_MEM_EVENT_OP_ACCESS: 
     {
-        rc = mem_access_domctl(d, mec, u_domctl);
+        struct mem_event_domain *med = &d->mem_access;
+        rc = -ENODEV;
+        /* Only HAP is supported */
+        if ( !hap_enabled(d) )
+            break;
+
+        /* Currently only EPT is supported */
+        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
+            break;
+
+        switch( mec->op )
+        {
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
+        {
+            rc = mem_event_enable(d, mec, med);
+        }
         break;
+
+        case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+        {
+            rc = mem_event_disable(&d->mem_access);
+        }
+        break;
+
+        default:
+        {
+            if ( med->ring_page )
+                rc = mem_access_domctl(d, mec, u_domctl);
+        }
+        break;
+        }
     }
+    break;
     }
 
     return rc;
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/arch/x86/mm/mem_paging.c
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -28,10 +28,6 @@
 int mem_paging_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
                       XEN_GUEST_HANDLE(void) u_domctl)
 {
-    /* Only HAP is supported */
-    if ( !hap_enabled(d) )
-         return -ENODEV;
-
     switch( mec->op )
     {
     case XEN_DOMCTL_MEM_EVENT_OP_PAGING_NOMINATE:
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -281,12 +281,12 @@ static struct page_info* mem_sharing_all
     vcpu_pause_nosync(v);
     req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
 
-    if(mem_event_check_ring(d, &d->mem_event)) return page;
+    if(mem_event_check_ring(d, &d->mem_share)) return page;
 
     req.gfn = gfn;
     req.p2mt = p2m_ram_shared;
     req.vcpu_id = v->vcpu_id;
-    mem_event_put_request(d, &d->mem_event, &req);
+    mem_event_put_request(d, &d->mem_share, &req);
 
     return page;
 }
@@ -301,7 +301,7 @@ int mem_sharing_sharing_resume(struct do
     mem_event_response_t rsp;
 
     /* Get request off the ring */
-    mem_event_get_response(&d->mem_event, &rsp);
+    mem_event_get_response(&d->mem_share, &rsp);
 
     /* Unpause domain/vcpu */
     if( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
@@ -697,14 +697,14 @@ int mem_sharing_domctl(struct domain *d,
 
     switch(mec->op)
     {
-        case XEN_DOMCTL_MEM_SHARING_OP_CONTROL:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_CONTROL:
         {
             d->arch.hvm_domain.mem_sharing_enabled = mec->u.enable;
             rc = 0;
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_NOMINATE_GFN:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_NOMINATE_GFN:
         {
             unsigned long gfn = mec->u.nominate.u.gfn;
             shr_handle_t handle;
@@ -715,7 +715,7 @@ int mem_sharing_domctl(struct domain *d,
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_NOMINATE_GREF:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_NOMINATE_GREF:
         {
             grant_ref_t gref = mec->u.nominate.u.grant_ref;
             unsigned long gfn;
@@ -730,7 +730,7 @@ int mem_sharing_domctl(struct domain *d,
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_SHARE:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_SHARE:
         {
             shr_handle_t sh = mec->u.share.source_handle;
             shr_handle_t ch = mec->u.share.client_handle;
@@ -738,7 +738,7 @@ int mem_sharing_domctl(struct domain *d,
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_RESUME:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_RESUME:
         {
             if(!mem_sharing_enabled(d))
                 return -EINVAL;
@@ -746,21 +746,21 @@ int mem_sharing_domctl(struct domain *d,
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_DEBUG_GFN:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_GFN:
         {
             unsigned long gfn = mec->u.debug.u.gfn;
             rc = mem_sharing_debug_gfn(d, gfn);
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_DEBUG_MFN:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_MFN:
         {
             unsigned long mfn = mec->u.debug.u.mfn;
             rc = mem_sharing_debug_mfn(mfn);
         }
         break;
 
-        case XEN_DOMCTL_MEM_SHARING_OP_DEBUG_GREF:
+        case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_GREF:
         {
             grant_ref_t gref = mec->u.debug.u.gref;
             rc = mem_sharing_debug_gref(d, gref);
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -755,7 +755,7 @@ void p2m_mem_paging_drop_page(struct dom
     mem_event_request_t req;
 
     /* Check that there's space on the ring for this request */
-    if ( mem_event_check_ring(d, &d->mem_event) == 0)
+    if ( mem_event_check_ring(d, &d->mem_paging) == 0)
     {
         /* Send release notification to pager */
         memset(&req, 0, sizeof(req));
@@ -763,7 +763,7 @@ void p2m_mem_paging_drop_page(struct dom
         req.gfn = gfn;
         req.vcpu_id = v->vcpu_id;
 
-        mem_event_put_request(d, &d->mem_event, &req);
+        mem_event_put_request(d, &d->mem_paging, &req);
     }
 }
 
@@ -775,7 +775,7 @@ void p2m_mem_paging_populate(struct doma
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /* Check that there's space on the ring for this request */
-    if ( mem_event_check_ring(d, &d->mem_event) )
+    if ( mem_event_check_ring(d, &d->mem_paging) )
         return;
 
     memset(&req, 0, sizeof(req));
@@ -803,7 +803,7 @@ void p2m_mem_paging_populate(struct doma
     else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
     {
         /* gfn is already on its way back and vcpu is not paused */
-        mem_event_put_req_producers(&d->mem_event);
+        mem_event_put_req_producers(&d->mem_paging);
         return;
     }
 
@@ -812,7 +812,7 @@ void p2m_mem_paging_populate(struct doma
     req.p2mt = p2mt;
     req.vcpu_id = v->vcpu_id;
 
-    mem_event_put_request(d, &d->mem_event, &req);
+    mem_event_put_request(d, &d->mem_paging, &req);
 }
 
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn)
@@ -842,7 +842,7 @@ void p2m_mem_paging_resume(struct domain
     mfn_t mfn;
 
     /* Pull the response off the ring */
-    mem_event_get_response(&d->mem_event, &rsp);
+    mem_event_get_response(&d->mem_paging, &rsp);
 
     /* Fix p2m entry if the page was not dropped */
     if ( !(rsp.flags & MEM_EVENT_FLAG_DROP_PAGE) )
@@ -889,7 +889,7 @@ void p2m_mem_access_check(unsigned long 
     p2m_unlock(p2m);
 
     /* Otherwise, check if there is a memory event listener, and send the message along */
-    res = mem_event_check_ring(d, &d->mem_event);
+    res = mem_event_check_ring(d, &d->mem_access);
     if ( res < 0 ) 
     {
         /* No listener */
@@ -933,7 +933,7 @@ void p2m_mem_access_check(unsigned long 
     
     req.vcpu_id = v->vcpu_id;
 
-    mem_event_put_request(d, &d->mem_event, &req);
+    mem_event_put_request(d, &d->mem_access, &req);
 
     /* VCPU paused, mem event request sent */
 }
@@ -943,7 +943,7 @@ void p2m_mem_access_resume(struct p2m_do
     struct domain *d = p2m->domain;
     mem_event_response_t rsp;
 
-    mem_event_get_response(&d->mem_event, &rsp);
+    mem_event_get_response(&d->mem_access, &rsp);
 
     /* Unpause domain */
     if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/include/public/domctl.h
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -708,20 +708,18 @@ struct xen_domctl_gdbsx_domstatus {
 
 /* XEN_DOMCTL_mem_event_op */
 
-/* Add and remove memory handlers */
-#define XEN_DOMCTL_MEM_EVENT_OP_ENABLE     0
-#define XEN_DOMCTL_MEM_EVENT_OP_DISABLE    1
-
 /*
+* Domain memory paging
  * Page memory in and out. 
  */
 #define XEN_DOMCTL_MEM_EVENT_OP_PAGING            1
 
-/* Domain memory paging */
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_NOMINATE   0
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT      1
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP       2
-#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME     3
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE     0
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE    1
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_NOMINATE   2
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT      3
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP       4
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME     5
 
 /*
  * Access permissions.
@@ -734,11 +732,14 @@ struct xen_domctl_gdbsx_domstatus {
  * ACCESS_RESUME mode for the following domctl.
  */
 #define XEN_DOMCTL_MEM_EVENT_OP_ACCESS            2
-#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_RESUME     0 
+
+#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE     0
+#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE    1
+#define XEN_DOMCTL_MEM_EVENT_OP_ACCESS_RESUME     2
 
 struct xen_domctl_mem_event_op {
-    uint32_t       op;           /* XEN_DOMCTL_MEM_EVENT_OP_* */
-    uint32_t       mode;         /* XEN_DOMCTL_MEM_EVENT_ENABLE_* */
+    uint32_t       op;           /* XEN_DOMCTL_MEM_EVENT_OP_*_* */
+    uint32_t       mode;         /* XEN_DOMCTL_MEM_EVENT_OP_* */
 
     /* OP_ENABLE */
     uint64_aligned_t shared_addr;  /* IN:  Virtual address of shared page */
@@ -755,14 +756,16 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_mem_e
  */
 /* XEN_DOMCTL_mem_sharing_op */
 
-#define XEN_DOMCTL_MEM_SHARING_OP_CONTROL        0
-#define XEN_DOMCTL_MEM_SHARING_OP_NOMINATE_GFN   1
-#define XEN_DOMCTL_MEM_SHARING_OP_NOMINATE_GREF  2
-#define XEN_DOMCTL_MEM_SHARING_OP_SHARE          3
-#define XEN_DOMCTL_MEM_SHARING_OP_RESUME         4
-#define XEN_DOMCTL_MEM_SHARING_OP_DEBUG_GFN      5
-#define XEN_DOMCTL_MEM_SHARING_OP_DEBUG_MFN      6
-#define XEN_DOMCTL_MEM_SHARING_OP_DEBUG_GREF     7
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING                3
+
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_CONTROL        0
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_NOMINATE_GFN   1
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_NOMINATE_GREF  2
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_SHARE          3
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_RESUME         4
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_GFN      5
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_MFN      6
+#define XEN_DOMCTL_MEM_EVENT_OP_SHARING_DEBUG_GREF     7
 
 #define XEN_DOMCTL_MEM_SHARING_S_HANDLE_INVALID  (-10)
 #define XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID  (-9)
diff -r 95b5a9b92e16 -r ce8a9eaca04a xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -317,8 +317,12 @@ struct domain
     /* Non-migratable and non-restoreable? */
     bool_t disable_migrate;
 
+    /* Memory sharing support */
+    struct mem_event_domain mem_share;
     /* Memory paging support */
-    struct mem_event_domain mem_event;
+    struct mem_event_domain mem_paging;
+    /* Memory access support */
+    struct mem_event_domain mem_access;
 
     /* Currently computed from union of all vcpu cpu-affinity masks. */
     nodemask_t node_affinity;


* Re: [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable
  2011-09-08  9:57 [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable Olaf Hering
  2011-09-08  9:57 ` [PATCH 1 of 2] mem_event: pass mem_event_domain pointer to mem_event functions Olaf Hering
  2011-09-08  9:57 ` [PATCH 2 of 2] mem_event: use different ringbuffers for share, paging and access Olaf Hering
@ 2011-09-12 13:57 ` Tim Deegan
  2011-09-15 10:21   ` Ian Jackson
  2 siblings, 1 reply; 6+ messages in thread
From: Tim Deegan @ 2011-09-12 13:57 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel

At 11:57 +0200 on 08 Sep (1315483028), Olaf Hering wrote:
> 
> The following two patches allow the parallel use of memsharing, xenpaging and
> xen-access by using an independent ring buffer for each feature.
> 
> Please review.

As I said, the Xen side looks fine but I'd like the opinion of the tools
maintainers about the API changes before I apply anything.  Maybe this
is something that can happen at the hackathon.

Cheers,

Tim.


* Re: [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable
  2011-09-12 13:57 ` [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable Tim Deegan
@ 2011-09-15 10:21   ` Ian Jackson
  2011-09-16 11:20     ` Tim Deegan
  0 siblings, 1 reply; 6+ messages in thread
From: Ian Jackson @ 2011-09-15 10:21 UTC (permalink / raw)
  To: Tim Deegan; +Cc: Olaf Hering, xen-devel

Tim Deegan writes ("Re: [Xen-devel] [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable"):
> At 11:57 +0200 on 08 Sep (1315483028), Olaf Hering wrote:
> > The following two patches allow the parallel use of memsharing,
> > xenpaging and xen-access by using an independent ring buffer for
> > each feature.
> > 
> > Please review.
> 
> As I said, the Xen side looks fine but I'd like the opinion of the tools
> maintainers about the API changes before I apply anything.  Maybe this
> is something that can happen at the hackathon.

From the tools point of view:

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Tim, I wasn't sure if your message was an ack from the hypervisor
point of view, so I haven't actually applied these patches.

Ian.


* Re: [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable
  2011-09-15 10:21   ` Ian Jackson
@ 2011-09-16 11:20     ` Tim Deegan
  0 siblings, 0 replies; 6+ messages in thread
From: Tim Deegan @ 2011-09-16 11:20 UTC (permalink / raw)
  To: Ian Jackson; +Cc: Olaf Hering, xen-devel

At 11:21 +0100 on 15 Sep (1316085665), Ian Jackson wrote:
> Tim Deegan writes ("Re: [Xen-devel] [PATCH 0 of 2] v2: memshare/xenpaging/xen-access fixes for xen-unstable"):
> > At 11:57 +0200 on 08 Sep (1315483028), Olaf Hering wrote:
> > > The following two patches allow the parallel use of memsharing,
> > > xenpaging and xen-access by using an independent ring buffer for
> > > each feature.
> > > 
> > > Please review.
> > 
> > As I said, the Xen side looks fine but I'd like the opinion of the tools
> > maintainers about the API changes before I apply anything.  Maybe this
> > is something that can happen at the hackathon.
> 
> >From the tools point of view:
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Tim, I wasn't sure if your message was an ack from the hypervisor
> point of view, so I haven't actually applied these patches.

OK, I've applied them now.  

Cheers,

Tim.

