* [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common.
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
In preparation for adding ARM LPAE mem_event support, relocate mem_access,
mem_event and their auxiliary functions into common Xen code.
This patch makes no functional changes on the x86 side. For ARM, the mem_event
and mem_access functions are defined only as placeholder stubs; they are
enabled later in the series.
Edits that are only header path adjustments:
xen/arch/x86/domctl.c
xen/arch/x86/mm/hap/nested_ept.c
xen/arch/x86/mm/hap/nested_hap.c
xen/arch/x86/mm/mem_paging.c
xen/arch/x86/mm/mem_sharing.c
xen/arch/x86/mm/p2m-pod.c
xen/arch/x86/mm/p2m-pt.c
xen/arch/x86/mm/p2m.c
xen/arch/x86/x86_64/compat/mm.c
xen/arch/x86/x86_64/mm.c
Makefile adjustments for new/removed code:
xen/common/Makefile
xen/arch/x86/mm/Makefile
Relocated prepare_ring_for_helper and destroy_ring_for_helper functions:
xen/include/xen/mm.h
xen/common/memory.c
xen/include/asm-x86/hvm/hvm.h
xen/arch/x86/hvm/hvm.c
Code movement of mem_event and mem_access, with the ifdef wrappers required
so that the relocated code is only compiled with CONFIG_X86:
xen/arch/x86/mm/mem_access.c -> xen/common/mem_access.c
xen/arch/x86/mm/mem_event.c -> xen/common/mem_event.c
xen/include/asm-x86/mem_access.h -> xen/include/xen/mem_access.h
xen/include/asm-x86/mem_event.h -> xen/include/xen/mem_event.h
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Update MAINTAINERS.
More descriptive commit message to aid in the review process.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
MAINTAINERS | 6 +
xen/arch/x86/domctl.c | 2 +-
xen/arch/x86/hvm/hvm.c | 61 +---
xen/arch/x86/mm/Makefile | 2 -
xen/arch/x86/mm/hap/nested_ept.c | 2 +-
xen/arch/x86/mm/hap/nested_hap.c | 2 +-
xen/arch/x86/mm/mem_access.c | 133 --------
xen/arch/x86/mm/mem_event.c | 705 --------------------------------------
xen/arch/x86/mm/mem_paging.c | 2 +-
xen/arch/x86/mm/mem_sharing.c | 2 +-
xen/arch/x86/mm/p2m-pod.c | 2 +-
xen/arch/x86/mm/p2m-pt.c | 2 +-
xen/arch/x86/mm/p2m.c | 2 +-
xen/arch/x86/x86_64/compat/mm.c | 4 +-
xen/arch/x86/x86_64/mm.c | 4 +-
xen/common/Makefile | 2 +
xen/common/domain.c | 1 +
xen/common/mem_access.c | 137 ++++++++
xen/common/mem_event.c | 707 +++++++++++++++++++++++++++++++++++++++
xen/common/memory.c | 62 ++++
xen/include/asm-arm/mm.h | 1 -
xen/include/asm-x86/hvm/hvm.h | 6 -
xen/include/asm-x86/mem_access.h | 39 ---
xen/include/asm-x86/mem_event.h | 82 -----
xen/include/asm-x86/mm.h | 2 -
xen/include/xen/mem_access.h | 58 ++++
xen/include/xen/mem_event.h | 141 ++++++++
xen/include/xen/mm.h | 6 +
xen/include/xen/sched.h | 1 -
29 files changed, 1134 insertions(+), 1042 deletions(-)
delete mode 100644 xen/arch/x86/mm/mem_access.c
delete mode 100644 xen/arch/x86/mm/mem_event.c
create mode 100644 xen/common/mem_access.c
create mode 100644 xen/common/mem_event.c
delete mode 100644 xen/include/asm-x86/mem_access.h
delete mode 100644 xen/include/asm-x86/mem_event.h
create mode 100644 xen/include/xen/mem_access.h
create mode 100644 xen/include/xen/mem_event.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 266e47b..f659180 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -337,6 +337,12 @@ F: xen/arch/x86/mm/mem_sharing.c
F: xen/arch/x86/mm/mem_paging.c
F: tools/memshr
+MEMORY EVENT AND ACCESS
+M: Tim Deegan <tim@xen.org>
+S: Supported
+F: xen/common/mem_event.c
+F: xen/common/mem_access.c
+
XENTRACE
M: George Dunlap <george.dunlap@eu.citrix.com>
S: Supported
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index d1517c4..3aeb79d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -30,7 +30,7 @@
#include <xen/hypercall.h> /* for arch_do_domctl */
#include <xsm/xsm.h>
#include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <asm/mem_sharing.h>
#include <asm/xstate.h>
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d40c48e..14ce761 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -63,8 +63,8 @@
#include <public/hvm/ioreq.h>
#include <public/version.h>
#include <public/memory.h>
-#include <asm/mem_event.h>
-#include <asm/mem_access.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
#include <public/mem_event.h>
#include <xen/rangeset.h>
#include <public/arch-x86/cpuid.h>
@@ -486,19 +486,6 @@ static void hvm_free_ioreq_gmfn(struct domain *d, unsigned long gmfn)
clear_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
}
-void destroy_ring_for_helper(
- void **_va, struct page_info *page)
-{
- void *va = *_va;
-
- if ( va != NULL )
- {
- unmap_domain_page_global(va);
- put_page_and_type(page);
- *_va = NULL;
- }
-}
-
static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
{
struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -506,50 +493,6 @@ static void hvm_unmap_ioreq_page(struct hvm_ioreq_server *s, bool_t buf)
destroy_ring_for_helper(&iorp->va, iorp->page);
}
-int prepare_ring_for_helper(
- struct domain *d, unsigned long gmfn, struct page_info **_page,
- void **_va)
-{
- struct page_info *page;
- p2m_type_t p2mt;
- void *va;
-
- page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
- if ( p2m_is_paging(p2mt) )
- {
- if ( page )
- put_page(page);
- p2m_mem_paging_populate(d, gmfn);
- return -ENOENT;
- }
- if ( p2m_is_shared(p2mt) )
- {
- if ( page )
- put_page(page);
- return -ENOENT;
- }
- if ( !page )
- return -EINVAL;
-
- if ( !get_page_type(page, PGT_writable_page) )
- {
- put_page(page);
- return -EINVAL;
- }
-
- va = __map_domain_page_global(page);
- if ( va == NULL )
- {
- put_page_and_type(page);
- return -ENOMEM;
- }
-
- *_va = va;
- *_page = page;
-
- return 0;
-}
-
static int hvm_map_ioreq_page(
struct hvm_ioreq_server *s, bool_t buf, unsigned long gmfn)
{
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 73dcdf4..ed4b1f8 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -6,10 +6,8 @@ obj-y += p2m.o p2m-pt.o p2m-ept.o p2m-pod.o
obj-y += guest_walk_2.o
obj-y += guest_walk_3.o
obj-$(x86_64) += guest_walk_4.o
-obj-$(x86_64) += mem_event.o
obj-$(x86_64) += mem_paging.o
obj-$(x86_64) += mem_sharing.o
-obj-$(x86_64) += mem_access.o
guest_walk_%.o: guest_walk.c Makefile
$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
diff --git a/xen/arch/x86/mm/hap/nested_ept.c b/xen/arch/x86/mm/hap/nested_ept.c
index 0d044bc..704bb66 100644
--- a/xen/arch/x86/mm/hap/nested_ept.c
+++ b/xen/arch/x86/mm/hap/nested_ept.c
@@ -21,7 +21,7 @@
#include <asm/page.h>
#include <asm/paging.h>
#include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <asm/mem_sharing.h>
#include <xen/event.h>
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 137a87c..f6becd4 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -23,7 +23,7 @@
#include <asm/page.h>
#include <asm/paging.h>
#include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <asm/mem_sharing.h>
#include <xen/event.h>
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
deleted file mode 100644
index e8465a5..0000000
--- a/xen/arch/x86/mm/mem_access.c
+++ /dev/null
@@ -1,133 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_access.c
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-
-#include <xen/sched.h>
-#include <xen/guest_access.h>
-#include <xen/hypercall.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <xsm/xsm.h>
-
-
-int mem_access_memop(unsigned long cmd,
- XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
-{
- long rc;
- xen_mem_access_op_t mao;
- struct domain *d;
-
- if ( copy_from_guest(&mao, arg, 1) )
- return -EFAULT;
-
- rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
- if ( rc )
- return rc;
-
- rc = -EINVAL;
- if ( !is_hvm_domain(d) )
- goto out;
-
- rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
- if ( rc )
- goto out;
-
- rc = -ENODEV;
- if ( unlikely(!d->mem_event->access.ring_page) )
- goto out;
-
- switch ( mao.op )
- {
- case XENMEM_access_op_resume:
- p2m_mem_access_resume(d);
- rc = 0;
- break;
-
- case XENMEM_access_op_set_access:
- {
- unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
-
- rc = -EINVAL;
- if ( (mao.pfn != ~0ull) &&
- (mao.nr < start_iter ||
- ((mao.pfn + mao.nr - 1) < mao.pfn) ||
- ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
- break;
-
- rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
- MEMOP_CMD_MASK, mao.access);
- if ( rc > 0 )
- {
- ASSERT(!(rc & MEMOP_CMD_MASK));
- rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
- XENMEM_access_op | rc, arg);
- }
- break;
- }
-
- case XENMEM_access_op_get_access:
- {
- xenmem_access_t access;
-
- rc = -EINVAL;
- if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
- break;
-
- rc = p2m_get_mem_access(d, mao.pfn, &access);
- if ( rc != 0 )
- break;
-
- mao.access = access;
- rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
-
- break;
- }
-
- default:
- rc = -ENOSYS;
- break;
- }
-
- out:
- rcu_unlock_domain(d);
- return rc;
-}
-
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
-{
- int rc = mem_event_claim_slot(d, &d->mem_event->access);
- if ( rc < 0 )
- return rc;
-
- mem_event_put_request(d, &d->mem_event->access, req);
-
- return 0;
-}
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
deleted file mode 100644
index ba7e71e..0000000
--- a/xen/arch/x86/mm/mem_event.c
+++ /dev/null
@@ -1,705 +0,0 @@
-/******************************************************************************
- * arch/x86/mm/mem_event.c
- *
- * Memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-
-#include <asm/domain.h>
-#include <xen/event.h>
-#include <xen/wait.h>
-#include <asm/p2m.h>
-#include <asm/mem_event.h>
-#include <asm/mem_paging.h>
-#include <asm/mem_access.h>
-#include <asm/mem_sharing.h>
-#include <xsm/xsm.h>
-
-/* for public/io/ring.h macros */
-#define xen_mb() mb()
-#define xen_rmb() rmb()
-#define xen_wmb() wmb()
-
-#define mem_event_ring_lock_init(_med) spin_lock_init(&(_med)->ring_lock)
-#define mem_event_ring_lock(_med) spin_lock(&(_med)->ring_lock)
-#define mem_event_ring_unlock(_med) spin_unlock(&(_med)->ring_lock)
-
-static int mem_event_enable(
- struct domain *d,
- xen_domctl_mem_event_op_t *mec,
- struct mem_event_domain *med,
- int pause_flag,
- int param,
- xen_event_channel_notification_t notification_fn)
-{
- int rc;
- unsigned long ring_gfn = d->arch.hvm_domain.params[param];
-
- /* Only one helper at a time. If the helper crashed,
- * the ring is in an undefined state and so is the guest.
- */
- if ( med->ring_page )
- return -EBUSY;
-
- /* The parameter defaults to zero, and it should be
- * set to something */
- if ( ring_gfn == 0 )
- return -ENOSYS;
-
- mem_event_ring_lock_init(med);
- mem_event_ring_lock(med);
-
- rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
- &med->ring_page);
- if ( rc < 0 )
- goto err;
-
- /* Set the number of currently blocked vCPUs to 0. */
- med->blocked = 0;
-
- /* Allocate event channel */
- rc = alloc_unbound_xen_event_channel(d->vcpu[0],
- current->domain->domain_id,
- notification_fn);
- if ( rc < 0 )
- goto err;
-
- med->xen_port = mec->port = rc;
-
- /* Prepare ring buffer */
- FRONT_RING_INIT(&med->front_ring,
- (mem_event_sring_t *)med->ring_page,
- PAGE_SIZE);
-
- /* Save the pause flag for this particular ring. */
- med->pause_flag = pause_flag;
-
- /* Initialize the last-chance wait queue. */
- init_waitqueue_head(&med->wq);
-
- mem_event_ring_unlock(med);
- return 0;
-
- err:
- destroy_ring_for_helper(&med->ring_page,
- med->ring_pg_struct);
- mem_event_ring_unlock(med);
-
- return rc;
-}
-
-static unsigned int mem_event_ring_available(struct mem_event_domain *med)
-{
- int avail_req = RING_FREE_REQUESTS(&med->front_ring);
- avail_req -= med->target_producers;
- avail_req -= med->foreign_producers;
-
- BUG_ON(avail_req < 0);
-
- return avail_req;
-}
-
-/*
- * mem_event_wake_blocked() will wakeup vcpus waiting for room in the
- * ring. These vCPUs were paused on their way out after placing an event,
- * but need to be resumed where the ring is capable of processing at least
- * one event from them.
- */
-static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
-{
- struct vcpu *v;
- int online = d->max_vcpus;
- unsigned int avail_req = mem_event_ring_available(med);
-
- if ( avail_req == 0 || med->blocked == 0 )
- return;
-
- /*
- * We ensure that we only have vCPUs online if there are enough free slots
- * for their memory events to be processed. This will ensure that no
- * memory events are lost (due to the fact that certain types of events
- * cannot be replayed, we need to ensure that there is space in the ring
- * for when they are hit).
- * See comment below in mem_event_put_request().
- */
- for_each_vcpu ( d, v )
- if ( test_bit(med->pause_flag, &v->pause_flags) )
- online--;
-
- ASSERT(online == (d->max_vcpus - med->blocked));
-
- /* We remember which vcpu last woke up to avoid scanning always linearly
- * from zero and starving higher-numbered vcpus under high load */
- if ( d->vcpu )
- {
- int i, j, k;
-
- for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
- {
- k = i % d->max_vcpus;
- v = d->vcpu[k];
- if ( !v )
- continue;
-
- if ( !(med->blocked) || online >= avail_req )
- break;
-
- if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
- {
- vcpu_unpause(v);
- online++;
- med->blocked--;
- med->last_vcpu_wake_up = k;
- }
- }
- }
-}
-
-/*
- * In the event that a vCPU attempted to place an event in the ring and
- * was unable to do so, it is queued on a wait queue. These are woken as
- * needed, and take precedence over the blocked vCPUs.
- */
-static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
-{
- unsigned int avail_req = mem_event_ring_available(med);
-
- if ( avail_req > 0 )
- wake_up_nr(&med->wq, avail_req);
-}
-
-/*
- * mem_event_wake() will wakeup all vcpus waiting for the ring to
- * become available. If we have queued vCPUs, they get top priority. We
- * are guaranteed that they will go through code paths that will eventually
- * call mem_event_wake() again, ensuring that any blocked vCPUs will get
- * unpaused once all the queued vCPUs have made it through.
- */
-void mem_event_wake(struct domain *d, struct mem_event_domain *med)
-{
- if (!list_empty(&med->wq.list))
- mem_event_wake_queued(d, med);
- else
- mem_event_wake_blocked(d, med);
-}
-
-static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
-{
- if ( med->ring_page )
- {
- struct vcpu *v;
-
- mem_event_ring_lock(med);
-
- if ( !list_empty(&med->wq.list) )
- {
- mem_event_ring_unlock(med);
- return -EBUSY;
- }
-
- /* Free domU's event channel and leave the other one unbound */
- free_xen_event_channel(d->vcpu[0], med->xen_port);
-
- /* Unblock all vCPUs */
- for_each_vcpu ( d, v )
- {
- if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
- {
- vcpu_unpause(v);
- med->blocked--;
- }
- }
-
- destroy_ring_for_helper(&med->ring_page,
- med->ring_pg_struct);
- mem_event_ring_unlock(med);
- }
-
- return 0;
-}
-
-static inline void mem_event_release_slot(struct domain *d,
- struct mem_event_domain *med)
-{
- /* Update the accounting */
- if ( current->domain == d )
- med->target_producers--;
- else
- med->foreign_producers--;
-
- /* Kick any waiters */
- mem_event_wake(d, med);
-}
-
-/*
- * mem_event_mark_and_pause() tags vcpu and put it to sleep.
- * The vcpu will resume execution in mem_event_wake_waiters().
- */
-void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
-{
- if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
- {
- vcpu_pause_nosync(v);
- med->blocked++;
- }
-}
-
-/*
- * This must be preceded by a call to claim_slot(), and is guaranteed to
- * succeed. As a side-effect however, the vCPU may be paused if the ring is
- * overly full and its continued execution would cause stalling and excessive
- * waiting. The vCPU will be automatically unpaused when the ring clears.
- */
-void mem_event_put_request(struct domain *d,
- struct mem_event_domain *med,
- mem_event_request_t *req)
-{
- mem_event_front_ring_t *front_ring;
- int free_req;
- unsigned int avail_req;
- RING_IDX req_prod;
-
- if ( current->domain != d )
- {
- req->flags |= MEM_EVENT_FLAG_FOREIGN;
- ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
- }
-
- mem_event_ring_lock(med);
-
- /* Due to the reservations, this step must succeed. */
- front_ring = &med->front_ring;
- free_req = RING_FREE_REQUESTS(front_ring);
- ASSERT(free_req > 0);
-
- /* Copy request */
- req_prod = front_ring->req_prod_pvt;
- memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
- req_prod++;
-
- /* Update ring */
- front_ring->req_prod_pvt = req_prod;
- RING_PUSH_REQUESTS(front_ring);
-
- /* We've actually *used* our reservation, so release the slot. */
- mem_event_release_slot(d, med);
-
- /* Give this vCPU a black eye if necessary, on the way out.
- * See the comments above wake_blocked() for more information
- * on how this mechanism works to avoid waiting. */
- avail_req = mem_event_ring_available(med);
- if( current->domain == d && avail_req < d->max_vcpus )
- mem_event_mark_and_pause(current, med);
-
- mem_event_ring_unlock(med);
-
- notify_via_xen_event_channel(d, med->xen_port);
-}
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
-{
- mem_event_front_ring_t *front_ring;
- RING_IDX rsp_cons;
-
- mem_event_ring_lock(med);
-
- front_ring = &med->front_ring;
- rsp_cons = front_ring->rsp_cons;
-
- if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
- {
- mem_event_ring_unlock(med);
- return 0;
- }
-
- /* Copy response */
- memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
- rsp_cons++;
-
- /* Update ring */
- front_ring->rsp_cons = rsp_cons;
- front_ring->sring->rsp_event = rsp_cons + 1;
-
- /* Kick any waiters -- since we've just consumed an event,
- * there may be additional space available in the ring. */
- mem_event_wake(d, med);
-
- mem_event_ring_unlock(med);
-
- return 1;
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
-{
- mem_event_ring_lock(med);
- mem_event_release_slot(d, med);
- mem_event_ring_unlock(med);
-}
-
-static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
-{
- unsigned int avail_req;
-
- if ( !med->ring_page )
- return -ENOSYS;
-
- mem_event_ring_lock(med);
-
- avail_req = mem_event_ring_available(med);
- if ( avail_req == 0 )
- {
- mem_event_ring_unlock(med);
- return -EBUSY;
- }
-
- if ( !foreign )
- med->target_producers++;
- else
- med->foreign_producers++;
-
- mem_event_ring_unlock(med);
-
- return 0;
-}
-
-/* Simple try_grab wrapper for use in the wait_event() macro. */
-static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
-{
- *rc = mem_event_grab_slot(med, 0);
- return *rc;
-}
-
-/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
-static int mem_event_wait_slot(struct mem_event_domain *med)
-{
- int rc = -EBUSY;
- wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
- return rc;
-}
-
-bool_t mem_event_check_ring(struct mem_event_domain *med)
-{
- return (med->ring_page != NULL);
-}
-
-/*
- * Determines whether or not the current vCPU belongs to the target domain,
- * and calls the appropriate wait function. If it is a guest vCPU, then we
- * use mem_event_wait_slot() to reserve a slot. As long as there is a ring,
- * this function will always return 0 for a guest. For a non-guest, we check
- * for space and return -EBUSY if the ring is not available.
- *
- * Return codes: -ENOSYS: the ring is not yet configured
- * -EBUSY: the ring is busy
- * 0: a spot has been reserved
- *
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
- bool_t allow_sleep)
-{
- if ( (current->domain == d) && allow_sleep )
- return mem_event_wait_slot(med);
- else
- return mem_event_grab_slot(med, (current->domain != d));
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_paging_notification(struct vcpu *v, unsigned int port)
-{
- if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
- p2m_mem_paging_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_access_notification(struct vcpu *v, unsigned int port)
-{
- if ( likely(v->domain->mem_event->access.ring_page != NULL) )
- p2m_mem_access_resume(v->domain);
-}
-
-/* Registered with Xen-bound event channel for incoming notifications. */
-static void mem_sharing_notification(struct vcpu *v, unsigned int port)
-{
- if ( likely(v->domain->mem_event->share.ring_page != NULL) )
- mem_sharing_sharing_resume(v->domain);
-}
-
-int do_mem_event_op(int op, uint32_t domain, void *arg)
-{
- int ret;
- struct domain *d;
-
- ret = rcu_lock_live_remote_domain_by_id(domain, &d);
- if ( ret )
- return ret;
-
- ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
- if ( ret )
- goto out;
-
- switch (op)
- {
- case XENMEM_paging_op:
- ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
- break;
- case XENMEM_sharing_op:
- ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
- break;
- default:
- ret = -ENOSYS;
- }
-
- out:
- rcu_unlock_domain(d);
- return ret;
-}
-
-/* Clean up on domain destruction */
-void mem_event_cleanup(struct domain *d)
-{
- if ( d->mem_event->paging.ring_page ) {
- /* Destroying the wait queue head means waking up all
- * queued vcpus. This will drain the list, allowing
- * the disable routine to complete. It will also drop
- * all domain refs the wait-queued vcpus are holding.
- * Finally, because this code path involves previously
- * pausing the domain (domain_kill), unpausing the
- * vcpus causes no harm. */
- destroy_waitqueue_head(&d->mem_event->paging.wq);
- (void)mem_event_disable(d, &d->mem_event->paging);
- }
- if ( d->mem_event->access.ring_page ) {
- destroy_waitqueue_head(&d->mem_event->access.wq);
- (void)mem_event_disable(d, &d->mem_event->access);
- }
- if ( d->mem_event->share.ring_page ) {
- destroy_waitqueue_head(&d->mem_event->share.wq);
- (void)mem_event_disable(d, &d->mem_event->share);
- }
-}
-
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
- XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
- int rc;
-
- rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
- if ( rc )
- return rc;
-
- if ( unlikely(d == current->domain) )
- {
- gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
- return -EINVAL;
- }
-
- if ( unlikely(d->is_dying) )
- {
- gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
- d->domain_id);
- return 0;
- }
-
- if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
- {
- gdprintk(XENLOG_INFO,
- "Memory event op on a domain (%u) with no vcpus\n",
- d->domain_id);
- return -EINVAL;
- }
-
- rc = -ENOSYS;
-
- switch ( mec->mode )
- {
- case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
- {
- struct mem_event_domain *med = &d->mem_event->paging;
- rc = -EINVAL;
-
- switch( mec->op )
- {
- case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
- {
- struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
- rc = -EOPNOTSUPP;
- /* pvh fixme: p2m_is_foreign types need addressing */
- if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
- break;
-
- rc = -ENODEV;
- /* Only HAP is supported */
- if ( !hap_enabled(d) )
- break;
-
- /* No paging if iommu is used */
- rc = -EMLINK;
- if ( unlikely(need_iommu(d)) )
- break;
-
- rc = -EXDEV;
- /* Disallow paging in a PoD guest */
- if ( p2m->pod.entry_count )
- break;
-
- rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
- HVM_PARAM_PAGING_RING_PFN,
- mem_paging_notification);
- }
- break;
-
- case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
- {
- if ( med->ring_page )
- rc = mem_event_disable(d, med);
- }
- break;
-
- default:
- rc = -ENOSYS;
- break;
- }
- }
- break;
-
- case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
- {
- struct mem_event_domain *med = &d->mem_event->access;
- rc = -EINVAL;
-
- switch( mec->op )
- {
- case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
- {
- rc = -ENODEV;
- /* Only HAP is supported */
- if ( !hap_enabled(d) )
- break;
-
- /* Currently only EPT is supported */
- if ( !cpu_has_vmx )
- break;
-
- rc = mem_event_enable(d, mec, med, _VPF_mem_access,
- HVM_PARAM_ACCESS_RING_PFN,
- mem_access_notification);
- }
- break;
-
- case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
- {
- if ( med->ring_page )
- rc = mem_event_disable(d, med);
- }
- break;
-
- default:
- rc = -ENOSYS;
- break;
- }
- }
- break;
-
- case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
- {
- struct mem_event_domain *med = &d->mem_event->share;
- rc = -EINVAL;
-
- switch( mec->op )
- {
- case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
- {
- rc = -EOPNOTSUPP;
- /* pvh fixme: p2m_is_foreign types need addressing */
- if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
- break;
-
- rc = -ENODEV;
- /* Only HAP is supported */
- if ( !hap_enabled(d) )
- break;
-
- rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
- HVM_PARAM_SHARING_RING_PFN,
- mem_sharing_notification);
- }
- break;
-
- case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
- {
- if ( med->ring_page )
- rc = mem_event_disable(d, med);
- }
- break;
-
- default:
- rc = -ENOSYS;
- break;
- }
- }
- break;
-
- default:
- rc = -ENOSYS;
- }
-
- return rc;
-}
-
-void mem_event_vcpu_pause(struct vcpu *v)
-{
- ASSERT(v == current);
-
- atomic_inc(&v->mem_event_pause_count);
- vcpu_pause_nosync(v);
-}
-
-void mem_event_vcpu_unpause(struct vcpu *v)
-{
- int old, new, prev = v->mem_event_pause_count.counter;
-
- /* All unpause requests as a result of toolstack responses. Prevent
- * underflow of the vcpu pause count. */
- do
- {
- old = prev;
- new = old - 1;
-
- if ( new < 0 )
- {
- printk(XENLOG_G_WARNING
- "%pv mem_event: Too many unpause attempts\n", v);
- return;
- }
-
- prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
- } while ( prev != old );
-
- vcpu_unpause(v);
-}
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 235776d..65f6a3d 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -22,7 +22,7 @@
#include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
int mem_paging_memop(struct domain *d, xen_mem_event_op_t *mec)
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 79188b9..fa845fd 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -30,7 +30,7 @@
#include <asm/page.h>
#include <asm/string.h>
#include <asm/p2m.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <asm/atomic.h>
#include <xen/rcupdate.h>
#include <asm/event.h>
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bd4c7c8..881259a 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -26,7 +26,7 @@
#include <asm/p2m.h>
#include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
#include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <asm/mem_sharing.h>
#include <xen/event.h>
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 085ab6f..46231cf 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -30,7 +30,7 @@
#include <asm/paging.h>
#include <asm/p2m.h>
#include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <asm/mem_sharing.h>
#include <xen/event.h>
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index bca9f0f..5190cde 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -30,7 +30,7 @@
#include <asm/p2m.h>
#include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
#include <xen/iommu.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <asm/mem_sharing.h>
#include <xen/event.h>
diff --git a/xen/arch/x86/x86_64/compat/mm.c b/xen/arch/x86/x86_64/compat/mm.c
index 69c6195..203c6b4 100644
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -2,9 +2,9 @@
#include <xen/multicall.h>
#include <compat/memory.h>
#include <compat/xen.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
+#include <xen/mem_access.h>
int compat_set_gdt(XEN_GUEST_HANDLE_PARAM(uint) frame_list, unsigned int entries)
{
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 4937f9a..1f9702d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -35,9 +35,9 @@
#include <asm/msr.h>
#include <asm/setup.h>
#include <asm/numa.h>
-#include <asm/mem_event.h>
+#include <xen/mem_event.h>
#include <asm/mem_sharing.h>
-#include <asm/mem_access.h>
+#include <xen/mem_access.h>
#include <public/memory.h>
/* Parameters for PFN/MADDR compression. */
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 3683ae3..a1b6128 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -51,6 +51,8 @@ obj-y += tmem_xen.o
obj-y += radix-tree.o
obj-y += rbtree.o
obj-y += lzo.o
+obj-y += mem_access.o
+obj-y += mem_event.o
obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1952070..6f51311 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -15,6 +15,7 @@
#include <xen/domain.h>
#include <xen/mm.h>
#include <xen/event.h>
+#include <xen/mem_event.h>
#include <xen/time.h>
#include <xen/console.h>
#include <xen/softirq.h>
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
new file mode 100644
index 0000000..84acdf9
--- /dev/null
+++ b/xen/common/mem_access.c
@@ -0,0 +1,137 @@
+/******************************************************************************
+ * mem_access.c
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+
+#include <xen/sched.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <asm/p2m.h>
+#include <public/memory.h>
+#include <xen/mem_event.h>
+#include <xsm/xsm.h>
+
+#ifdef CONFIG_X86
+
+int mem_access_memop(unsigned long cmd,
+ XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+ long rc;
+ xen_mem_access_op_t mao;
+ struct domain *d;
+
+ if ( copy_from_guest(&mao, arg, 1) )
+ return -EFAULT;
+
+ rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
+ if ( rc )
+ return rc;
+
+ rc = -EINVAL;
+ if ( !is_hvm_domain(d) )
+ goto out;
+
+ rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
+ if ( rc )
+ goto out;
+
+ rc = -ENODEV;
+ if ( unlikely(!d->mem_event->access.ring_page) )
+ goto out;
+
+ switch ( mao.op )
+ {
+ case XENMEM_access_op_resume:
+ p2m_mem_access_resume(d);
+ rc = 0;
+ break;
+
+ case XENMEM_access_op_set_access:
+ {
+ unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
+
+ rc = -EINVAL;
+ if ( (mao.pfn != ~0ull) &&
+ (mao.nr < start_iter ||
+ ((mao.pfn + mao.nr - 1) < mao.pfn) ||
+ ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
+ break;
+
+ rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
+ MEMOP_CMD_MASK, mao.access);
+ if ( rc > 0 )
+ {
+ ASSERT(!(rc & MEMOP_CMD_MASK));
+ rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
+ XENMEM_access_op | rc, arg);
+ }
+ break;
+ }
+
+ case XENMEM_access_op_get_access:
+ {
+ xenmem_access_t access;
+
+ rc = -EINVAL;
+ if ( (mao.pfn > domain_get_maximum_gpfn(d)) && mao.pfn != ~0ull )
+ break;
+
+ rc = p2m_get_mem_access(d, mao.pfn, &access);
+ if ( rc != 0 )
+ break;
+
+ mao.access = access;
+ rc = __copy_field_to_guest(arg, &mao, access) ? -EFAULT : 0;
+
+ break;
+ }
+
+ default:
+ rc = -ENOSYS;
+ break;
+ }
+
+ out:
+ rcu_unlock_domain(d);
+ return rc;
+}
+
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+ int rc = mem_event_claim_slot(d, &d->mem_event->access);
+ if ( rc < 0 )
+ return rc;
+
+ mem_event_put_request(d, &d->mem_event->access, req);
+
+ return 0;
+}
+
+#endif /* CONFIG_X86 */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
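The range validation in XENMEM_access_op_set_access above can be modelled in isolation. This is a standalone sketch, not the Xen code: `range_ok()` and `max_gpfn` are illustrative names standing in for the inline checks and `domain_get_maximum_gpfn()`, with `~0ull` as the whole-domain wildcard, as in the hunk.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative recreation of the bounds check in XENMEM_access_op_set_access:
 * reject ranges that resume past their end, wrap around, or run past the
 * domain's highest gpfn.  max_gpfn stands in for domain_get_maximum_gpfn(). */
static int range_ok(uint64_t pfn, uint64_t nr, uint64_t start_iter,
                    uint64_t max_gpfn)
{
    if ( pfn == ~0ull )           /* whole-domain request is always valid */
        return 1;
    if ( nr < start_iter )        /* continuation beyond the range */
        return 0;
    if ( pfn + nr - 1 < pfn )     /* arithmetic wrap-around */
        return 0;
    if ( pfn + nr - 1 > max_gpfn )
        return 0;
    return 1;
}
```

The wrap-around test matters because `pfn` and `nr` are guest-supplied; without it a huge `nr` could make `pfn + nr - 1` overflow to a small value and slip past the `max_gpfn` comparison.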
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
new file mode 100644
index 0000000..604f94f
--- /dev/null
+++ b/xen/common/mem_event.c
@@ -0,0 +1,707 @@
+/******************************************************************************
+ * mem_event.c
+ *
+ * Memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifdef CONFIG_X86
+
+#include <asm/domain.h>
+#include <xen/event.h>
+#include <xen/wait.h>
+#include <asm/p2m.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <asm/mem_paging.h>
+#include <asm/mem_sharing.h>
+#include <xsm/xsm.h>
+
+/* for public/io/ring.h macros */
+#define xen_mb() mb()
+#define xen_rmb() rmb()
+#define xen_wmb() wmb()
+
+#define mem_event_ring_lock_init(_med) spin_lock_init(&(_med)->ring_lock)
+#define mem_event_ring_lock(_med) spin_lock(&(_med)->ring_lock)
+#define mem_event_ring_unlock(_med) spin_unlock(&(_med)->ring_lock)
+
+static int mem_event_enable(
+ struct domain *d,
+ xen_domctl_mem_event_op_t *mec,
+ struct mem_event_domain *med,
+ int pause_flag,
+ int param,
+ xen_event_channel_notification_t notification_fn)
+{
+ int rc;
+ unsigned long ring_gfn = d->arch.hvm_domain.params[param];
+
+ /* Only one helper at a time. If the helper crashed,
+ * the ring is in an undefined state and so is the guest.
+ */
+ if ( med->ring_page )
+ return -EBUSY;
+
+ /* The parameter defaults to zero, and it must be
+ * set to a valid ring gfn before enabling */
+ if ( ring_gfn == 0 )
+ return -ENOSYS;
+
+ mem_event_ring_lock_init(med);
+ mem_event_ring_lock(med);
+
+ rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
+ &med->ring_page);
+ if ( rc < 0 )
+ goto err;
+
+ /* Set the number of currently blocked vCPUs to 0. */
+ med->blocked = 0;
+
+ /* Allocate event channel */
+ rc = alloc_unbound_xen_event_channel(d->vcpu[0],
+ current->domain->domain_id,
+ notification_fn);
+ if ( rc < 0 )
+ goto err;
+
+ med->xen_port = mec->port = rc;
+
+ /* Prepare ring buffer */
+ FRONT_RING_INIT(&med->front_ring,
+ (mem_event_sring_t *)med->ring_page,
+ PAGE_SIZE);
+
+ /* Save the pause flag for this particular ring. */
+ med->pause_flag = pause_flag;
+
+ /* Initialize the last-chance wait queue. */
+ init_waitqueue_head(&med->wq);
+
+ mem_event_ring_unlock(med);
+ return 0;
+
+ err:
+ destroy_ring_for_helper(&med->ring_page,
+ med->ring_pg_struct);
+ mem_event_ring_unlock(med);
+
+ return rc;
+}
+
+static unsigned int mem_event_ring_available(struct mem_event_domain *med)
+{
+ int avail_req = RING_FREE_REQUESTS(&med->front_ring);
+ avail_req -= med->target_producers;
+ avail_req -= med->foreign_producers;
+
+ BUG_ON(avail_req < 0);
+
+ return avail_req;
+}
+
+/*
+ * mem_event_wake_blocked() will wake up vcpus waiting for room in the
+ * ring. These vCPUs were paused on their way out after placing an event,
+ * but need to be resumed once the ring is capable of processing at least
+ * one event from them.
+ */
+static void mem_event_wake_blocked(struct domain *d, struct mem_event_domain *med)
+{
+ struct vcpu *v;
+ int online = d->max_vcpus;
+ unsigned int avail_req = mem_event_ring_available(med);
+
+ if ( avail_req == 0 || med->blocked == 0 )
+ return;
+
+ /*
+ * We ensure that we only have vCPUs online if there are enough free slots
+ * for their memory events to be processed. This will ensure that no
+ * memory events are lost (due to the fact that certain types of events
+ * cannot be replayed, we need to ensure that there is space in the ring
+ * for when they are hit).
+ * See comment below in mem_event_put_request().
+ */
+ for_each_vcpu ( d, v )
+ if ( test_bit(med->pause_flag, &v->pause_flags) )
+ online--;
+
+ ASSERT(online == (d->max_vcpus - med->blocked));
+
+ /* We remember which vcpu last woke up, to avoid always scanning linearly
+ * from zero and starving higher-numbered vcpus under high load. */
+ if ( d->vcpu )
+ {
+ int i, j, k;
+
+ for (i = med->last_vcpu_wake_up + 1, j = 0; j < d->max_vcpus; i++, j++)
+ {
+ k = i % d->max_vcpus;
+ v = d->vcpu[k];
+ if ( !v )
+ continue;
+
+ if ( !(med->blocked) || online >= avail_req )
+ break;
+
+ if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+ {
+ vcpu_unpause(v);
+ online++;
+ med->blocked--;
+ med->last_vcpu_wake_up = k;
+ }
+ }
+ }
+}
+
+/*
+ * In the event that a vCPU attempted to place an event in the ring and
+ * was unable to do so, it is queued on a wait queue. These are woken as
+ * needed, and take precedence over the blocked vCPUs.
+ */
+static void mem_event_wake_queued(struct domain *d, struct mem_event_domain *med)
+{
+ unsigned int avail_req = mem_event_ring_available(med);
+
+ if ( avail_req > 0 )
+ wake_up_nr(&med->wq, avail_req);
+}
+
+/*
+ * mem_event_wake() will wake up all vcpus waiting for the ring to
+ * become available. If we have queued vCPUs, they get top priority. We
+ * are guaranteed that they will go through code paths that will eventually
+ * call mem_event_wake() again, ensuring that any blocked vCPUs will get
+ * unpaused once all the queued vCPUs have made it through.
+ */
+void mem_event_wake(struct domain *d, struct mem_event_domain *med)
+{
+ if ( !list_empty(&med->wq.list) )
+ mem_event_wake_queued(d, med);
+ else
+ mem_event_wake_blocked(d, med);
+}
+
+static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
+{
+ if ( med->ring_page )
+ {
+ struct vcpu *v;
+
+ mem_event_ring_lock(med);
+
+ if ( !list_empty(&med->wq.list) )
+ {
+ mem_event_ring_unlock(med);
+ return -EBUSY;
+ }
+
+ /* Free domU's event channel and leave the other one unbound */
+ free_xen_event_channel(d->vcpu[0], med->xen_port);
+
+ /* Unblock all vCPUs */
+ for_each_vcpu ( d, v )
+ {
+ if ( test_and_clear_bit(med->pause_flag, &v->pause_flags) )
+ {
+ vcpu_unpause(v);
+ med->blocked--;
+ }
+ }
+
+ destroy_ring_for_helper(&med->ring_page,
+ med->ring_pg_struct);
+ mem_event_ring_unlock(med);
+ }
+
+ return 0;
+}
+
+static inline void mem_event_release_slot(struct domain *d,
+ struct mem_event_domain *med)
+{
+ /* Update the accounting */
+ if ( current->domain == d )
+ med->target_producers--;
+ else
+ med->foreign_producers--;
+
+ /* Kick any waiters */
+ mem_event_wake(d, med);
+}
+
+/*
+ * mem_event_mark_and_pause() tags the vcpu and puts it to sleep.
+ * The vcpu will resume execution in mem_event_wake_blocked().
+ */
+void mem_event_mark_and_pause(struct vcpu *v, struct mem_event_domain *med)
+{
+ if ( !test_and_set_bit(med->pause_flag, &v->pause_flags) )
+ {
+ vcpu_pause_nosync(v);
+ med->blocked++;
+ }
+}
+
+/*
+ * This must be preceded by a call to claim_slot(), and is guaranteed to
+ * succeed. As a side-effect however, the vCPU may be paused if the ring is
+ * overly full and its continued execution would cause stalling and excessive
+ * waiting. The vCPU will be automatically unpaused when the ring clears.
+ */
+void mem_event_put_request(struct domain *d,
+ struct mem_event_domain *med,
+ mem_event_request_t *req)
+{
+ mem_event_front_ring_t *front_ring;
+ int free_req;
+ unsigned int avail_req;
+ RING_IDX req_prod;
+
+ if ( current->domain != d )
+ {
+ req->flags |= MEM_EVENT_FLAG_FOREIGN;
+ ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+ }
+
+ mem_event_ring_lock(med);
+
+ /* Due to the reservations, this step must succeed. */
+ front_ring = &med->front_ring;
+ free_req = RING_FREE_REQUESTS(front_ring);
+ ASSERT(free_req > 0);
+
+ /* Copy request */
+ req_prod = front_ring->req_prod_pvt;
+ memcpy(RING_GET_REQUEST(front_ring, req_prod), req, sizeof(*req));
+ req_prod++;
+
+ /* Update ring */
+ front_ring->req_prod_pvt = req_prod;
+ RING_PUSH_REQUESTS(front_ring);
+
+ /* We've actually *used* our reservation, so release the slot. */
+ mem_event_release_slot(d, med);
+
+ /* Give this vCPU a black eye if necessary, on the way out.
+ * See the comments above wake_blocked() for more information
+ * on how this mechanism works to avoid waiting. */
+ avail_req = mem_event_ring_available(med);
+ if ( current->domain == d && avail_req < d->max_vcpus )
+ mem_event_mark_and_pause(current, med);
+
+ mem_event_ring_unlock(med);
+
+ notify_via_xen_event_channel(d, med->xen_port);
+}
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med, mem_event_response_t *rsp)
+{
+ mem_event_front_ring_t *front_ring;
+ RING_IDX rsp_cons;
+
+ mem_event_ring_lock(med);
+
+ front_ring = &med->front_ring;
+ rsp_cons = front_ring->rsp_cons;
+
+ if ( !RING_HAS_UNCONSUMED_RESPONSES(front_ring) )
+ {
+ mem_event_ring_unlock(med);
+ return 0;
+ }
+
+ /* Copy response */
+ memcpy(rsp, RING_GET_RESPONSE(front_ring, rsp_cons), sizeof(*rsp));
+ rsp_cons++;
+
+ /* Update ring */
+ front_ring->rsp_cons = rsp_cons;
+ front_ring->sring->rsp_event = rsp_cons + 1;
+
+ /* Kick any waiters -- since we've just consumed an event,
+ * there may be additional space available in the ring. */
+ mem_event_wake(d, med);
+
+ mem_event_ring_unlock(med);
+
+ return 1;
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{
+ mem_event_ring_lock(med);
+ mem_event_release_slot(d, med);
+ mem_event_ring_unlock(med);
+}
+
+static int mem_event_grab_slot(struct mem_event_domain *med, int foreign)
+{
+ unsigned int avail_req;
+
+ if ( !med->ring_page )
+ return -ENOSYS;
+
+ mem_event_ring_lock(med);
+
+ avail_req = mem_event_ring_available(med);
+ if ( avail_req == 0 )
+ {
+ mem_event_ring_unlock(med);
+ return -EBUSY;
+ }
+
+ if ( !foreign )
+ med->target_producers++;
+ else
+ med->foreign_producers++;
+
+ mem_event_ring_unlock(med);
+
+ return 0;
+}
+
+/* Simple try_grab wrapper for use in the wait_event() macro. */
+static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
+{
+ *rc = mem_event_grab_slot(med, 0);
+ return *rc;
+}
+
+/* Call mem_event_grab_slot() until the ring doesn't exist, or is available. */
+static int mem_event_wait_slot(struct mem_event_domain *med)
+{
+ int rc = -EBUSY;
+ wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
+ return rc;
+}
+
+bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+ return (med->ring_page != NULL);
+}
+
+/*
+ * Determines whether or not the current vCPU belongs to the target domain,
+ * and calls the appropriate wait function. If it is a guest vCPU, then we
+ * use mem_event_wait_slot() to reserve a slot. As long as there is a ring,
+ * this function will always return 0 for a guest. For a non-guest, we check
+ * for space and return -EBUSY if the ring is not available.
+ *
+ * Return codes: -ENOSYS: the ring is not yet configured
+ * -EBUSY: the ring is busy
+ * 0: a spot has been reserved
+ *
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+ bool_t allow_sleep)
+{
+ if ( (current->domain == d) && allow_sleep )
+ return mem_event_wait_slot(med);
+ else
+ return mem_event_grab_slot(med, (current->domain != d));
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_paging_notification(struct vcpu *v, unsigned int port)
+{
+ if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
+ p2m_mem_paging_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_access_notification(struct vcpu *v, unsigned int port)
+{
+ if ( likely(v->domain->mem_event->access.ring_page != NULL) )
+ p2m_mem_access_resume(v->domain);
+}
+
+/* Registered with Xen-bound event channel for incoming notifications. */
+static void mem_sharing_notification(struct vcpu *v, unsigned int port)
+{
+ if ( likely(v->domain->mem_event->share.ring_page != NULL) )
+ mem_sharing_sharing_resume(v->domain);
+}
+
+int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+ int ret;
+ struct domain *d;
+
+ ret = rcu_lock_live_remote_domain_by_id(domain, &d);
+ if ( ret )
+ return ret;
+
+ ret = xsm_mem_event_op(XSM_DM_PRIV, d, op);
+ if ( ret )
+ goto out;
+
+ switch (op)
+ {
+ case XENMEM_paging_op:
+ ret = mem_paging_memop(d, (xen_mem_event_op_t *) arg);
+ break;
+ case XENMEM_sharing_op:
+ ret = mem_sharing_memop(d, (xen_mem_sharing_op_t *) arg);
+ break;
+ default:
+ ret = -ENOSYS;
+ }
+
+ out:
+ rcu_unlock_domain(d);
+ return ret;
+}
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d)
+{
+ if ( d->mem_event->paging.ring_page ) {
+ /* Destroying the wait queue head means waking up all
+ * queued vcpus. This will drain the list, allowing
+ * the disable routine to complete. It will also drop
+ * all domain refs the wait-queued vcpus are holding.
+ * Finally, because this code path involves previously
+ * pausing the domain (domain_kill), unpausing the
+ * vcpus causes no harm. */
+ destroy_waitqueue_head(&d->mem_event->paging.wq);
+ (void)mem_event_disable(d, &d->mem_event->paging);
+ }
+ if ( d->mem_event->access.ring_page ) {
+ destroy_waitqueue_head(&d->mem_event->access.wq);
+ (void)mem_event_disable(d, &d->mem_event->access);
+ }
+ if ( d->mem_event->share.ring_page ) {
+ destroy_waitqueue_head(&d->mem_event->share.wq);
+ (void)mem_event_disable(d, &d->mem_event->share);
+ }
+}
+
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+ XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+ int rc;
+
+ rc = xsm_mem_event_control(XSM_PRIV, d, mec->mode, mec->op);
+ if ( rc )
+ return rc;
+
+ if ( unlikely(d == current->domain) )
+ {
+ gdprintk(XENLOG_INFO, "Tried to do a memory event op on itself.\n");
+ return -EINVAL;
+ }
+
+ if ( unlikely(d->is_dying) )
+ {
+ gdprintk(XENLOG_INFO, "Ignoring memory event op on dying domain %u\n",
+ d->domain_id);
+ return 0;
+ }
+
+ if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
+ {
+ gdprintk(XENLOG_INFO,
+ "Memory event op on a domain (%u) with no vcpus\n",
+ d->domain_id);
+ return -EINVAL;
+ }
+
+ rc = -ENOSYS;
+
+ switch ( mec->mode )
+ {
+ case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
+ {
+ struct mem_event_domain *med = &d->mem_event->paging;
+ rc = -EINVAL;
+
+ switch( mec->op )
+ {
+ case XEN_DOMCTL_MEM_EVENT_OP_PAGING_ENABLE:
+ {
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+ rc = -EOPNOTSUPP;
+ /* pvh fixme: p2m_is_foreign types need addressing */
+ if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+ break;
+
+ rc = -ENODEV;
+ /* Only HAP is supported */
+ if ( !hap_enabled(d) )
+ break;
+
+ /* No paging if iommu is used */
+ rc = -EMLINK;
+ if ( unlikely(need_iommu(d)) )
+ break;
+
+ rc = -EXDEV;
+ /* Disallow paging in a PoD guest */
+ if ( p2m->pod.entry_count )
+ break;
+
+ rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
+ HVM_PARAM_PAGING_RING_PFN,
+ mem_paging_notification);
+ }
+ break;
+
+ case XEN_DOMCTL_MEM_EVENT_OP_PAGING_DISABLE:
+ {
+ if ( med->ring_page )
+ rc = mem_event_disable(d, med);
+ }
+ break;
+
+ default:
+ rc = -ENOSYS;
+ break;
+ }
+ }
+ break;
+
+ case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
+ {
+ struct mem_event_domain *med = &d->mem_event->access;
+ rc = -EINVAL;
+
+ switch( mec->op )
+ {
+ case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
+ {
+ rc = -ENODEV;
+ /* Only HAP is supported */
+ if ( !hap_enabled(d) )
+ break;
+
+ /* Currently only EPT is supported */
+ if ( !cpu_has_vmx )
+ break;
+
+ rc = mem_event_enable(d, mec, med, _VPF_mem_access,
+ HVM_PARAM_ACCESS_RING_PFN,
+ mem_access_notification);
+ }
+ break;
+
+ case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_DISABLE:
+ {
+ if ( med->ring_page )
+ rc = mem_event_disable(d, med);
+ }
+ break;
+
+ default:
+ rc = -ENOSYS;
+ break;
+ }
+ }
+ break;
+
+ case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
+ {
+ struct mem_event_domain *med = &d->mem_event->share;
+ rc = -EINVAL;
+
+ switch( mec->op )
+ {
+ case XEN_DOMCTL_MEM_EVENT_OP_SHARING_ENABLE:
+ {
+ rc = -EOPNOTSUPP;
+ /* pvh fixme: p2m_is_foreign types need addressing */
+ if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+ break;
+
+ rc = -ENODEV;
+ /* Only HAP is supported */
+ if ( !hap_enabled(d) )
+ break;
+
+ rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
+ HVM_PARAM_SHARING_RING_PFN,
+ mem_sharing_notification);
+ }
+ break;
+
+ case XEN_DOMCTL_MEM_EVENT_OP_SHARING_DISABLE:
+ {
+ if ( med->ring_page )
+ rc = mem_event_disable(d, med);
+ }
+ break;
+
+ default:
+ rc = -ENOSYS;
+ break;
+ }
+ }
+ break;
+
+ default:
+ rc = -ENOSYS;
+ }
+
+ return rc;
+}
+
+void mem_event_vcpu_pause(struct vcpu *v)
+{
+ ASSERT(v == current);
+
+ atomic_inc(&v->mem_event_pause_count);
+ vcpu_pause_nosync(v);
+}
+
+void mem_event_vcpu_unpause(struct vcpu *v)
+{
+ int old, new, prev = v->mem_event_pause_count.counter;
+
+ /* All unpause requests come as a result of toolstack responses.
+ * Prevent underflow of the vcpu pause count. */
+ do
+ {
+ old = prev;
+ new = old - 1;
+
+ if ( new < 0 )
+ {
+ printk(XENLOG_G_WARNING
+ "%pv mem_event: Too many unpause attempts\n", v);
+ return;
+ }
+
+ prev = cmpxchg(&v->mem_event_pause_count.counter, old, new);
+ } while ( prev != old );
+
+ vcpu_unpause(v);
+}
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
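The lockless underflow guard in mem_event_vcpu_unpause() above is easy to get wrong, so here is the pattern reduced to a minimal sketch. This is not the Xen code: `dec_if_positive()` is an illustrative name, and the GCC `__sync_val_compare_and_swap` builtin stands in for Xen's `cmpxchg()`.

```c
#include <assert.h>

/* Decrement *counter only if it is positive, retrying when a concurrent
 * writer changes it between the read and the compare-and-swap.  Returns 1
 * on success, 0 if the count was already zero (i.e. an unpause with no
 * matching pause, which the real code logs and ignores). */
static int dec_if_positive(int *counter)
{
    int old, new, prev = *counter;

    do {
        old = prev;
        if ( old <= 0 )
            return 0;                       /* would underflow: refuse */
        new = old - 1;
        prev = __sync_val_compare_and_swap(counter, old, new);
    } while ( prev != old );

    return 1;
}
```

The retry loop is what makes this safe without a lock: if another CPU wins the race, `prev != old` and the loop re-reads the fresh value, so the zero check is re-evaluated before every decrement attempt.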
diff --git a/xen/common/memory.c b/xen/common/memory.c
index c2dd31b..711aaef 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -977,6 +977,68 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
return rc;
}
+void destroy_ring_for_helper(
+ void **_va, struct page_info *page)
+{
+ void *va = *_va;
+
+ if ( va != NULL )
+ {
+ unmap_domain_page_global(va);
+ put_page_and_type(page);
+ *_va = NULL;
+ }
+}
+
+int prepare_ring_for_helper(
+ struct domain *d, unsigned long gmfn, struct page_info **_page,
+ void **_va)
+{
+ struct page_info *page;
+ p2m_type_t p2mt;
+ void *va;
+
+ page = get_page_from_gfn(d, gmfn, &p2mt, P2M_UNSHARE);
+
+#ifdef CONFIG_X86
+ if ( p2m_is_paging(p2mt) )
+ {
+ if ( page )
+ put_page(page);
+ p2m_mem_paging_populate(d, gmfn);
+ return -ENOENT;
+ }
+ if ( p2m_is_shared(p2mt) )
+ {
+ if ( page )
+ put_page(page);
+ return -ENOENT;
+ }
+#endif
+
+ if ( !page )
+ return -EINVAL;
+
+ if ( !get_page_type(page, PGT_writable_page) )
+ {
+ put_page(page);
+ return -EINVAL;
+ }
+
+ va = __map_domain_page_global(page);
+ if ( va == NULL )
+ {
+ put_page_and_type(page);
+ return -ENOMEM;
+ }
+
+ *_va = va;
+ *_page = page;
+
+ return 0;
+}
+
+
/*
* Local variables:
* mode: C
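prepare_ring_for_helper() above acquires three resources in order (a page reference, a writable type reference, a global mapping) and unwinds exactly what it holds on each failure path. This toy model is illustrative only: `toy_prepare()` and its `fail_at` test hook are made up for the sketch, and the counters stand in for the reference counts the real calls manipulate.

```c
#include <assert.h>

/* Counters standing in for the references prepare_ring_for_helper() holds. */
static int page_refs, type_refs, mappings;

/* fail_at selects which acquisition step fails: 1 = get_page_from_gfn(),
 * 2 = get_page_type(PGT_writable_page), 3 = __map_domain_page_global().
 * Each failure releases only what earlier steps acquired, in reverse order. */
static int toy_prepare(int fail_at)
{
    if ( fail_at == 1 )
        return -1;                  /* nothing held yet */
    page_refs++;

    if ( fail_at == 2 )
    {
        page_refs--;                /* put_page() */
        return -1;
    }
    type_refs++;

    if ( fail_at == 3 )
    {
        type_refs--;                /* put_page_and_type() */
        page_refs--;
        return -1;
    }
    mappings++;

    return 0;
}
```

On success the caller owns all three, and destroy_ring_for_helper() releases the mapping and the combined page/type reference in one go.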
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 9fa80a4..7fc3b97 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -301,7 +301,6 @@ struct page_info *get_page_from_gva(struct domain *d, vaddr_t va,
})
static inline void put_gfn(struct domain *d, unsigned long gfn) {}
-static inline void mem_event_cleanup(struct domain *d) {}
static inline int relinquish_shared_pages(struct domain *d)
{
return 0;
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 0ebd478..b07400e 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -226,12 +226,6 @@ int hvm_vcpu_cacheattr_init(struct vcpu *v);
void hvm_vcpu_cacheattr_destroy(struct vcpu *v);
void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip);
-/* Prepare/destroy a ring for a dom0 helper. Helper with talk
- * with Xen on behalf of this hvm domain. */
-int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
- struct page_info **_page, void **_va);
-void destroy_ring_for_helper(void **_va, struct page_info *page);
-
bool_t hvm_send_assist_req(ioreq_t *p);
void hvm_broadcast_assist_req(ioreq_t *p);
diff --git a/xen/include/asm-x86/mem_access.h b/xen/include/asm-x86/mem_access.h
deleted file mode 100644
index 5c7c5fd..0000000
--- a/xen/include/asm-x86/mem_access.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_access.h
- *
- * Memory access support.
- *
- * Copyright (c) 2011 Virtuata, Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-#ifndef _XEN_ASM_MEM_ACCESS_H
-#define _XEN_ASM_MEM_ACCESS_H
-
-int mem_access_memop(unsigned long cmd,
- XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
-int mem_access_send_req(struct domain *d, mem_event_request_t *req);
-
-#endif /* _XEN_ASM_MEM_ACCESS_H */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mem_event.h b/xen/include/asm-x86/mem_event.h
deleted file mode 100644
index ed4481a..0000000
--- a/xen/include/asm-x86/mem_event.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/******************************************************************************
- * include/asm-x86/mem_event.h
- *
- * Common interface for memory event support.
- *
- * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- */
-
-
-#ifndef __MEM_EVENT_H__
-#define __MEM_EVENT_H__
-
-/* Returns whether a ring has been set up */
-bool_t mem_event_check_ring(struct mem_event_domain *med);
-
-/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
- * available space and the caller is a foreign domain. If the guest itself
- * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events.
- *
- * However, the allow_sleep flag can be set to false in cases in which it is ok
- * to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!).
- *
- * In general, you must follow a claim_slot() call with either put_request() or
- * cancel_slot(), both of which are guaranteed to
- * succeed.
- */
-int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
- bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d,
- struct mem_event_domain *med)
-{
- return __mem_event_claim_slot(d, med, 1);
-}
-
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
- struct mem_event_domain *med)
-{
- return __mem_event_claim_slot(d, med, 0);
-}
-
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
-
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
- mem_event_request_t *req);
-
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
- mem_event_response_t *rsp);
-
-int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
- XEN_GUEST_HANDLE_PARAM(void) u_domctl);
-
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
-
-#endif /* __MEM_EVENT_H__ */
-
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index d253117..bafd28c 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -590,8 +590,6 @@ unsigned int domain_clamp_alloc_bitsize(struct domain *d, unsigned int bits);
unsigned long domain_get_maximum_gpfn(struct domain *d);
-void mem_event_cleanup(struct domain *d);
-
extern struct domain *dom_xen, *dom_io, *dom_cow; /* for vmcoreinfo */
/* Definition of an mm lock: spinlock with extra fields for debugging */
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
new file mode 100644
index 0000000..c7dfc48
--- /dev/null
+++ b/xen/include/xen/mem_access.h
@@ -0,0 +1,58 @@
+/******************************************************************************
+ * mem_access.h
+ *
+ * Memory access support.
+ *
+ * Copyright (c) 2011 Virtuata, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _XEN_ASM_MEM_ACCESS_H
+#define _XEN_ASM_MEM_ACCESS_H
+
+#ifdef CONFIG_X86
+
+int mem_access_memop(unsigned long cmd,
+ XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
+int mem_access_send_req(struct domain *d, mem_event_request_t *req);
+
+#else
+
+static inline
+int mem_access_memop(unsigned long cmd,
+ XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
+{
+ return -ENOSYS;
+}
+
+static inline
+int mem_access_send_req(struct domain *d, mem_event_request_t *req)
+{
+ return -ENOSYS;
+}
+
+#endif /* CONFIG_X86 */
+
+#endif /* _XEN_ASM_MEM_ACCESS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
new file mode 100644
index 0000000..ecf9f64
--- /dev/null
+++ b/xen/include/xen/mem_event.h
@@ -0,0 +1,141 @@
+/******************************************************************************
+ * mem_event.h
+ *
+ * Common interface for memory event support.
+ *
+ * Copyright (c) 2009 Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+
+#ifndef __MEM_EVENT_H__
+#define __MEM_EVENT_H__
+
+#ifdef CONFIG_X86
+
+/* Clean up on domain destruction */
+void mem_event_cleanup(struct domain *d);
+
+/* Returns whether a ring has been set up */
+bool_t mem_event_check_ring(struct mem_event_domain *med);
+
+/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
+ * available space and the caller is a foreign domain. If the guest itself
+ * is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
+ * that the ring does not lose future events.
+ *
+ * However, the allow_sleep flag can be set to false in cases in which it is ok
+ * to lose future events, and thus -EBUSY can be returned to guest vcpus
+ * (handle with care!).
+ *
+ * In general, you must follow a claim_slot() call with either put_request() or
+ * cancel_slot(), both of which are guaranteed to
+ * succeed.
+ */
+int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
+ bool_t allow_sleep);
+static inline int mem_event_claim_slot(struct domain *d,
+ struct mem_event_domain *med)
+{
+ return __mem_event_claim_slot(d, med, 1);
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+ struct mem_event_domain *med)
+{
+ return __mem_event_claim_slot(d, med, 0);
+}
+
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med);
+
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+ mem_event_request_t *req);
+
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+ mem_event_response_t *rsp);
+
+int do_mem_event_op(int op, uint32_t domain, void *arg);
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+ XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+
+void mem_event_vcpu_pause(struct vcpu *v);
+void mem_event_vcpu_unpause(struct vcpu *v);
+
+#else
+
+static inline void mem_event_cleanup(struct domain *d) {}
+
+static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
+{
+ return 0;
+}
+
+static inline int mem_event_claim_slot(struct domain *d,
+ struct mem_event_domain *med)
+{
+ return -ENOSYS;
+}
+
+static inline int mem_event_claim_slot_nosleep(struct domain *d,
+ struct mem_event_domain *med)
+{
+ return -ENOSYS;
+}
+
+static inline
+void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
+{}
+
+static inline
+void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
+ mem_event_request_t *req)
+{}
+
+static inline
+int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
+ mem_event_response_t *rsp)
+{
+ return -ENOSYS;
+}
+
+static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
+{
+ return -ENOSYS;
+}
+
+static inline
+int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
+ XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+ return -ENOSYS;
+}
+
+static inline void mem_event_vcpu_pause(struct vcpu *v) {}
+static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+
+#endif /* CONFIG_X86 */
+
+#endif /* __MEM_EVENT_H__ */
+
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index b183189..7c0efc7 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -371,4 +371,10 @@ int guest_remove_page(struct domain *d, unsigned long gmfn);
/* TRUE if the whole page at @mfn is of the requested RAM type(s) above. */
int page_is_ram_type(unsigned long mfn, unsigned long mem_type);
+/* Prepare/destroy a ring for a dom0 helper. The helper will talk
+ * with Xen on behalf of this domain. */
+int prepare_ring_for_helper(struct domain *d, unsigned long gmfn,
+ struct page_info **_page, void **_va);
+void destroy_ring_for_helper(void **_va, struct page_info *page);
+
#endif /* __XEN_MM_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 4575dda..2365fad 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1,4 +1,3 @@
-
#ifndef __SCHED_H__
#define __SCHED_H__
--
2.1.0.rc1
^ permalink raw reply related [flat|nested] 48+ messages in thread
* Re: [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common.
2014-08-27 14:06 ` [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-08-27 14:17 ` Julien Grall
2014-08-27 14:57 ` Tamas K Lengyel
2014-08-28 10:22 ` Tim Deegan
1 sibling, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-08-27 14:17 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra
Hello Tamas,
On 27/08/14 10:06, Tamas K Lengyel wrote:
> In preparation to add support for ARM LPAE mem_event, relocate mem_access,
> mem_event and auxiliary functions into common Xen code.
> This patch makes no functional changes to the X86 side, for ARM mem_event
> and mem_access functions are just defined as placeholder stubs, and are
> actually enabled later in the series.
>
> Edits that are only header path adjustments:
> xen/arch/x86/domctl.c
> xen/arch/x86/mm/hap/nested_ept.c
> xen/arch/x86/mm/hap/nested_hap.c
> xen/arch/x86/mm/mem_paging.c
> xen/arch/x86/mm/mem_sharing.c
> xen/arch/x86/mm/p2m-pod.c
> xen/arch/x86/mm/p2m-pt.c
> xen/arch/x86/mm/p2m.c
> xen/arch/x86/x86_64/compat/mm.c
> xen/arch/x86/x86_64/mm.c
>
> Makefile adjustments for new/removed code:
> xen/common/Makefile
> xen/arch/x86/mm/Makefile
>
> Relocated prepare_ring_for_helper and destroy_ring_for_helper functions:
> xen/include/xen/mm.h
> xen/common/memory.c
> xen/include/asm-x86/hvm/hvm.h
> xen/arch/x86/hvm/hvm.c
>
> Code movement of mem_event and mem_access with required ifdef wrappers
> added to only compile with CONFIG_X86:
As you have created new files, why not add a HAS_MEM_ACCESS option
(as we did for the device tree and kexec)?
It would avoid most of your "ifdef CONFIG_X86" in common code and compile
the files only when required.
Regards,
--
Julien Grall
* Re: [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common.
2014-08-27 14:17 ` Julien Grall
@ 2014-08-27 14:57 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:57 UTC (permalink / raw)
To: Julien Grall
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Daniel De Graaf
[-- Attachment #1.1: Type: text/plain, Size: 491 bytes --]
> Code movement of mem_event and mem_access with required ifdef wrappers
>> added to only compile with CONFIG_X86:
>>
>
> As you have created new files. Why not adding a HAS_MEM_ACCESS option (as
> we did for the device tree and kexec)?
>
> It will avoid most of you "ifdef CONFIG_X86" in common code and compile
> files only when it required.
>
> Regards,
>
> --
> Julien Grall
>
>
That would make for cleaner code, for sure; you are right, I will switch to
that in the next version.
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common.
2014-08-27 14:06 ` [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
2014-08-27 14:17 ` Julien Grall
@ 2014-08-28 10:22 ` Tim Deegan
1 sibling, 0 replies; 48+ messages in thread
From: Tim Deegan @ 2014-08-28 10:22 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: ian.campbell, ian.jackson, xen-devel, stefano.stabellini, andres,
jbeulich, dgdegra
At 16:06 +0200 on 27 Aug (1409151989), Tamas K Lengyel wrote:
> In preparation to add support for ARM LPAE mem_event, relocate mem_access,
> mem_event and auxiliary functions into common Xen code.
> This patch makes no functional changes to the X86 side, for ARM mem_event
> and mem_access functions are just defined as placeholder stubs, and are
> actually enabled later in the series.
>
> Edits that are only header path adjustments:
> xen/arch/x86/domctl.c
> xen/arch/x86/mm/hap/nested_ept.c
> xen/arch/x86/mm/hap/nested_hap.c
> xen/arch/x86/mm/mem_paging.c
> xen/arch/x86/mm/mem_sharing.c
> xen/arch/x86/mm/p2m-pod.c
> xen/arch/x86/mm/p2m-pt.c
> xen/arch/x86/mm/p2m.c
> xen/arch/x86/x86_64/compat/mm.c
> xen/arch/x86/x86_64/mm.c
>
> Makefile adjustments for new/removed code:
> xen/common/Makefile
> xen/arch/x86/mm/Makefile
>
> Relocated prepare_ring_for_helper and destroy_ring_for_helper functions:
> xen/include/xen/mm.h
> xen/common/memory.c
> xen/include/asm-x86/hvm/hvm.h
> xen/arch/x86/hvm/hvm.c
>
> Code movement of mem_event and mem_access with required ifdef wrappers
> added to only compile with CONFIG_X86:
> xen/arch/x86/mm/mem_access.c -> xen/common/mem_access.c
> xen/arch/x86/mm/mem_event.c -> xen/common/mem_event.c
> xen/include/asm-x86/mem_access.h -> xen/include/xen/mem_access.h
> xen/include/asm-x86/mem_event.h -> xen/include/xen/mem_event.h
>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Tim Deegan <tim@xen.org>
* [PATCH RFC v2 02/12] xen/mem_event: Clean out superflous white-spaces
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
2014-08-27 14:06 ` [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-28 10:22 ` Tim Deegan
2014-08-27 14:06 ` [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
` (10 subsequent siblings)
12 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Clean the mem_event header as well.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/common/mem_event.c | 20 ++++++++++----------
xen/include/xen/mem_event.h | 8 ++++----
2 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 604f94f..e22b78e 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -58,7 +58,7 @@ static int mem_event_enable(
if ( med->ring_page )
return -EBUSY;
- /* The parameter defaults to zero, and it should be
+ /* The parameter defaults to zero, and it should be
* set to something */
if ( ring_gfn == 0 )
return -ENOSYS;
@@ -66,7 +66,7 @@ static int mem_event_enable(
mem_event_ring_lock_init(med);
mem_event_ring_lock(med);
- rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
+ rc = prepare_ring_for_helper(d, ring_gfn, &med->ring_pg_struct,
&med->ring_page);
if ( rc < 0 )
goto err;
@@ -98,7 +98,7 @@ static int mem_event_enable(
return 0;
err:
- destroy_ring_for_helper(&med->ring_page,
+ destroy_ring_for_helper(&med->ring_page,
med->ring_pg_struct);
mem_event_ring_unlock(med);
@@ -227,7 +227,7 @@ static int mem_event_disable(struct domain *d, struct mem_event_domain *med)
}
}
- destroy_ring_for_helper(&med->ring_page,
+ destroy_ring_for_helper(&med->ring_page,
med->ring_pg_struct);
mem_event_ring_unlock(med);
}
@@ -480,7 +480,7 @@ void mem_event_cleanup(struct domain *d)
* the disable routine to complete. It will also drop
* all domain refs the wait-queued vcpus are holding.
* Finally, because this code path involves previously
- * pausing the domain (domain_kill), unpausing the
+ * pausing the domain (domain_kill), unpausing the
* vcpus causes no harm. */
destroy_waitqueue_head(&d->mem_event->paging.wq);
(void)mem_event_disable(d, &d->mem_event->paging);
@@ -560,7 +560,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
if ( p2m->pod.entry_count )
break;
- rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
+ rc = mem_event_enable(d, mec, med, _VPF_mem_paging,
HVM_PARAM_PAGING_RING_PFN,
mem_paging_notification);
}
@@ -580,7 +580,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
}
break;
- case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
+ case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
{
struct mem_event_domain *med = &d->mem_event->access;
rc = -EINVAL;
@@ -598,7 +598,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
if ( !cpu_has_vmx )
break;
- rc = mem_event_enable(d, mec, med, _VPF_mem_access,
+ rc = mem_event_enable(d, mec, med, _VPF_mem_access,
HVM_PARAM_ACCESS_RING_PFN,
mem_access_notification);
}
@@ -618,7 +618,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
}
break;
- case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
+ case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
{
struct mem_event_domain *med = &d->mem_event->share;
rc = -EINVAL;
@@ -637,7 +637,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
if ( !hap_enabled(d) )
break;
- rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
+ rc = mem_event_enable(d, mec, med, _VPF_mem_sharing,
HVM_PARAM_SHARING_RING_PFN,
mem_sharing_notification);
}
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
index ecf9f64..774909e 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/mem_event.h
@@ -35,19 +35,19 @@ bool_t mem_event_check_ring(struct mem_event_domain *med);
/* Returns 0 on success, -ENOSYS if there is no ring, -EBUSY if there is no
* available space and the caller is a foreign domain. If the guest itself
* is the caller, -EBUSY is avoided by sleeping on a wait queue to ensure
- * that the ring does not lose future events.
+ * that the ring does not lose future events.
*
* However, the allow_sleep flag can be set to false in cases in which it is ok
* to lose future events, and thus -EBUSY can be returned to guest vcpus
- * (handle with care!).
+ * (handle with care!).
*
* In general, you must follow a claim_slot() call with either put_request() or
* cancel_slot(), both of which are guaranteed to
- * succeed.
+ * succeed.
*/
int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
bool_t allow_sleep);
-static inline int mem_event_claim_slot(struct domain *d,
+static inline int mem_event_claim_slot(struct domain *d,
struct mem_event_domain *med)
{
return __mem_event_claim_slot(d, med, 1);
--
2.1.0.rc1
* [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
2014-08-27 14:06 ` [PATCH RFC v2 01/12] xen: Relocate mem_access and mem_event into common Tamas K Lengyel
2014-08-27 14:06 ` [PATCH RFC v2 02/12] xen/mem_event: Clean out superflous white-spaces Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 16:39 ` Julien Grall
2014-08-27 17:02 ` Andres Lagar Cavilla
2014-08-27 14:06 ` [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
` (9 subsequent siblings)
12 siblings, 2 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
A faulty tool stack can brick a debug hypervisor. Unpleasant while dev/test.
Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/common/mem_event.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index e22b78e..8be32e1 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
if ( current->domain != d )
{
req->flags |= MEM_EVENT_FLAG_FOREIGN;
- ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
+#ifndef NDEBUG
+ if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
+ printk(XENLOG_G_WARNING
+ "VCPU was not paused.\n");
+#endif
}
mem_event_ring_lock(med);
--
2.1.0.rc1
* Re: [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds
2014-08-27 14:06 ` [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
@ 2014-08-27 16:39 ` Julien Grall
2014-08-27 17:00 ` Tamas K Lengyel
2014-08-27 17:02 ` Andres Lagar Cavilla
1 sibling, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-08-27 16:39 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra
Hello Tamas,
On 27/08/14 10:06, Tamas K Lengyel wrote:
> A faulty tool stack can brick a debug hypervisor. Unpleasant while dev/test.
>
> Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
> xen/common/mem_event.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
> index e22b78e..8be32e1 100644
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
> if ( current->domain != d )
> {
> req->flags |= MEM_EVENT_FLAG_FOREIGN;
> - ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
> +#ifndef NDEBUG
> + if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
> + printk(XENLOG_G_WARNING
> + "VCPU was not paused.\n");
NIT: Can't you write the message on the previous line?
Regards,
--
Julien Grall
2014-08-27 16:39 ` Julien Grall
@ 2014-08-27 17:00 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 17:00 UTC (permalink / raw)
To: Julien Grall
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Daniel De Graaf, Tamas K Lengyel
On Wed, Aug 27, 2014 at 6:39 PM, Julien Grall <julien.grall@linaro.org>
wrote:
> Hello Tamas,
>
>
> On 27/08/14 10:06, Tamas K Lengyel wrote:
>
>> A faulty tool stack can brick a debug hypervisor. Unpleasant while
>> dev/test.
>>
>> Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>> ---
>> xen/common/mem_event.c | 6 +++++-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
>> index e22b78e..8be32e1 100644
>> --- a/xen/common/mem_event.c
>> +++ b/xen/common/mem_event.c
>> @@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
>> if ( current->domain != d )
>> {
>> req->flags |= MEM_EVENT_FLAG_FOREIGN;
>> - ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
>> +#ifndef NDEBUG
>> + if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
>> + printk(XENLOG_G_WARNING
>> + "VCPU was not paused.\n");
>>
>
> NIT: Can't you write the message on the previous line?
>
> Regards,
>
> --
> Julien Grall
>
>
Ah, yes, that's just an artifact as I had a longer message printed first
and didn't adjust the line after I shortened it.
Tamas
* Re: [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds
2014-08-27 14:06 ` [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
2014-08-27 16:39 ` Julien Grall
@ 2014-08-27 17:02 ` Andres Lagar Cavilla
2014-08-27 21:26 ` Tamas K Lengyel
2014-08-28 6:36 ` Jan Beulich
1 sibling, 2 replies; 48+ messages in thread
From: Andres Lagar Cavilla @ 2014-08-27 17:02 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
stefano.stabellini, Jan Beulich, dgdegra
On Wed, Aug 27, 2014 at 7:06 AM, Tamas K Lengyel <tklengyel@sec.in.tum.de>
wrote:
> A faulty tool stack can brick a debug hypervisor. Unpleasant while
> dev/test.
>
> Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
> xen/common/mem_event.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
> index e22b78e..8be32e1 100644
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
> if ( current->domain != d )
> {
> req->flags |= MEM_EVENT_FLAG_FOREIGN;
> - ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
> +#ifndef NDEBUG
> + if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
> + printk(XENLOG_G_WARNING
> + "VCPU was not paused.\n");
>
1. use gdprintk
2. enclose only the gdprintk in #ifdef
3. if the flags contain the improper VCPU_PAUSED value, also clear that
value from flags (regardless from NDEBUG)
Thanks
Andres
+#endif
> }
>
> mem_event_ring_lock(med);
> --
> 2.1.0.rc1
>
>
* Re: [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds
2014-08-27 17:02 ` Andres Lagar Cavilla
@ 2014-08-27 21:26 ` Tamas K Lengyel
2014-08-28 6:36 ` Jan Beulich
1 sibling, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 21:26 UTC (permalink / raw)
To: Andres Lagar Cavilla
Cc: Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
Stefano Stabellini, Jan Beulich, Daniel De Graaf, Tamas K Lengyel
On Wed, Aug 27, 2014 at 7:02 PM, Andres Lagar Cavilla <
andres@lagarcavilla.org> wrote:
> On Wed, Aug 27, 2014 at 7:06 AM, Tamas K Lengyel <tklengyel@sec.in.tum.de>
> wrote:
>
>> A faulty tool stack can brick a debug hypervisor. Unpleasant while
>> dev/test.
>>
>> Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>> ---
>> xen/common/mem_event.c | 6 +++++-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
>> index e22b78e..8be32e1 100644
>> --- a/xen/common/mem_event.c
>> +++ b/xen/common/mem_event.c
>> @@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
>> if ( current->domain != d )
>> {
>> req->flags |= MEM_EVENT_FLAG_FOREIGN;
>> - ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
>> +#ifndef NDEBUG
>> + if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
>> + printk(XENLOG_G_WARNING
>> + "VCPU was not paused.\n");
>>
>
> 1. use gdprintk
>
Ack.
2. enclose only the gdprintk in #ifdef
> 3. if the flags contain the improper VCPU_PAUSED value, also clear that
> value from flags (regardless from NDEBUG)
>
The flag is already cleared for this bit?
>
> Thanks
> Andres
>
> +#endif
>> }
>>
>> mem_event_ring_lock(med);
>> --
>> 2.1.0.rc1
>>
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>
* Re: [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds
2014-08-27 17:02 ` Andres Lagar Cavilla
2014-08-27 21:26 ` Tamas K Lengyel
@ 2014-08-28 6:36 ` Jan Beulich
2014-08-29 4:20 ` Andres Lagar Cavilla
1 sibling, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-28 6:36 UTC (permalink / raw)
To: Andres Lagar Cavilla, Tamas K Lengyel
Cc: Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
stefano.stabellini, dgdegra
>>> On 27.08.14 at 19:02, <andres@lagarcavilla.org> wrote:
> On Wed, Aug 27, 2014 at 7:06 AM, Tamas K Lengyel <tklengyel@sec.in.tum.de>
> wrote:
>
>> A faulty tool stack can brick a debug hypervisor. Unpleasant while
>> dev/test.
>>
>> Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>> ---
>> xen/common/mem_event.c | 6 +++++-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
>> index e22b78e..8be32e1 100644
>> --- a/xen/common/mem_event.c
>> +++ b/xen/common/mem_event.c
>> @@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
>> if ( current->domain != d )
>> {
>> req->flags |= MEM_EVENT_FLAG_FOREIGN;
>> - ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
>> +#ifndef NDEBUG
>> + if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
>> + printk(XENLOG_G_WARNING
>> + "VCPU was not paused.\n");
>>
>
> 1. use gdprintk
Maybe, but then the message (which now will include current
domain/vcpu) should also include subject domain and vcpu; perhaps
it should have done so from the beginning to make it half way useful.
> 2. enclose only the gdprintk in #ifdef
> 3. if the flags contain the improper VCPU_PAUSED value, also clear that
> value from flags (regardless from NDEBUG)
How would fixing (actually setting rather than clearing) that flag
help (as this wouldn't put the vcpu in the intended state)?
Jan
* Re: [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds
2014-08-28 6:36 ` Jan Beulich
@ 2014-08-29 4:20 ` Andres Lagar Cavilla
0 siblings, 0 replies; 48+ messages in thread
From: Andres Lagar Cavilla @ 2014-08-29 4:20 UTC (permalink / raw)
To: Jan Beulich
Cc: Ian Campbell, Tim Deegan, Ian Jackson, Xen-devel,
stefano.stabellini, dgdegra, Tamas K Lengyel
On Wed, Aug 27, 2014 at 11:36 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 27.08.14 at 19:02, <andres@lagarcavilla.org> wrote:
> > On Wed, Aug 27, 2014 at 7:06 AM, Tamas K Lengyel <
> tklengyel@sec.in.tum.de>
> > wrote:
> >
> >> A faulty tool stack can brick a debug hypervisor. Unpleasant while
> >> dev/test.
> >>
> >> Suggested-by: Andres Lagar Cavilla <andres@lagarcavilla.org>
> >> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> >> ---
> >> xen/common/mem_event.c | 6 +++++-
> >> 1 file changed, 5 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
> >> index e22b78e..8be32e1 100644
> >> --- a/xen/common/mem_event.c
> >> +++ b/xen/common/mem_event.c
> >> @@ -279,7 +279,11 @@ void mem_event_put_request(struct domain *d,
> >> if ( current->domain != d )
> >> {
> >> req->flags |= MEM_EVENT_FLAG_FOREIGN;
> >> - ASSERT( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) );
> >> +#ifndef NDEBUG
> >> + if ( !(req->flags & MEM_EVENT_FLAG_VCPU_PAUSED) )
> >> + printk(XENLOG_G_WARNING
> >> + "VCPU was not paused.\n");
> >>
> >
> > 1. use gdprintk
>
> Maybe, but then the message (which now will include current
> domain/vcpu) should also include subject domain and vcpu; perhaps
> it should have done so from the beginning to make it half way useful.
>
Yes.
>
> > 2. enclose only the gdprintk in #ifdef
> > 3. if the flags contain the improper VCPU_PAUSED value, also clear that
> > value from flags (regardless from NDEBUG)
>
> How would fixing (actually setting rather than clearing) that flag
> help (as this wouldn't put the vcpu in the intended state)?
>
It wouldn't (setting the flag, indeed). I got it wrong.
Andres
>
> Jan
>
>
* [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (2 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 03/12] xen/mem_event: Relax error condition on debug builds Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 15:19 ` Jan Beulich
2014-08-27 14:06 ` [PATCH RFC v2 05/12] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
` (8 subsequent siblings)
12 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Move architecture specific sanity checks into its own function
which is called when enabling mem_event.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/common/mem_event.c | 25 ++++++++++++++++++-------
1 file changed, 18 insertions(+), 7 deletions(-)
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 8be32e1..8bf0cf1 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
return mem_event_grab_slot(med, (current->domain != d));
}
+static inline bool_t mem_event_sanity_check(struct domain *d)
+{
+ /* Only HAP is supported */
+ if ( !hap_enabled(d) )
+ return 0;
+
+ /* Currently only EPT is supported */
+ if ( !cpu_has_vmx )
+ return 0;
+
+ return 1;
+}
+
/* Registered with Xen-bound event channel for incoming notifications. */
static void mem_paging_notification(struct vcpu *v, unsigned int port)
{
@@ -558,6 +571,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
rc = -EMLINK;
if ( unlikely(need_iommu(d)) )
break;
+ }
rc = -EXDEV;
/* Disallow paging in a PoD guest */
@@ -593,14 +607,11 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
{
case XEN_DOMCTL_MEM_EVENT_OP_ACCESS_ENABLE:
{
- rc = -ENODEV;
- /* Only HAP is supported */
- if ( !hap_enabled(d) )
- break;
-
- /* Currently only EPT is supported */
- if ( !cpu_has_vmx )
+ if ( !mem_event_sanity_check(d) )
+ {
+ rc = -ENODEV;
break;
+ }
rc = mem_event_enable(d, mec, med, _VPF_mem_access,
HVM_PARAM_ACCESS_RING_PFN,
--
2.1.0.rc1
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-27 14:06 ` [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
@ 2014-08-27 15:19 ` Jan Beulich
2014-08-27 17:17 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-27 15:19 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: ian.campbell, tim, ian.jackson, xen-devel, stefano.stabellini,
andres, dgdegra
>>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
> return mem_event_grab_slot(med, (current->domain != d));
> }
>
> +static inline bool_t mem_event_sanity_check(struct domain *d)
> +{
> + /* Only HAP is supported */
> + if ( !hap_enabled(d) )
> + return 0;
> +
> + /* Currently only EPT is supported */
> + if ( !cpu_has_vmx )
> + return 0;
> +
> + return 1;
> +}
So what does it buy us to have this in a separate function, but
still in the same common file?
> @@ -558,6 +571,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
> rc = -EMLINK;
> if ( unlikely(need_iommu(d)) )
> break;
> + }
>
> rc = -EXDEV;
> /* Disallow paging in a PoD guest */
I have a really hard time seeing how this can be a correct change -
does this even build (and if it does, do things build with only patches
1-3 in place)?
Jan
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-27 15:19 ` Jan Beulich
@ 2014-08-27 17:17 ` Tamas K Lengyel
2014-08-27 21:54 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 17:17 UTC (permalink / raw)
To: Jan Beulich
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
Tamas K Lengyel
On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> > --- a/xen/common/mem_event.c
> > +++ b/xen/common/mem_event.c
> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d, struct
> mem_event_domain *med,
> > return mem_event_grab_slot(med, (current->domain != d));
> > }
> >
> > +static inline bool_t mem_event_sanity_check(struct domain *d)
> > +{
> > + /* Only HAP is supported */
> > + if ( !hap_enabled(d) )
> > + return 0;
> > +
> > + /* Currently only EPT is supported */
> > + if ( !cpu_has_vmx )
> > + return 0;
> > +
> > + return 1;
> > +}
>
> So what does it buy us to have this in a separate function, but
> still in the same common file?
>
This patch really just sets up the ground for ARM where these checks are
not required and will just return 1.
>
> > @@ -558,6 +571,7 @@ int mem_event_domctl(struct domain *d,
> xen_domctl_mem_event_op_t *mec,
> > rc = -EMLINK;
> > if ( unlikely(need_iommu(d)) )
> > break;
> > + }
> >
> > rc = -EXDEV;
> > /* Disallow paging in a PoD guest */
>
> I have a really hard time seeing how this can be a correct change -
> does this even build (and if it does, do things build with only patches
> 1-3 in place)?
>
> Jan
>
This certainly looks out of place. I need to double-check, but it might just
be a typo that crept into the patch.
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-27 17:17 ` Tamas K Lengyel
@ 2014-08-27 21:54 ` Tamas K Lengyel
2014-08-28 6:38 ` Jan Beulich
0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 21:54 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xen.org
> On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
>> > --- a/xen/common/mem_event.c
>> > +++ b/xen/common/mem_event.c
>> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d,
>> struct mem_event_domain *med,
>> > return mem_event_grab_slot(med, (current->domain != d));
>> > }
>> >
>> > +static inline bool_t mem_event_sanity_check(struct domain *d)
>> > +{
>> > + /* Only HAP is supported */
>> > + if ( !hap_enabled(d) )
>> > + return 0;
>> > +
>> > + /* Currently only EPT is supported */
>> > + if ( !cpu_has_vmx )
>> > + return 0;
>> > +
>> > + return 1;
>> > +}
>>
>> So what does it buy us to have this in a separate function, but
>> still in the same common file?
>>
>
> This patch really just sets up the ground for ARM where these checks are
> not required and will just return 1.
>
In the next series I'll actually relocate this function into architecture
specific p2m.h and rename it p2m_mem_event_sanity_check. Same for the
mem_access sanity check function.
>
>
>>
>> > @@ -558,6 +571,7 @@ int mem_event_domctl(struct domain *d,
>> xen_domctl_mem_event_op_t *mec,
>> > rc = -EMLINK;
>> > if ( unlikely(need_iommu(d)) )
>> > break;
>> > + }
>> >
>> > rc = -EXDEV;
>> > /* Disallow paging in a PoD guest */
>>
>> I have a really hard time seeing how this can be a correct change -
>> does this even build (and if it does, do things build with only patches
>> 1-3 in place)?
>>
>> Jan
>>
>
> This certainly looks out of place. I need to double-check, but it might
> just be a typo that crept into the patch.
>
> Tamas
>
Indeed, just a typo, and since I was only compile-testing on ARM it slipped
through the cracks. Going forward I'll compile-test on both architectures to
avoid issues like this.
Tamas
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-27 21:54 ` Tamas K Lengyel
@ 2014-08-28 6:38 ` Jan Beulich
2014-08-28 8:40 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-28 6:38 UTC (permalink / raw)
To: Tamas K Lengyel; +Cc: xen-devel@lists.xen.org
>>> On 27.08.14 at 23:54, <tamas.lengyel@zentific.com> wrote:
>> On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>>> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
>>> > --- a/xen/common/mem_event.c
>>> > +++ b/xen/common/mem_event.c
>>> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d,
>>> struct mem_event_domain *med,
>>> > return mem_event_grab_slot(med, (current->domain != d));
>>> > }
>>> >
>>> > +static inline bool_t mem_event_sanity_check(struct domain *d)
>>> > +{
>>> > + /* Only HAP is supported */
>>> > + if ( !hap_enabled(d) )
>>> > + return 0;
>>> > +
>>> > + /* Currently only EPT is supported */
>>> > + if ( !cpu_has_vmx )
>>> > + return 0;
>>> > +
>>> > + return 1;
>>> > +}
>>>
>>> So what does it buy us to have this in a separate function, but
>>> still in the same common file?
>>>
>>
>> This patch really just sets up the ground for ARM where these checks are
>> not required and will just return 1.
>>
>
> In the next series I'll actually relocate this function into architecture
> specific p2m.h and rename it p2m_mem_event_sanity_check. Same for the
> mem_access sanity check function.
Sounds like suboptimal patch splitting then...
Jan
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-28 6:38 ` Jan Beulich
@ 2014-08-28 8:40 ` Tamas K Lengyel
2014-08-28 8:46 ` Jan Beulich
0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-28 8:40 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xen.org
On Thu, Aug 28, 2014 at 8:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 27.08.14 at 23:54, <tamas.lengyel@zentific.com> wrote:
> >> On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@suse.com>
> wrote:
> >>
> >>> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> >>> > --- a/xen/common/mem_event.c
> >>> > +++ b/xen/common/mem_event.c
> >>> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d,
> >>> struct mem_event_domain *med,
> >>> > return mem_event_grab_slot(med, (current->domain != d));
> >>> > }
> >>> >
> >>> > +static inline bool_t mem_event_sanity_check(struct domain *d)
> >>> > +{
> >>> > + /* Only HAP is supported */
> >>> > + if ( !hap_enabled(d) )
> >>> > + return 0;
> >>> > +
> >>> > + /* Currently only EPT is supported */
> >>> > + if ( !cpu_has_vmx )
> >>> > + return 0;
> >>> > +
> >>> > + return 1;
> >>> > +}
> >>>
> >>> So what does it buy us to have this in a separate function, but
> >>> still in the same common file?
> >>>
> >>
> >> This patch really just sets up the ground for ARM where these checks are
> >> not required and will just return 1.
> >>
> >
> > In the next series I'll actually relocate this function into architecture
> > specific p2m.h and rename it p2m_mem_event_sanity_check. Same for the
> > mem_access sanity check function.
>
> Sounds like suboptimal patch splitting then...
>
> Jan
>
Suboptimal in what sense? From a performance perspective it has no impact
as it is static inline. I could add the ARM side here as well, but the
compilation of this code is not turned on for ARM until the end of the
series.
Tamas
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-28 8:40 ` Tamas K Lengyel
@ 2014-08-28 8:46 ` Jan Beulich
2014-08-28 8:52 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-28 8:46 UTC (permalink / raw)
To: Tamas K Lengyel; +Cc: xen-devel@lists.xen.org
>>> On 28.08.14 at 10:40, <tamas.lengyel@zentific.com> wrote:
> On Thu, Aug 28, 2014 at 8:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 27.08.14 at 23:54, <tamas.lengyel@zentific.com> wrote:
>> >> On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@suse.com>
>> wrote:
>> >>
>> >>> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
>> >>> > --- a/xen/common/mem_event.c
>> >>> > +++ b/xen/common/mem_event.c
>> >>> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d,
>> >>> struct mem_event_domain *med,
>> >>> > return mem_event_grab_slot(med, (current->domain != d));
>> >>> > }
>> >>> >
>> >>> > +static inline bool_t mem_event_sanity_check(struct domain *d)
>> >>> > +{
>> >>> > + /* Only HAP is supported */
>> >>> > + if ( !hap_enabled(d) )
>> >>> > + return 0;
>> >>> > +
>> >>> > + /* Currently only EPT is supported */
>> >>> > + if ( !cpu_has_vmx )
>> >>> > + return 0;
>> >>> > +
>> >>> > + return 1;
>> >>> > +}
>> >>>
>> >>> So what does it buy us to have this in a separate function, but
>> >>> still in the same common file?
>> >>>
>> >>
>> >> This patch really just sets up the ground for ARM where these checks are
>> >> not required and will just return 1.
>> >>
>> >
>> > In the next series I'll actually relocate this function into architecture
>> > specific p2m.h and rename it p2m_mem_event_sanity_check. Same for the
>> > mem_access sanity check function.
>>
>> Sounds like suboptimal patch splitting then...
>
> Suboptimal in what sense?
Putting the function in one place first, and then immediately moving
it elsewhere. Why not put it in the final place right away?
Jan
* Re: [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks
2014-08-28 8:46 ` Jan Beulich
@ 2014-08-28 8:52 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-28 8:52 UTC (permalink / raw)
To: Jan Beulich; +Cc: xen-devel@lists.xen.org
On Thu, Aug 28, 2014 at 10:46 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 28.08.14 at 10:40, <tamas.lengyel@zentific.com> wrote:
> > On Thu, Aug 28, 2014 at 8:38 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 27.08.14 at 23:54, <tamas.lengyel@zentific.com> wrote:
> >> >> On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@suse.com>
> >> wrote:
> >> >>
> >> >>> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> >> >>> > --- a/xen/common/mem_event.c
> >> >>> > +++ b/xen/common/mem_event.c
> >> >>> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d,
> >> >>> struct mem_event_domain *med,
> >> >>> > return mem_event_grab_slot(med, (current->domain != d));
> >> >>> > }
> >> >>> >
> >> >>> > +static inline bool_t mem_event_sanity_check(struct domain *d)
> >> >>> > +{
> >> >>> > + /* Only HAP is supported */
> >> >>> > + if ( !hap_enabled(d) )
> >> >>> > + return 0;
> >> >>> > +
> >> >>> > + /* Currently only EPT is supported */
> >> >>> > + if ( !cpu_has_vmx )
> >> >>> > + return 0;
> >> >>> > +
> >> >>> > + return 1;
> >> >>> > +}
> >> >>>
> >> >>> So what does it buy us to have this in a separate function, but
> >> >>> still in the same common file?
> >> >>>
> >> >>
> >> >> This patch really just sets up the ground for ARM where these checks
> are
> >> >> not required and will just return 1.
> >> >>
> >> >
> >> > In the next series I'll actually relocate this function into
> architecture
> >> > specific p2m.h and rename it p2m_mem_event_sanity_check. Same for the
> >> > mem_access sanity check function.
> >>
> >> Sounds like suboptimal patch splitting then...
> >
> > Suboptimal in what sense?
>
> Putting the function in one place first, and then immediately moving
> it elsewhere. Why not put it in the final place right away?
>
> Jan
>
Oh, sorry, that's what I meant: this patch will move it into p2m.h in the
next iteration.
Tamas
* [PATCH RFC v2 05/12] xen/mem_access: Abstract architecture specific sanity check
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (3 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 14:06 ` [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM Tamas K Lengyel
` (7 subsequent siblings)
12 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/common/mem_access.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 84acdf9..2bb3171 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -30,6 +30,12 @@
#include <xsm/xsm.h>
#ifdef CONFIG_X86
+static inline bool_t mem_access_sanity_check(struct domain *d)
+{
+ if ( !is_hvm_domain(d) )
+ return 0;
+ return 1;
+}
int mem_access_memop(unsigned long cmd,
XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
@@ -45,9 +51,11 @@ int mem_access_memop(unsigned long cmd,
if ( rc )
return rc;
- rc = -EINVAL;
- if ( !is_hvm_domain(d) )
+ if ( !mem_access_sanity_check(d) )
+ {
+ rc = -EINVAL;
goto out;
+ }
rc = xsm_mem_event_op(XSM_DM_PRIV, d, XENMEM_access_op);
if ( rc )
--
2.1.0.rc1
* [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (4 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 05/12] xen/mem_access: Abstract architecture specific sanity check Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-29 20:43 ` Julien Grall
2014-09-04 0:12 ` Stefano Stabellini
2014-08-27 14:06 ` [PATCH RFC v2 07/12] xen/arm: p2m type definitions and changes Tamas K Lengyel
` (6 subsequent siblings)
12 siblings, 2 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
tools/libxc/xc_dom_arm.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index 9b31b1f..13e881e 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -26,9 +26,10 @@
#include "xg_private.h"
#include "xc_dom.h"
-#define NR_MAGIC_PAGES 2
+#define NR_MAGIC_PAGES 3
#define CONSOLE_PFN_OFFSET 0
#define XENSTORE_PFN_OFFSET 1
+#define MEMACCESS_PFN_OFFSET 2
#define LPAE_SHIFT 9
@@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
+ xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
dom->console_pfn);
xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
dom->xenstore_pfn);
+ xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
+ base + MEMACCESS_PFN_OFFSET);
/* allocated by toolstack */
xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
dom->console_evtchn);
--
2.1.0.rc1
* Re: [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM
2014-08-27 14:06 ` [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM Tamas K Lengyel
@ 2014-08-29 20:43 ` Julien Grall
2014-09-04 0:12 ` Stefano Stabellini
1 sibling, 0 replies; 48+ messages in thread
From: Julien Grall @ 2014-08-29 20:43 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra
Hello Tamas,
On 27/08/14 10:06, Tamas K Lengyel wrote:
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
Regards,
> ---
> tools/libxc/xc_dom_arm.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
> index 9b31b1f..13e881e 100644
> --- a/tools/libxc/xc_dom_arm.c
> +++ b/tools/libxc/xc_dom_arm.c
> @@ -26,9 +26,10 @@
> #include "xg_private.h"
> #include "xc_dom.h"
>
> -#define NR_MAGIC_PAGES 2
> +#define NR_MAGIC_PAGES 3
> #define CONSOLE_PFN_OFFSET 0
> #define XENSTORE_PFN_OFFSET 1
> +#define MEMACCESS_PFN_OFFSET 2
>
> #define LPAE_SHIFT 9
>
> @@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
>
> xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
> xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
> + xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
> xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
> dom->console_pfn);
> xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
> dom->xenstore_pfn);
> + xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
> + base + MEMACCESS_PFN_OFFSET);
> /* allocated by toolstack */
> xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
> dom->console_evtchn);
>
--
Julien Grall
* Re: [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM
2014-08-27 14:06 ` [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM Tamas K Lengyel
2014-08-29 20:43 ` Julien Grall
@ 2014-09-04 0:12 ` Stefano Stabellini
1 sibling, 0 replies; 48+ messages in thread
From: Stefano Stabellini @ 2014-09-04 0:12 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: ian.campbell, tim, ian.jackson, xen-devel, stefano.stabellini,
andres, jbeulich, dgdegra
On Wed, 27 Aug 2014, Tamas K Lengyel wrote:
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> tools/libxc/xc_dom_arm.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
> index 9b31b1f..13e881e 100644
> --- a/tools/libxc/xc_dom_arm.c
> +++ b/tools/libxc/xc_dom_arm.c
> @@ -26,9 +26,10 @@
> #include "xg_private.h"
> #include "xc_dom.h"
>
> -#define NR_MAGIC_PAGES 2
> +#define NR_MAGIC_PAGES 3
> #define CONSOLE_PFN_OFFSET 0
> #define XENSTORE_PFN_OFFSET 1
> +#define MEMACCESS_PFN_OFFSET 2
>
> #define LPAE_SHIFT 9
>
> @@ -87,10 +88,13 @@ static int alloc_magic_pages(struct xc_dom_image *dom)
>
> xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
> xc_clear_domain_page(dom->xch, dom->guest_domid, dom->xenstore_pfn);
> + xc_clear_domain_page(dom->xch, dom->guest_domid, base + MEMACCESS_PFN_OFFSET);
> xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_PFN,
> dom->console_pfn);
> xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_STORE_PFN,
> dom->xenstore_pfn);
> + xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_ACCESS_RING_PFN,
> + base + MEMACCESS_PFN_OFFSET);
> /* allocated by toolstack */
> xc_hvm_param_set(dom->xch, dom->guest_domid, HVM_PARAM_CONSOLE_EVTCHN,
> dom->console_evtchn);
> --
> 2.1.0.rc1
>
* [PATCH RFC v2 07/12] xen/arm: p2m type definitions and changes
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (5 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 06/12] tools/libxc: Allocate magic page for mem access on ARM Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 14:06 ` [PATCH RFC v2 08/12] xen/arm: Add mem_event domctl and mem_access memop Tamas K Lengyel
` (5 subsequent siblings)
12 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Define p2m_access_t in ARM and add the necessary changes to the page table
construction routines to pass the default access information. Also, define
the radix tree that will hold the access permission settings, as the PTEs
don't have enough software-programmable bits available.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/arch/arm/p2m.c | 38 ++++++++++++++++-------
xen/include/asm-arm/p2m.h | 78 +++++++++++++++++++++++++++++++++++------------
2 files changed, 85 insertions(+), 31 deletions(-)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 143199b..a6dea5b 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -10,6 +10,9 @@
#include <asm/event.h>
#include <asm/hardirq.h>
#include <asm/page.h>
+#include <xen/mem_event.h>
+#include <public/mem_event.h>
+#include <xen/mem_access.h>
/* First level P2M is 2 consecutive pages */
#define P2M_FIRST_ORDER 1
@@ -458,7 +461,8 @@ static int apply_one_level(struct domain *d,
paddr_t *maddr,
bool_t *flush,
int mattr,
- p2m_type_t t)
+ p2m_type_t t,
+ p2m_access_t a)
{
/* Helpers to lookup the properties of each level */
const paddr_t level_sizes[] =
@@ -690,7 +694,8 @@ static int apply_p2m_changes(struct domain *d,
paddr_t end_gpaddr,
paddr_t maddr,
int mattr,
- p2m_type_t t)
+ p2m_type_t t,
+ p2m_access_t a)
{
int rc, ret;
struct p2m_domain *p2m = &d->arch.p2m;
@@ -755,7 +760,7 @@ static int apply_p2m_changes(struct domain *d,
1, flush_pt, op,
start_gpaddr, end_gpaddr,
&addr, &maddr, &flush,
- mattr, t);
+ mattr, t, a);
if ( ret < 0 ) { rc = ret ; goto out; }
count += ret;
if ( ret != P2M_ONE_DESCEND ) continue;
@@ -776,7 +781,7 @@ static int apply_p2m_changes(struct domain *d,
2, flush_pt, op,
start_gpaddr, end_gpaddr,
&addr, &maddr, &flush,
- mattr, t);
+ mattr, t, a);
if ( ret < 0 ) { rc = ret ; goto out; }
count += ret;
if ( ret != P2M_ONE_DESCEND ) continue;
@@ -795,7 +800,7 @@ static int apply_p2m_changes(struct domain *d,
3, flush_pt, op,
start_gpaddr, end_gpaddr,
&addr, &maddr, &flush,
- mattr, t);
+ mattr, t, a);
if ( ret < 0 ) { rc = ret ; goto out; }
/* L3 had better have done something! We cannot descend any further */
BUG_ON(ret == P2M_ONE_DESCEND);
@@ -837,7 +842,8 @@ int p2m_populate_ram(struct domain *d,
paddr_t end)
{
return apply_p2m_changes(d, ALLOCATE, start, end,
- 0, MATTR_MEM, p2m_ram_rw);
+ 0, MATTR_MEM, p2m_ram_rw,
+ d->arch.p2m.default_access);
}
int map_mmio_regions(struct domain *d,
@@ -849,7 +855,8 @@ int map_mmio_regions(struct domain *d,
pfn_to_paddr(start_gfn),
pfn_to_paddr(start_gfn + nr_mfns),
pfn_to_paddr(mfn),
- MATTR_DEV, p2m_mmio_direct);
+ MATTR_DEV, p2m_mmio_direct,
+ d->arch.p2m.default_access);
}
int guest_physmap_add_entry(struct domain *d,
@@ -861,7 +868,8 @@ int guest_physmap_add_entry(struct domain *d,
return apply_p2m_changes(d, INSERT,
pfn_to_paddr(gpfn),
pfn_to_paddr(gpfn + (1 << page_order)),
- pfn_to_paddr(mfn), MATTR_MEM, t);
+ pfn_to_paddr(mfn), MATTR_MEM, t,
+ d->arch.p2m.default_access);
}
void guest_physmap_remove_page(struct domain *d,
@@ -871,7 +879,8 @@ void guest_physmap_remove_page(struct domain *d,
apply_p2m_changes(d, REMOVE,
pfn_to_paddr(gpfn),
pfn_to_paddr(gpfn + (1<<page_order)),
- pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+ pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid,
+ d->arch.p2m.default_access);
}
int p2m_alloc_table(struct domain *d)
@@ -974,6 +983,8 @@ void p2m_teardown(struct domain *d)
p2m_free_vmid(d);
+ radix_tree_destroy(&p2m->mem_access_settings, NULL);
+
spin_unlock(&p2m->lock);
}
@@ -999,6 +1010,9 @@ int p2m_init(struct domain *d)
p2m->max_mapped_gfn = 0;
p2m->lowest_mapped_gfn = ULONG_MAX;
+ p2m->default_access = p2m_access_rwx;
+ radix_tree_init(&p2m->mem_access_settings);
+
err:
spin_unlock(&p2m->lock);
@@ -1013,7 +1027,8 @@ int relinquish_p2m_mapping(struct domain *d)
pfn_to_paddr(p2m->lowest_mapped_gfn),
pfn_to_paddr(p2m->max_mapped_gfn),
pfn_to_paddr(INVALID_MFN),
- MATTR_MEM, p2m_invalid);
+ MATTR_MEM, p2m_invalid,
+ d->arch.p2m.default_access);
}
int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
@@ -1027,7 +1042,8 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
pfn_to_paddr(start_mfn),
pfn_to_paddr(end_mfn),
pfn_to_paddr(INVALID_MFN),
- MATTR_MEM, p2m_invalid);
+ MATTR_MEM, p2m_invalid,
+ d->arch.p2m.default_access);
}
unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 06c93a0..afdbf84 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,9 +2,54 @@
#define _XEN_P2M_H
#include <xen/mm.h>
+#include <xen/radix-tree.h>
+#include <public/memory.h>
+#include <public/mem_event.h>
struct domain;
+/* List of possible type for each page in the p2m entry.
+ * The number of available bit per page in the pte for this purpose is 4 bits.
+ * So it's possible to only have 16 fields. If we run out of value in the
+ * future, it's possible to use higher value for pseudo-type and don't store
+ * them in the p2m entry.
+ */
+typedef enum {
+ p2m_invalid = 0, /* Nothing mapped here */
+ p2m_ram_rw, /* Normal read/write guest RAM */
+ p2m_ram_ro, /* Read-only; writes are silently dropped */
+ p2m_mmio_direct, /* Read/write mapping of genuine MMIO area */
+ p2m_map_foreign, /* Ram pages from foreign domain */
+ p2m_grant_map_rw, /* Read/write grant mapping */
+ p2m_grant_map_ro, /* Read-only grant mapping */
+ /* The types below are only used to decide the page attribute in the P2M */
+ p2m_iommu_map_rw, /* Read/write iommu mapping */
+ p2m_iommu_map_ro, /* Read-only iommu mapping */
+ p2m_max_real_type, /* Types after this won't be store in the p2m */
+} p2m_type_t;
+
+/*
+ * Additional access types, which are used to further restrict
+ * the permissions given by the p2m_type_t memory type. Violations
+ * caused by p2m_access_t restrictions are sent to the mem_event
+ * interface.
+ *
+ * The access permissions are soft state: when any ambiguous change of page
+ * type or use occurs, or when pages are flushed, swapped, or at any other
+ * convenient type, the access permissions can get reset to the p2m_domain
+ * default.
+ */
+typedef enum {
+ p2m_access_n = 0, /* No access permissions allowed */
+ p2m_access_r = 1,
+ p2m_access_w = 2,
+ p2m_access_rw = 3,
+ p2m_access_x = 4,
+ p2m_access_rx = 5,
+ p2m_access_wx = 6,
+ p2m_access_rwx = 7
+} p2m_access_t;
+
/* Per-p2m-table state */
struct p2m_domain {
/* Lock that protects updates to the p2m */
@@ -38,27 +83,20 @@ struct p2m_domain {
* at each p2m tree level. */
unsigned long shattered[4];
} stats;
-};
-/* List of possible type for each page in the p2m entry.
- * The number of available bit per page in the pte for this purpose is 4 bits.
- * So it's possible to only have 16 fields. If we run out of value in the
- * future, it's possible to use higher value for pseudo-type and don't store
- * them in the p2m entry.
- */
-typedef enum {
- p2m_invalid = 0, /* Nothing mapped here */
- p2m_ram_rw, /* Normal read/write guest RAM */
- p2m_ram_ro, /* Read-only; writes are silently dropped */
- p2m_mmio_direct, /* Read/write mapping of genuine MMIO area */
- p2m_map_foreign, /* Ram pages from foreign domain */
- p2m_grant_map_rw, /* Read/write grant mapping */
- p2m_grant_map_ro, /* Read-only grant mapping */
- /* The types below are only used to decide the page attribute in the P2M */
- p2m_iommu_map_rw, /* Read/write iommu mapping */
- p2m_iommu_map_ro, /* Read-only iommu mapping */
- p2m_max_real_type, /* Types after this won't be store in the p2m */
-} p2m_type_t;
+ /* Default P2M access type for each page in the domain: new pages,
+ * swapped in pages, cleared pages, and pages that are ambiguously
+ * retyped get this access type. See definition of p2m_access_t. */
+ p2m_access_t default_access;
+
+ /* If true, and an access fault comes in and there is no mem_event listener,
+ * pause domain. Otherwise, remove access restrictions. */
+ bool_t access_required;
+
+ /* Radix tree to store the p2m_access_t settings as the pte's don't have
+ * enough available bits to store this information. */
+ struct radix_tree_root mem_access_settings;
+};
#define p2m_is_foreign(_t) ((_t) == p2m_map_foreign)
#define p2m_is_ram(_t) ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
--
2.1.0.rc1
* [PATCH RFC v2 08/12] xen/arm: Add mem_event domctl and mem_access memop.
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (6 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 07/12] xen/arm: p2m type definitions and changes Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-29 20:57 ` Julien Grall
2014-08-27 14:06 ` [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
` (4 subsequent siblings)
12 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Define the handling of XEN_DOMCTL_mem_event_op and the handling of
copy-back data. Also set up the memop handling of XENMEM_access_op
with the required masking of the operation with MEMOP_CMD_MASK.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/arch/arm/domctl.c | 34 ++++++++++++++++++++++++++++++----
xen/arch/arm/mm.c | 20 ++++++++++++++++----
2 files changed, 46 insertions(+), 8 deletions(-)
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 45974e7..c6d3cb4 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -11,10 +11,17 @@
#include <xen/sched.h>
#include <xen/hypercall.h>
#include <public/domctl.h>
+#include <asm/guest_access.h>
+#include <xen/mem_event.h>
+#include <public/mem_event.h>
long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
{
+
+ long ret;
+ bool_t copyback = 0;
+
switch ( domctl->cmd )
{
case XEN_DOMCTL_cacheflush:
@@ -23,17 +30,36 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
unsigned long e = s + domctl->u.cacheflush.nr_pfns;
if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
- return -EINVAL;
+ {
+ ret = -EINVAL;
+ break;
+ }
if ( e < s )
- return -EINVAL;
+ {
+ ret = -EINVAL;
+ break;
+ }
- return p2m_cache_flush(d, s, e);
+ ret = p2m_cache_flush(d, s, e);
}
+ break;
+
+ case XEN_DOMCTL_mem_event_op:
+ ret = mem_event_domctl(d, &domctl->u.mem_event_op,
+ guest_handle_cast(u_domctl, void));
+ copyback = 1;
+ break;
default:
- return subarch_do_domctl(domctl, d, u_domctl);
+ ret = subarch_do_domctl(domctl, d, u_domctl);
+ break;
}
+
+ if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+ ret = -EFAULT;
+
+ return ret;
}
void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..a42c167 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -35,6 +35,9 @@
#include <asm/current.h>
#include <asm/flushtlb.h>
#include <public/memory.h>
+#include <xen/mem_event.h>
+#include <xen/mem_access.h>
+#include <xen/hypercall.h>
#include <xen/sched.h>
#include <xen/vmap.h>
#include <xsm/xsm.h>
@@ -1111,18 +1114,27 @@ int xenmem_add_to_physmap_one(
long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
{
- switch ( op )
+
+ long rc;
+
+ switch ( op & MEMOP_CMD_MASK )
{
/* XXX: memsharing not working yet */
case XENMEM_get_sharing_shared_pages:
case XENMEM_get_sharing_freed_pages:
- return 0;
+ rc = 0;
+ break;
+
+ case XENMEM_access_op:
+ rc = mem_access_memop(op, guest_handle_cast(arg, xen_mem_access_op_t));
+ break;
default:
- return -ENOSYS;
+ rc = -ENOSYS;
+ break;
}
- return 0;
+ return rc;
}
struct domain *page_get_owner_and_reference(struct page_info *page)
--
2.1.0.rc1
* Re: [PATCH RFC v2 08/12] xen/arm: Add mem_event domctl and mem_access memop.
2014-08-27 14:06 ` [PATCH RFC v2 08/12] xen/arm: Add mem_event domctl and mem_access memop Tamas K Lengyel
@ 2014-08-29 20:57 ` Julien Grall
2014-08-30 8:19 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-08-29 20:57 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra
Hello Tamas,
On 27/08/14 10:06, Tamas K Lengyel wrote:
> + case XEN_DOMCTL_mem_event_op:
> + ret = mem_event_domctl(d, &domctl->u.mem_event_op,
> + guest_handle_cast(u_domctl, void));
> + copyback = 1;
> + break;
The code for this domctl is exactly the same on x86. Therefore, I would
move it to common/domctl.c.
Of course, you will have to protect it with an #ifdef HAVE_MEMACCESS or
whatever define is used to indicate that mem_event is supported on the platform.
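A minimal sketch of what the hoisted common-code case could look like. HAS_MEM_ACCESS, the command number, the stub, and the -38 (-ENOSYS) fallthrough value are assumptions for illustration, not Xen's real definitions:

```c
#include <assert.h>

/* Assumed build-time guard and command number, for illustration only. */
#define HAS_MEM_ACCESS 1
#define XEN_DOMCTL_mem_event_op 49

/* Stand-in for the real mem_event_domctl(); returns 0 on success. */
static long mem_event_domctl_stub(int cmd)
{
    return cmd == XEN_DOMCTL_mem_event_op ? 0 : -1;
}

/* Common dispatcher: handles the shared case itself behind the guard,
 * and hands anything else back to the architecture. */
static long common_domctl(int cmd)
{
    switch ( cmd )
    {
#ifdef HAS_MEM_ACCESS
    case XEN_DOMCTL_mem_event_op:
        return mem_event_domctl_stub(cmd);
#endif
    default:
        return -38; /* defer to arch_do_domctl() in real code */
    }
}
```

With this shape, an architecture without mem_event support simply compiles the dispatcher without the guarded case and falls through to its own handler.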
> +
> + case XENMEM_access_op:
> + rc = mem_access_memop(op, guest_handle_cast(arg, xen_mem_access_op_t));
> + break;
Same remark here.
--
Julien Grall
* Re: [PATCH RFC v2 08/12] xen/arm: Add mem_event domctl and mem_access memop.
2014-08-29 20:57 ` Julien Grall
@ 2014-08-30 8:19 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-30 8:19 UTC (permalink / raw)
To: Julien Grall
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Daniel De Graaf, Tamas K Lengyel
On Fri, Aug 29, 2014 at 10:57 PM, Julien Grall <julien.grall@linaro.org>
wrote:
> Hello Tamas,
>
>
> On 27/08/14 10:06, Tamas K Lengyel wrote:
>
>> + case XEN_DOMCTL_mem_event_op:
>> + ret = mem_event_domctl(d, &domctl->u.mem_event_op,
>> + guest_handle_cast(u_domctl, void));
>> + copyback = 1;
>> + break;
>>
>
> The code for this domctl is exactly the same on x86. Therefore, I would
> move it to common/domctl.c.
>
> Of course, you will have to protect it with an #ifdef HAVE_MEMACCESS or
> whatever define is used to indicate that mem_event is supported on the platform.
>
> +
>> + case XENMEM_access_op:
>> + rc = mem_access_memop(op, guest_handle_cast(arg,
>> xen_mem_access_op_t));
>> + break;
>>
>
> Same remark here.
>
> --
> Julien Grall
>
That would make sense, thanks!
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events.
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (7 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 08/12] xen/arm: Add mem_event domctl and mem_access memop Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 17:01 ` Julien Grall
2014-08-29 21:41 ` Julien Grall
2014-08-27 14:06 ` [PATCH RFC v2 10/12] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
` (3 subsequent siblings)
12 siblings, 2 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
This patch enables to store, set, check and deliver LPAE R/W mem_events.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: - Patch been split to ease the review process.
- Add definitions of data abort data fetch status codes (enum dabt_dfsc)
and only call p2m_mem_access_check for traps caused by permission violations.
- Only call p2m_write_pte in p2m_lookup if the PTE permission actually changed.
- Properly save settings in the Radix tree and pause the VCPU with
mem_event_vcpu_pause.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/arch/arm/p2m.c | 406 ++++++++++++++++++++++++++++++++++------
xen/arch/arm/traps.c | 37 +++-
xen/include/asm-arm/p2m.h | 29 ++-
xen/include/asm-arm/processor.h | 30 +++
4 files changed, 439 insertions(+), 63 deletions(-)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a6dea5b..c18e2ef 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -10,6 +10,7 @@
#include <asm/event.h>
#include <asm/hardirq.h>
#include <asm/page.h>
+#include <xen/radix-tree.h>
#include <xen/mem_event.h>
#include <public/mem_event.h>
#include <xen/mem_access.h>
@@ -148,16 +149,99 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, paddr_t addr)
return __map_domain_page(page);
}
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+ /* First apply type permissions */
+ switch (t)
+ {
+ case p2m_ram_rw:
+ e->p2m.xn = 0;
+ e->p2m.write = 1;
+ break;
+
+ case p2m_ram_ro:
+ e->p2m.xn = 0;
+ e->p2m.write = 0;
+ break;
+
+ case p2m_iommu_map_rw:
+ case p2m_map_foreign:
+ case p2m_grant_map_rw:
+ case p2m_mmio_direct:
+ e->p2m.xn = 1;
+ e->p2m.write = 1;
+ break;
+
+ case p2m_iommu_map_ro:
+ case p2m_grant_map_ro:
+ case p2m_invalid:
+ e->p2m.xn = 1;
+ e->p2m.write = 0;
+ break;
+
+ case p2m_max_real_type:
+ BUG();
+ break;
+ }
+
+ /* Then restrict with access permissions */
+ switch(a)
+ {
+ case p2m_access_n:
+ e->p2m.read = e->p2m.write = 0;
+ e->p2m.xn = 1;
+ break;
+ case p2m_access_r:
+ e->p2m.write = 0;
+ e->p2m.xn = 1;
+ break;
+ case p2m_access_x:
+ e->p2m.write = 0;
+ e->p2m.read = 0;
+ break;
+ case p2m_access_rx:
+ e->p2m.write = 0;
+ break;
+ case p2m_access_w:
+ e->p2m.read = 0;
+ e->p2m.xn = 1;
+ break;
+ case p2m_access_rw:
+ e->p2m.xn = 1;
+ break;
+ case p2m_access_wx:
+ e->p2m.read = 0;
+ break;
+ case p2m_access_rwx:
+ break;
+ }
+}
+
+static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
+{
+ write_pte(p, pte);
+ if ( flush_cache )
+ clean_xen_dcache(*p);
+}
+
/*
* Lookup the MFN corresponding to a domain's PFN.
*
* There are no processor functions to do a stage 2 only lookup therefore we
* do a a software walk.
+ *
+ * [IN]: d Domain
+ * [IN]: paddr IPA
+ * [IN]: a (Optional) Update PTE access permission
+ * [OUT]: t (Optional) Return PTE type
*/
-paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
+paddr_t p2m_lookup(struct domain *d,
+ paddr_t paddr,
+ p2m_access_t *a,
+ p2m_type_t *t)
{
struct p2m_domain *p2m = &d->arch.p2m;
- lpae_t pte, *first = NULL, *second = NULL, *third = NULL;
+ lpae_t pte, *pte_loc, *first = NULL, *second = NULL, *third = NULL;
paddr_t maddr = INVALID_PADDR;
paddr_t mask;
p2m_type_t _t;
@@ -167,20 +251,20 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
*t = p2m_invalid;
- spin_lock(&p2m->lock);
-
first = p2m_map_first(p2m, paddr);
if ( !first )
goto err;
mask = FIRST_MASK;
- pte = first[first_table_offset(paddr)];
+ pte_loc = &first[first_table_offset(paddr)];
+ pte = *pte_loc;
if ( !p2m_table(pte) )
goto done;
mask = SECOND_MASK;
second = map_domain_page(pte.p2m.base);
- pte = second[second_table_offset(paddr)];
+ pte_loc = &second[second_table_offset(paddr)];
+ pte = *pte_loc;
if ( !p2m_table(pte) )
goto done;
@@ -189,7 +273,8 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
BUILD_BUG_ON(THIRD_MASK != PAGE_MASK);
third = map_domain_page(pte.p2m.base);
- pte = third[third_table_offset(paddr)];
+ pte_loc = &third[third_table_offset(paddr)];
+ pte = *pte_loc;
/* This bit must be one in the level 3 entry */
if ( !p2m_table(pte) )
@@ -200,6 +285,21 @@ done:
{
ASSERT(pte.p2m.type != p2m_invalid);
maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
+ ASSERT(mfn_valid(maddr>>PAGE_SHIFT));
+
+ if ( a )
+ {
+ p2m_set_permission(&pte, pte.p2m.type, *a);
+
+ /* Only write the PTE if the access permissions changed */
+ if(pte.p2m.read != pte_loc->p2m.read
+ || pte.p2m.write != pte_loc->p2m.write
+ || pte.p2m.xn != pte_loc->p2m.xn)
+ {
+ p2m_write_pte(pte_loc, pte, 1);
+ }
+ }
+
*t = pte.p2m.type;
}
@@ -208,8 +308,6 @@ done:
if (first) unmap_domain_page(first);
err:
- spin_unlock(&p2m->lock);
-
return maddr;
}
@@ -228,7 +326,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
}
static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
- p2m_type_t t)
+ p2m_type_t t, p2m_access_t a)
{
paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
/* sh, xn and write bit will be defined in the following switches
@@ -258,37 +356,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
break;
}
- switch (t)
- {
- case p2m_ram_rw:
- e.p2m.xn = 0;
- e.p2m.write = 1;
- break;
-
- case p2m_ram_ro:
- e.p2m.xn = 0;
- e.p2m.write = 0;
- break;
-
- case p2m_iommu_map_rw:
- case p2m_map_foreign:
- case p2m_grant_map_rw:
- case p2m_mmio_direct:
- e.p2m.xn = 1;
- e.p2m.write = 1;
- break;
-
- case p2m_iommu_map_ro:
- case p2m_grant_map_ro:
- case p2m_invalid:
- e.p2m.xn = 1;
- e.p2m.write = 0;
- break;
-
- case p2m_max_real_type:
- BUG();
- break;
- }
+ p2m_set_permission(&e, t, a);
ASSERT(!(pa & ~PAGE_MASK));
ASSERT(!(pa & ~PADDR_MASK));
@@ -298,13 +366,6 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
return e;
}
-static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
-{
- write_pte(p, pte);
- if ( flush_cache )
- clean_xen_dcache(*p);
-}
-
/*
* Allocate a new page table page and hook it in via the given entry.
* apply_one_level relies on this returning 0 on success
@@ -346,7 +407,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
for ( i=0 ; i < LPAE_ENTRIES; i++ )
{
pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
- MATTR_MEM, t);
+ MATTR_MEM, t, p2m->default_access);
/*
* First and second level super pages set p2m.table = 0, but
@@ -366,7 +427,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
unmap_domain_page(p);
- pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
+ pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid, p2m->default_access);
p2m_write_pte(entry, pte, flush_cache);
@@ -498,7 +559,7 @@ static int apply_one_level(struct domain *d,
page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
if ( page )
{
- pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
+ pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
if ( level < 3 )
pte.p2m.table = 0;
p2m_write_pte(entry, pte, flush_cache);
@@ -533,7 +594,7 @@ static int apply_one_level(struct domain *d,
(level == 3 || !p2m_table(orig_pte)) )
{
/* New mapping is superpage aligned, make it */
- pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
+ pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
if ( level < 3 )
pte.p2m.table = 0; /* Superpage entry */
@@ -640,6 +701,7 @@ static int apply_one_level(struct domain *d,
memset(&pte, 0x00, sizeof(pte));
p2m_write_pte(entry, pte, flush_cache);
+ radix_tree_delete(&p2m->mem_access_settings, paddr_to_pfn(*addr));
*addr += level_size;
@@ -1048,7 +1110,10 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
{
- paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
+ paddr_t p;
+ spin_lock(&d->arch.p2m.lock);
+ p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL, NULL);
+ spin_unlock(&d->arch.p2m.lock);
return p >> PAGE_SHIFT;
}
@@ -1080,6 +1145,241 @@ err:
return page;
}
+int p2m_mem_access_check(paddr_t gpa, vaddr_t gla,
+ bool_t access_r, bool_t access_w, bool_t access_x,
+ bool_t ptw)
+{
+ struct vcpu *v = current;
+ mem_event_request_t *req = NULL;
+ xenmem_access_t xma;
+ bool_t violation;
+ int rc;
+
+ /* If we have no listener, nothing to do */
+ if( !mem_event_check_ring( &v->domain->mem_event->access ) )
+ {
+ return 1;
+ }
+
+ rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
+ if ( rc )
+ return rc;
+
+ switch (xma)
+ {
+ default:
+ case XENMEM_access_n:
+ violation = access_r || access_w || access_x;
+ break;
+ case XENMEM_access_r:
+ violation = access_w || access_x;
+ break;
+ case XENMEM_access_w:
+ violation = access_r || access_x;
+ break;
+ case XENMEM_access_x:
+ violation = access_r || access_w;
+ break;
+ case XENMEM_access_rx:
+ violation = access_w;
+ break;
+ case XENMEM_access_wx:
+ violation = access_r;
+ break;
+ case XENMEM_access_rw:
+ violation = access_x;
+ break;
+ case XENMEM_access_rwx:
+ violation = 0;
+ break;
+ }
+
+ if (!violation)
+ return 1;
+
+ req = xzalloc(mem_event_request_t);
+ if ( req )
+ {
+ req->reason = MEM_EVENT_REASON_VIOLATION;
+ req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
+ req->gfn = gpa >> PAGE_SHIFT;
+ req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
+ req->gla = gla;
+ req->gla_valid = 1;
+ req->access_r = access_r;
+ req->access_w = access_w;
+ req->access_x = access_x;
+ req->vcpu_id = v->vcpu_id;
+
+ mem_event_vcpu_pause(v);
+ mem_access_send_req(v->domain, req);
+
+ xfree(req);
+
+ return 0;
+ }
+
+ return 1;
+}
+
+void p2m_mem_access_resume(struct domain *d)
+{
+ mem_event_response_t rsp;
+
+ /* Pull all responses off the ring */
+ while( mem_event_get_response(d, &d->mem_event->access, &rsp) )
+ {
+ struct vcpu *v;
+
+ if ( rsp.flags & MEM_EVENT_FLAG_DUMMY )
+ continue;
+
+ /* Validate the vcpu_id in the response. */
+ if ( (rsp.vcpu_id >= d->max_vcpus) || !d->vcpu[rsp.vcpu_id] )
+ continue;
+
+ v = d->vcpu[rsp.vcpu_id];
+
+ /* Unpause domain */
+ if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
+ mem_event_vcpu_unpause(v);
+ }
+}
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
+ uint32_t start, uint32_t mask, xenmem_access_t access)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+ p2m_access_t a;
+ long rc = 0;
+
+ static const p2m_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = p2m_access_##ac
+ ACCESS(n),
+ ACCESS(r),
+ ACCESS(w),
+ ACCESS(rw),
+ ACCESS(x),
+ ACCESS(rx),
+ ACCESS(wx),
+ ACCESS(rwx),
+#undef ACCESS
+ };
+
+ switch ( access )
+ {
+ case 0 ... ARRAY_SIZE(memaccess) - 1:
+ a = memaccess[access];
+ break;
+ case XENMEM_access_default:
+ a = p2m->default_access;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* If request to set default access */
+ if ( pfn == ~0ul )
+ {
+ p2m->default_access = a;
+ return 0;
+ }
+
+ spin_lock(&p2m->lock);
+ for ( pfn += start; nr > start; ++pfn )
+ {
+
+ unsigned long mfn = p2m_lookup(d, pfn_to_paddr(pfn), &a, NULL);
+ mfn >>= PAGE_SHIFT;
+
+ if ( !mfn_valid(mfn) )
+ break;
+
+ rc = radix_tree_insert(&p2m->mem_access_settings, pfn,
+ radix_tree_int_to_ptr(a));
+
+ switch ( rc )
+ {
+ case 0:
+ /* Nothing to do, setting saved successfully */
+ break;
+ case -EEXIST:
+ /* If a setting existed already, change it to the new one */
+ radix_tree_replace_slot(
+ radix_tree_lookup_slot(
+ &p2m->mem_access_settings, pfn),
+ radix_tree_int_to_ptr(a));
+ rc = 0;
+ break;
+ default:
+ /* If we fail to save the setting in the Radix tree, we
+ * need to reset the PTE permissions to default. */
+ p2m_lookup(d, pfn_to_paddr(pfn), &p2m->default_access, NULL);
+ break;
+ }
+
+ /* Check for continuation if it's not the last iteration. */
+ if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
+ {
+ rc = start;
+ break;
+ }
+ }
+
+ /* Flush the TLB of the domain to ensure consistency */
+ flush_tlb_domain(d);
+
+ spin_unlock(&p2m->lock);
+ return rc;
+}
+
+int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
+ xenmem_access_t *access)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+ void *i;
+ int index;
+
+ static const xenmem_access_t memaccess[] = {
+#define ACCESS(ac) [XENMEM_access_##ac] = XENMEM_access_##ac
+ ACCESS(n),
+ ACCESS(r),
+ ACCESS(w),
+ ACCESS(rw),
+ ACCESS(x),
+ ACCESS(rx),
+ ACCESS(wx),
+ ACCESS(rwx),
+#undef ACCESS
+ };
+
+ /* If request to get default access */
+ if ( gpfn == ~0ull )
+ {
+ *access = memaccess[p2m->default_access];
+ return 0;
+ }
+
+ spin_lock(&p2m->lock);
+
+ i = radix_tree_lookup(&p2m->mem_access_settings, gpfn);
+
+ spin_unlock(&p2m->lock);
+
+ if (!i)
+ return -ESRCH;
+
+ index = radix_tree_ptr_to_int(i);
+
+ if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
+ return -ERANGE;
+
+ *access = memaccess[ (unsigned) index];
+ return 0;
+}
+
/*
* Local variables:
* mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 76a9586..860905a 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1674,23 +1674,25 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
uint32_t offset;
uint32_t *first = NULL, *second = NULL;
+ spin_lock(&d->arch.p2m.lock);
+
printk("dom%d VA 0x%08"PRIvaddr"\n", d->domain_id, addr);
printk(" TTBCR: 0x%08"PRIregister"\n", ttbcr);
printk(" TTBR0: 0x%016"PRIx64" = 0x%"PRIpaddr"\n",
- ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL));
+ ttbr0, p2m_lookup(d, ttbr0 & PAGE_MASK, NULL, NULL));
if ( ttbcr & TTBCR_EAE )
{
printk("Cannot handle LPAE guest PT walk\n");
- return;
+ goto err;
}
if ( (ttbcr & TTBCR_N_MASK) != 0 )
{
printk("Cannot handle TTBR1 guest walks\n");
- return;
+ goto err;
}
- paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL);
+ paddr = p2m_lookup(d, ttbr0 & PAGE_MASK, NULL, NULL);
if ( paddr == INVALID_PADDR )
{
printk("Failed TTBR0 maddr lookup\n");
@@ -1705,7 +1707,7 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
!(first[offset] & 0x2) )
goto done;
- paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL);
+ paddr = p2m_lookup(d, first[offset] & PAGE_MASK, NULL, NULL);
if ( paddr == INVALID_PADDR )
{
@@ -1720,6 +1722,9 @@ void dump_guest_s1_walk(struct domain *d, vaddr_t addr)
done:
if (second) unmap_domain_page(second);
if (first) unmap_domain_page(first);
+
+err:
+ spin_unlock(&d->arch.p2m.lock);
}
static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
@@ -1749,13 +1754,29 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
info.gva = READ_SYSREG64(FAR_EL2);
#endif
- if (dabt.s1ptw)
- goto bad_data_abort;
-
rc = gva_to_ipa(info.gva, &info.gpa);
if ( rc == -EFAULT )
goto bad_data_abort;
+ rc = 0;
+ switch ( dabt.dfsc )
+ {
+ case DABT_DFSC_PERMISSION_1:
+ case DABT_DFSC_PERMISSION_2:
+ case DABT_DFSC_PERMISSION_3:
+ rc = p2m_mem_access_check(info.gpa, info.gva,
+ 1, info.dabt.write, 0,
+ info.dabt.s1ptw);
+
+ /* Trap was triggered by mem_access, work here is done */
+ if ( !rc )
+ return;
+
+ break;
+ default:
+ break;
+ }
+
/* XXX: Decode the instruction if ISS is not valid */
if ( !dabt.valid )
goto bad_data_abort;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index afdbf84..0412a60 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -130,7 +130,10 @@ void p2m_restore_state(struct vcpu *n);
void p2m_dump_info(struct domain *d);
/* Look up the MFN corresponding to a domain's PFN. */
-paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
+paddr_t p2m_lookup(struct domain *d,
+ paddr_t gpfn,
+ p2m_access_t *a,
+ p2m_type_t *t);
/* Clean & invalidate caches corresponding to a region of guest address space */
int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
@@ -187,7 +190,7 @@ static inline struct page_info *get_page_from_gfn(
{
struct page_info *page;
p2m_type_t p2mt;
- paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), &p2mt);
+ paddr_t maddr = p2m_lookup(d, pfn_to_paddr(gfn), NULL, &p2mt);
unsigned long mfn = maddr >> PAGE_SHIFT;
if (t)
@@ -233,6 +236,28 @@ static inline int get_page_and_type(struct page_info *page,
return rc;
}
+/* get host p2m table */
+#define p2m_get_hostp2m(d) (&((d)->arch.p2m))
+
+/* Send mem event based on the access (gla is -1ull if not available). Boolean
+ * return value indicates if trap needs to be injected into guest. */
+int p2m_mem_access_check(paddr_t gpa, vaddr_t gla,
+ bool_t access_r, bool_t access_w, bool_t access_x,
+ bool_t ptw);
+
+/* Resumes the running of the VCPU, restarting the last instruction */
+void p2m_mem_access_resume(struct domain *d);
+
+/* Set access type for a region of pfns.
+ * If start_pfn == -1ul, sets the default access type */
+long p2m_set_mem_access(struct domain *d, unsigned long start_pfn, uint32_t nr,
+ uint32_t start, uint32_t mask, xenmem_access_t access);
+
+/* Get access type for a pfn
+ * If pfn == -1ul, gets the default access type */
+int p2m_get_mem_access(struct domain *d, unsigned long pfn,
+ xenmem_access_t *access);
+
#endif /* _XEN_P2M_H */
/*
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 9d230f3..0f1500a 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -259,6 +259,36 @@ enum dabt_size {
DABT_DOUBLE_WORD = 3,
};
+/* Data abort data fetch status codes */
+enum dabt_dfsc {
+ DABT_DFSC_ADDR_SIZE_0 = 0b000000,
+ DABT_DFSC_ADDR_SIZE_1 = 0b000001,
+ DABT_DFSC_ADDR_SIZE_2 = 0b000010,
+ DABT_DFSC_ADDR_SIZE_3 = 0b000011,
+ DABT_DFSC_TRANSLATION_0 = 0b000100,
+ DABT_DFSC_TRANSLATION_1 = 0b000101,
+ DABT_DFSC_TRANSLATION_2 = 0b000110,
+ DABT_DFSC_TRANSLATION_3 = 0b000111,
+ DABT_DFSC_ACCESS_1 = 0b001001,
+ DABT_DFSC_ACCESS_2 = 0b001010,
+ DABT_DFSC_ACCESS_3 = 0b001011,
+ DABT_DFSC_PERMISSION_1 = 0b001101,
+ DABT_DFSC_PERMISSION_2 = 0b001110,
+ DABT_DFSC_PERMISSION_3 = 0b001111,
+ DABT_DFSC_SYNC_EXT = 0b010000,
+ DABT_DFSC_SYNC_PARITY = 0b011000,
+ DABT_DFSC_SYNC_EXT_TTW_0 = 0b010100,
+ DABT_DFSC_SYNC_EXT_TTW_1 = 0b010101,
+ DABT_DFSC_SYNC_EXT_TTW_2 = 0b010110,
+ DABT_DFSC_SYNC_EXT_TTW_3 = 0b010111,
+ DABT_DFSC_SYNC_PARITY_TTW_0 = 0b011100,
+ DABT_DFSC_SYNC_PARITY_TTW_1 = 0b011101,
+ DABT_DFSC_SYNC_PARITY_TTW_2 = 0b011110,
+ DABT_DFSC_SYNC_PARITY_TTW_3 = 0b011111,
+ DABT_DFSC_ALIGNMENT = 0b100001,
+ DABT_DFSC_TLB_CONFLICT = 0b110000,
+};
+
union hsr {
uint32_t bits;
struct {
--
2.1.0.rc1
* Re: [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events.
2014-08-27 14:06 ` [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-08-27 17:01 ` Julien Grall
2014-08-27 17:22 ` Tamas K Lengyel
2014-08-29 21:41 ` Julien Grall
1 sibling, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-08-27 17:01 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra
Hello Tamas,
On 27/08/14 10:06, Tamas K Lengyel wrote:
> /*
> * Lookup the MFN corresponding to a domain's PFN.
> *
> * There are no processor functions to do a stage 2 only lookup therefore we
> * do a a software walk.
> + *
> + * [IN]: d Domain
> + * [IN]: paddr IPA
> + * [IN]: a (Optional) Update PTE access permission
> + * [OUT]: t (Optional) Return PTE type
> */
> -paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
> +paddr_t p2m_lookup(struct domain *d,
> + paddr_t paddr,
> + p2m_access_t *a,
> + p2m_type_t *t)
The indentation looks wrong here.
> {
> struct p2m_domain *p2m = &d->arch.p2m;
> - lpae_t pte, *first = NULL, *second = NULL, *third = NULL;
> + lpae_t pte, *pte_loc, *first = NULL, *second = NULL, *third = NULL;
> paddr_t maddr = INVALID_PADDR;
> paddr_t mask;
> p2m_type_t _t;
> @@ -167,20 +251,20 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
>
> *t = p2m_invalid;
>
> - spin_lock(&p2m->lock);
> -
If you need a version with the lock already taken, please introduce a
new function (e.g. p2m_lookup_locked or something similar) rather than
moving the lock into every caller.
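The suggested split could look roughly like this. The flag-based lock and the fake table are toy stand-ins for Xen's spin_lock(&p2m->lock) and the real stage-2 walk, and the function names follow the reviewer's suggestion rather than any actual Xen API:

```c
#include <assert.h>

/* Toy lock and table standing in for the p2m lock and page tables. */
static int p2m_lock_held;
static unsigned long fake_p2m[16] = { [3] = 42 };

static void p2m_lock(void)   { assert(!p2m_lock_held); p2m_lock_held = 1; }
static void p2m_unlock(void) { assert(p2m_lock_held);  p2m_lock_held = 0; }

/* p2m_lookup_locked: the caller must already hold the p2m lock. */
static unsigned long p2m_lookup_locked(unsigned long pfn)
{
    assert(p2m_lock_held);  /* document and enforce the precondition */
    return fake_p2m[pfn & 15];
}

/* p2m_lookup: external behaviour unchanged, takes the lock itself. */
static unsigned long p2m_lookup(unsigned long pfn)
{
    unsigned long maddr;

    p2m_lock();
    maddr = p2m_lookup_locked(pfn);
    p2m_unlock();
    return maddr;
}
```

This keeps the locking discipline in one place: existing callers keep calling p2m_lookup() unchanged, while code paths that already hold the lock call the _locked variant, avoiding both deadlock and code duplication.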
Regards,
--
Julien Grall
2014-08-27 17:01 ` Julien Grall
@ 2014-08-27 17:22 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 17:22 UTC (permalink / raw)
To: Julien Grall
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Daniel De Graaf, Tamas K Lengyel
> On 27/08/14 10:06, Tamas K Lengyel wrote:
>
>> /*
>> * Lookup the MFN corresponding to a domain's PFN.
>> *
>> * There are no processor functions to do a stage 2 only lookup
>> therefore we
>> * do a a software walk.
>> + *
>> + * [IN]: d Domain
>> + * [IN]: paddr IPA
>> + * [IN]: a (Optional) Update PTE access permission
>> + * [OUT]: t (Optional) Return PTE type
>> */
>> -paddr_t p2m_lookup(struct domain *d, paddr_t paddr, p2m_type_t *t)
>> +paddr_t p2m_lookup(struct domain *d,
>> + paddr_t paddr,
>> + p2m_access_t *a,
>> + p2m_type_t *t)
>>
>
> The indentation looks wrong here.
Ack.
>
>
> {
>> struct p2m_domain *p2m = &d->arch.p2m;
>> - lpae_t pte, *first = NULL, *second = NULL, *third = NULL;
>> + lpae_t pte, *pte_loc, *first = NULL, *second = NULL, *third = NULL;
>> paddr_t maddr = INVALID_PADDR;
>> paddr_t mask;
>> p2m_type_t _t;
>> @@ -167,20 +251,20 @@ paddr_t p2m_lookup(struct domain *d, paddr_t paddr,
>> p2m_type_t *t)
>>
>> *t = p2m_invalid;
>>
>> - spin_lock(&p2m->lock);
>> -
>>
>
> If you need a version with the lock already taken, please introduce a new
> function (e.g. p2m_lookup_locked or something similar) rather than moving
> the lock into every caller.
>
> Regards,
>
> --
> Julien Grall
I was debating doing that, but I figured this approach introduces the
least amount of code duplication. If that's not a problem, I'm all for it.
Thanks,
Tamas
* Re: [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events.
2014-08-27 14:06 ` [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
2014-08-27 17:01 ` Julien Grall
@ 2014-08-29 21:41 ` Julien Grall
2014-08-30 8:16 ` Tamas K Lengyel
1 sibling, 1 reply; 48+ messages in thread
From: Julien Grall @ 2014-08-29 21:41 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra
Hello Tamas,
I've not yet finished reviewing this patch entirely, but I'm sending
these comments in the meantime.
On 27/08/14 10:06, Tamas K Lengyel wrote:
> /*
> * Lookup the MFN corresponding to a domain's PFN.
> *
> * There are no processor functions to do a stage 2 only lookup therefore we
> * do a a software walk.
> + *
> + * [IN]: d Domain
> + * [IN]: paddr IPA
> + * [IN]: a (Optional) Update PTE access permission
It's very confusing to update the access permission in p2m_lookup. Why
didn't you add a new function?
Note that you only update the PTE here, not the radix tree.
[..]
> {
> ASSERT(pte.p2m.type != p2m_invalid);
> maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
> + ASSERT(mfn_valid(maddr>>PAGE_SHIFT));
> +
> + if ( a )
> + {
> + p2m_set_permission(&pte, pte.p2m.type, *a);
> +
> + /* Only write the PTE if the access permissions changed */
Does this happen often? I'm wondering if we could always write the PTE,
no matter whether the permission has changed or not.
If it happens often, maybe just checking whether the access type has
changed would be a good solution?
> + if(pte.p2m.read != pte_loc->p2m.read
Coding style
if ( ... )
> + || pte.p2m.write != pte_loc->p2m.write
> + || pte.p2m.xn != pte_loc->p2m.xn)
> + {
> + p2m_write_pte(pte_loc, pte, 1);
> + }
> + }
> +
> *t = pte.p2m.type;
> }
>
> @@ -208,8 +308,6 @@ done:
> if (first) unmap_domain_page(first);
>
> err:
> - spin_unlock(&p2m->lock);
> -
> return maddr;
> }
>
> @@ -228,7 +326,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
> }
>
> static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
> - p2m_type_t t)
> + p2m_type_t t, p2m_access_t a)
> {
> paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
> /* sh, xn and write bit will be defined in the following switches
> @@ -258,37 +356,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
> break;
> }
>
> - switch (t)
> - {
> - case p2m_ram_rw:
> - e.p2m.xn = 0;
> - e.p2m.write = 1;
> - break;
> -
> - case p2m_ram_ro:
> - e.p2m.xn = 0;
> - e.p2m.write = 0;
> - break;
> -
> - case p2m_iommu_map_rw:
> - case p2m_map_foreign:
> - case p2m_grant_map_rw:
> - case p2m_mmio_direct:
> - e.p2m.xn = 1;
> - e.p2m.write = 1;
> - break;
> -
> - case p2m_iommu_map_ro:
> - case p2m_grant_map_ro:
> - case p2m_invalid:
> - e.p2m.xn = 1;
> - e.p2m.write = 0;
> - break;
> -
> - case p2m_max_real_type:
> - BUG();
> - break;
> - }
> + p2m_set_permission(&e, t, a);
>
> ASSERT(!(pa & ~PAGE_MASK));
> ASSERT(!(pa & ~PADDR_MASK));
> @@ -298,13 +366,6 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
> return e;
> }
>
> -static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t flush_cache)
> -{
> - write_pte(p, pte);
> - if ( flush_cache )
> - clean_xen_dcache(*p);
> -}
> -
> /*
> * Allocate a new page table page and hook it in via the given entry.
> * apply_one_level relies on this returning 0 on success
> @@ -346,7 +407,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
> for ( i=0 ; i < LPAE_ENTRIES; i++ )
> {
> pte = mfn_to_p2m_entry(base_pfn + (i<<(level_shift-LPAE_SHIFT)),
> - MATTR_MEM, t);
> + MATTR_MEM, t, p2m->default_access);
>
> /*
> * First and second level super pages set p2m.table = 0, but
> @@ -366,7 +427,7 @@ static int p2m_create_table(struct domain *d, lpae_t *entry,
>
> unmap_domain_page(p);
>
> - pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
> + pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid, p2m->default_access);
>
> p2m_write_pte(entry, pte, flush_cache);
>
> @@ -498,7 +559,7 @@ static int apply_one_level(struct domain *d,
> page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
> if ( page )
> {
> - pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
> + pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
> if ( level < 3 )
> pte.p2m.table = 0;
> p2m_write_pte(entry, pte, flush_cache);
> @@ -533,7 +594,7 @@ static int apply_one_level(struct domain *d,
> (level == 3 || !p2m_table(orig_pte)) )
> {
> /* New mapping is superpage aligned, make it */
> - pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
> + pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
> if ( level < 3 )
> pte.p2m.table = 0; /* Superpage entry */
>
> @@ -640,6 +701,7 @@ static int apply_one_level(struct domain *d,
>
> memset(&pte, 0x00, sizeof(pte));
> p2m_write_pte(entry, pte, flush_cache);
> + radix_tree_delete(&p2m->mem_access_settings, paddr_to_pfn(*addr));
>
> *addr += level_size;
>
> @@ -1048,7 +1110,10 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
>
> unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
> {
> - paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> + paddr_t p;
Missing blank line after the declaration block.
> + spin_lock(&d->arch.p2m.lock);
> + p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL, NULL);
> + spin_unlock(&d->arch.p2m.lock);
> return p >> PAGE_SHIFT;
> }
>
> @@ -1080,6 +1145,241 @@ err:
> return page;
> }
>
> +int p2m_mem_access_check(paddr_t gpa, vaddr_t gla,
> + bool_t access_r, bool_t access_w, bool_t access_x,
> + bool_t ptw)
The two argument lines should be aligned to the first parameter; IOW, you need
to remove one space on both lines.
Also, as the function always returns 0 or 1, I would use a bool_t for the
return value.
> +{
> + struct vcpu *v = current;
> + mem_event_request_t *req = NULL;
> + xenmem_access_t xma;
> + bool_t violation;
> + int rc;
> +
> + /* If we have no listener, nothing to do */
> + if( !mem_event_check_ring( &v->domain->mem_event->access ) )
The spaces in the inner () are not necessary.
Also, aren't you missing a check of p2m->access_required?
> + {
> + return 1;
> + }
> +
> + rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
> + if ( rc )
> + return rc;
> +
> + switch (xma)
switch ( ... )
> + {
> + default:
It looks like all the possible cases of the enum have been defined below.
Why do you define default as XENMEM_access_n?
> + case XENMEM_access_n:
The "case" is usually aligned to "{". Such as
{
case ...:
> + violation = access_r || access_w || access_x;
Silly question: where does access_* come from? I can't find any
definition with CTAGS.
[..]
> + if (!violation)
if ( ... )
> + return 1;
> +
> + req = xzalloc(mem_event_request_t);
> + if ( req )
> + {
> + req->reason = MEM_EVENT_REASON_VIOLATION;
> + req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> + req->gfn = gpa >> PAGE_SHIFT;
> + req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
> + req->gla = gla;
> + req->gla_valid = 1;
> + req->access_r = access_r;
> + req->access_w = access_w;
> + req->access_x = access_x;
> + req->vcpu_id = v->vcpu_id;
> +
> + mem_event_vcpu_pause(v);
> + mem_access_send_req(v->domain, req);
> +
> + xfree(req);
> +
> + return 0;
> + }
Ignoring the access when Xen fails to allocate req sounds strange.
Shouldn't you at least print a warning?
> +
> + return 1;
> +}
[..]
> +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
> + xenmem_access_t *access)
> +{
[..]
> + if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
> + return -ERANGE;
> +
> + *access = memaccess[ (unsigned) index];
Spurious space after [ ?
Regards,
--
Julien Grall
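The violation check being reviewed above can be sketched in isolation. This is a minimal, hedged model following the x86 logic the ARM code copies: each xenmem_access_t names the permitted access kinds, and a violation is any attempted access outside them. The enum order follows xen/include/public/memory.h, but treat the exact values and the rx2rw/n2rwx handling here as illustrative, not the authoritative Xen implementation.

```c
#include <assert.h>

/* Each xenmem_access_t names the permitted access kinds for a page; a
 * violation is any attempted read/write/execute outside that set. */
typedef enum {
    XENMEM_access_n, XENMEM_access_r, XENMEM_access_w, XENMEM_access_rw,
    XENMEM_access_x, XENMEM_access_rx, XENMEM_access_wx, XENMEM_access_rwx,
    XENMEM_access_rx2rw, XENMEM_access_n2rwx,
} xenmem_access_t;

static int is_violation(xenmem_access_t xma, int r, int w, int x)
{
    switch ( xma )
    {
    case XENMEM_access_n:
    case XENMEM_access_n2rwx:      /* auto-promoted on fault, but still traps */
        return r || w || x;
    case XENMEM_access_r:   return w || x;
    case XENMEM_access_w:   return r || x;
    case XENMEM_access_rw:  return x;
    case XENMEM_access_x:   return r || w;
    case XENMEM_access_rx:
    case XENMEM_access_rx2rw:      /* converted to rw on a write fault */
        return w;
    case XENMEM_access_wx:  return r;
    case XENMEM_access_rwx: return 0;
    }
    return 1; /* unknown setting: treat as most restrictive */
}
```

If no violation is found the handler returns 1 (let the access through); otherwise a mem_event_request_t is filled in and the vcpu paused, as in the quoted hunk.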
* Re: [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events.
2014-08-29 21:41 ` Julien Grall
@ 2014-08-30 8:16 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-30 8:16 UTC (permalink / raw)
To: Julien Grall
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Daniel De Graaf, Tamas K Lengyel
On Fri, Aug 29, 2014 at 11:41 PM, Julien Grall <julien.grall@linaro.org>
wrote:
> Hello Tamas,
>
> I've not yet finished reviewing this patch entirely, but I'm at least
> sending these comments.
>
>
> On 27/08/14 10:06, Tamas K Lengyel wrote:
>
>> /*
>> * Lookup the MFN corresponding to a domain's PFN.
>> *
>> * There are no processor functions to do a stage 2 only lookup, therefore
>> * we do a software walk.
>> + *
>> + * [IN]: d Domain
>> + * [IN]: paddr IPA
>> + * [IN]: a (Optional) Update PTE access permission
>>
>
> It's very confusing to update the access permission in p2m_lookup. Why
> didn't you add a new function?
>
> Also, you only update the PTE here and not the radix tree.
>
The radix tree is only updated in the caller if this function returns 1.
In the next iteration of the series I already added a separate function for
this, p2m_set_entry, which doesn't lock, so p2m_lookup will be unchanged.
> [..]
>
>
> {
>> ASSERT(pte.p2m.type != p2m_invalid);
>> maddr = (pte.bits & PADDR_MASK & mask) | (paddr & ~mask);
>> + ASSERT(mfn_valid(maddr>>PAGE_SHIFT));
>> +
>> + if ( a )
>> + {
>> + p2m_set_permission(&pte, pte.p2m.type, *a);
>> +
>> + /* Only write the PTE if the access permissions changed */
>>
>
> Does this happen often? I'm wondering if we could always write the pte no
> matter whether the permission has changed or not.
>
> If it happens often, maybe just checking whether the access type has
> changed would be a good solution?
It really only happened because of large pages in the table, as I was
passing paddr inputs based on 4k page boundaries. I have already started
working on shattering the large pages as I set the permissions, so this will
go away in the next iteration.
>
>
> + if(pte.p2m.read != pte_loc->p2m.read
>>
>
> Coding style
>
> if ( ... )
Ack.
>
>
> + || pte.p2m.write != pte_loc->p2m.write
>> + || pte.p2m.xn != pte_loc->p2m.xn)
>> + {
>> + p2m_write_pte(pte_loc, pte, 1);
>> + }
>> + }
>> +
>>
>
> *t = pte.p2m.type;
>> }
>>
>> @@ -208,8 +308,6 @@ done:
>> if (first) unmap_domain_page(first);
>>
>> err:
>> - spin_unlock(&p2m->lock);
>> -
>> return maddr;
>> }
>>
>> @@ -228,7 +326,7 @@ int p2m_pod_decrease_reservation(struct domain *d,
>> }
>>
>> static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
>> - p2m_type_t t)
>> + p2m_type_t t, p2m_access_t a)
>> {
>> paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
>> /* sh, xn and write bit will be defined in the following switches
>> @@ -258,37 +356,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn,
>> unsigned int mattr,
>> break;
>> }
>>
>> - switch (t)
>> - {
>> - case p2m_ram_rw:
>> - e.p2m.xn = 0;
>> - e.p2m.write = 1;
>> - break;
>> -
>> - case p2m_ram_ro:
>> - e.p2m.xn = 0;
>> - e.p2m.write = 0;
>> - break;
>> -
>> - case p2m_iommu_map_rw:
>> - case p2m_map_foreign:
>> - case p2m_grant_map_rw:
>> - case p2m_mmio_direct:
>> - e.p2m.xn = 1;
>> - e.p2m.write = 1;
>> - break;
>> -
>> - case p2m_iommu_map_ro:
>> - case p2m_grant_map_ro:
>> - case p2m_invalid:
>> - e.p2m.xn = 1;
>> - e.p2m.write = 0;
>> - break;
>> -
>> - case p2m_max_real_type:
>> - BUG();
>> - break;
>> - }
>> + p2m_set_permission(&e, t, a);
>>
>> ASSERT(!(pa & ~PAGE_MASK));
>> ASSERT(!(pa & ~PADDR_MASK));
>> @@ -298,13 +366,6 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn,
>> unsigned int mattr,
>> return e;
>> }
>>
>> -static inline void p2m_write_pte(lpae_t *p, lpae_t pte, bool_t
>> flush_cache)
>> -{
>> - write_pte(p, pte);
>> - if ( flush_cache )
>> - clean_xen_dcache(*p);
>> -}
>> -
>> /*
>> * Allocate a new page table page and hook it in via the given entry.
>> * apply_one_level relies on this returning 0 on success
>> @@ -346,7 +407,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>> *entry,
>> for ( i=0 ; i < LPAE_ENTRIES; i++ )
>> {
>> pte = mfn_to_p2m_entry(base_pfn +
>> (i<<(level_shift-LPAE_SHIFT)),
>> - MATTR_MEM, t);
>> + MATTR_MEM, t, p2m->default_access);
>>
>> /*
>> * First and second level super pages set p2m.table = 0,
>> but
>> @@ -366,7 +427,7 @@ static int p2m_create_table(struct domain *d, lpae_t
>> *entry,
>>
>> unmap_domain_page(p);
>>
>> - pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
>> + pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid,
>> p2m->default_access);
>>
>> p2m_write_pte(entry, pte, flush_cache);
>>
>> @@ -498,7 +559,7 @@ static int apply_one_level(struct domain *d,
>> page = alloc_domheap_pages(d, level_shift - PAGE_SHIFT, 0);
>> if ( page )
>> {
>> - pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t);
>> + pte = mfn_to_p2m_entry(page_to_mfn(page), mattr, t, a);
>> if ( level < 3 )
>> pte.p2m.table = 0;
>> p2m_write_pte(entry, pte, flush_cache);
>> @@ -533,7 +594,7 @@ static int apply_one_level(struct domain *d,
>> (level == 3 || !p2m_table(orig_pte)) )
>> {
>> /* New mapping is superpage aligned, make it */
>> - pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t);
>> + pte = mfn_to_p2m_entry(*maddr >> PAGE_SHIFT, mattr, t, a);
>> if ( level < 3 )
>> pte.p2m.table = 0; /* Superpage entry */
>>
>> @@ -640,6 +701,7 @@ static int apply_one_level(struct domain *d,
>>
>> memset(&pte, 0x00, sizeof(pte));
>> p2m_write_pte(entry, pte, flush_cache);
>> + radix_tree_delete(&p2m->mem_access_settings,
>> paddr_to_pfn(*addr));
>>
>> *addr += level_size;
>>
>> @@ -1048,7 +1110,10 @@ int p2m_cache_flush(struct domain *d, xen_pfn_t
>> start_mfn, xen_pfn_t end_mfn)
>>
>> unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>> {
>> - paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
>> + paddr_t p;
>>
>
> Missing blank line after the declaration block.
Ack.
>
>
> + spin_lock(&d->arch.p2m.lock);
>> + p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL, NULL);
>> + spin_unlock(&d->arch.p2m.lock);
>> return p >> PAGE_SHIFT;
>> }
>>
>> @@ -1080,6 +1145,241 @@ err:
>> return page;
>> }
>>
>> +int p2m_mem_access_check(paddr_t gpa, vaddr_t gla,
>> + bool_t access_r, bool_t access_w, bool_t
>> access_x,
>> + bool_t ptw)
>>
>
> The two argument lines should be aligned to the first parameter; IOW, you
> need to remove one space on both lines.
>
> Also, as the function always returns 0 or 1, I would use a bool_t for the
> return value.
I would need to adjust this function on the x86 side as well, where I think
it may return -errno if something is wrong.
>
>
> +{
>> + struct vcpu *v = current;
>> + mem_event_request_t *req = NULL;
>> + xenmem_access_t xma;
>> + bool_t violation;
>> + int rc;
>> +
>> + /* If we have no listener, nothing to do */
>> + if( !mem_event_check_ring( &v->domain->mem_event->access ) )
>>
>
> The spaces in the inner () are not necessary.
>
> Also, aren't you missing a check of p2m->access_required?
I didn't add it in this iteration yet (and n2rwx, rx2rw are also missing).
I have them already in the next iteration, which I will be sending on Monday
after I go back to my office to reset my hung Arndale to test it :-)
>
>
> + {
>> + return 1;
>> + }
>> +
>> + rc = p2m_get_mem_access(v->domain, paddr_to_pfn(gpa), &xma);
>> + if ( rc )
>> + return rc;
>> +
>> + switch (xma)
>>
>
> switch ( ... )
>
Ack.
>
> + {
>> + default:
>>
>
> It looks like all the possible cases of the enum have been defined below.
> Why do you define default as XENMEM_access_n?
>
Just in case, I guess? I'm not entirely certain; this is pretty much just
copy-pasta from the x86 side.
>
> + case XENMEM_access_n:
>>
>
> The "case" is usually aligned to "{". Such as
>
> {
> case ...:
Ack.
>
>
> + violation = access_r || access_w || access_x;
>>
>
> Silly question: where does access_* come from? I can't find any
> definition with CTAGS.
>
They were bool_t inputs into the function, passed from the trap handler in
traps.c. In the next iteration they are going away and will be replaced by
the new combined struct npfec input from my other patch that just landed in
staging.
>
> [..]
>
> + if (!violation)
>>
>
> if ( ... )
Ack.
>
>
> + return 1;
>> +
>> + req = xzalloc(mem_event_request_t);
>> + if ( req )
>> + {
>> + req->reason = MEM_EVENT_REASON_VIOLATION;
>> + req->flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>> + req->gfn = gpa >> PAGE_SHIFT;
>> + req->offset = gpa & ((1 << PAGE_SHIFT) - 1);
>> + req->gla = gla;
>> + req->gla_valid = 1;
>> + req->access_r = access_r;
>> + req->access_w = access_w;
>> + req->access_x = access_x;
>> + req->vcpu_id = v->vcpu_id;
>> +
>> + mem_event_vcpu_pause(v);
>> + mem_access_send_req(v->domain, req);
>> +
>> + xfree(req);
>> +
>> + return 0;
>> + }
>>
>
> Ignoring the access when Xen fails to allocate req sounds strange.
> Shouldn't you at least print a warning?
I'm not too happy with this either, but that's what the x86 side is doing
too. I would prefer retrying to deliver the notification while keeping the
offending domain paused in the meantime. Not entirely sure how to go about
that, though, as I assume that if xzalloc fails we have other things to
worry about already.
>
>
> +
>> + return 1;
>> +}
>>
>
> [..]
>
>
> +int p2m_get_mem_access(struct domain *d, unsigned long gpfn,
>> + xenmem_access_t *access)
>> +{
>>
>
> [..]
>
>
> + if ( (unsigned) index >= ARRAY_SIZE(memaccess) )
>> + return -ERANGE;
>> +
>> + *access = memaccess[ (unsigned) index];
>>
>
> Spurious space after [ ?
>
Ack.
>
> Regards,
>
> --
> Julien Grall
>
Thanks!
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
* [PATCH RFC v2 10/12] xen/arm: Instruction prefetch abort (X) mem_event handling
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (8 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 09/12] xen/arm: Data abort exception (R/W) mem_events Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 14:06 ` [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
` (2 subsequent siblings)
12 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
Add missing structure definition for iabt and update the trap handling
mechanism to only inject the exception if the mem_access checker
decides to do so.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Add definition for instruction abort instruction fetch status codes (enum iabt_ifsc)
and only call p2m_mem_access_check for traps triggered for permission violations.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/arch/arm/traps.c | 36 +++++++++++++++++++++++++++++++++++-
xen/include/asm-arm/processor.h | 40 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 74 insertions(+), 2 deletions(-)
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 860905a..0191d70 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1730,7 +1730,41 @@ err:
static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
union hsr hsr)
{
- register_t addr = READ_SYSREG(FAR_EL2);
+ struct hsr_iabt iabt = hsr.iabt;
+ int rc;
+ register_t addr;
+ vaddr_t gva;
+ paddr_t gpa;
+
+#ifdef CONFIG_ARM_32
+ gva = READ_CP32(HIFAR);
+#else
+ gva = READ_SYSREG64(FAR_EL2);
+#endif
+
+ rc = gva_to_ipa(gva, &gpa);
+ if ( rc == -EFAULT )
+ return;
+
+ switch ( iabt.ifsc )
+ {
+ case IABT_IFSC_PERMISSION_1:
+ case IABT_IFSC_PERMISSION_2:
+ case IABT_IFSC_PERMISSION_3:
+ rc = p2m_mem_access_check(gpa, gva,
+ 1, 0, 1,
+ iabt.s1ptw);
+
+ /* Trap was triggered by mem_access, work here is done */
+ if ( !rc )
+ return;
+ break;
+
+ default:
+ break;
+ }
+
+ addr = READ_SYSREG(FAR_EL2);
inject_iabt_exception(regs, addr, hsr.len);
}
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 0f1500a..c12ccca 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -289,6 +289,36 @@ enum dabt_dfsc {
DABT_DFSC_TLB_CONFLICT = 0b110000,
};
+/* Instruction abort instruction fault status codes */
+enum iabt_ifsc {
+ IABT_IFSC_ADDR_SIZE_0 = 0b000000,
+ IABT_IFSC_ADDR_SIZE_1 = 0b000001,
+ IABT_IFSC_ADDR_SIZE_2 = 0b000010,
+ IABT_IFSC_ADDR_SIZE_3 = 0b000011,
+ IABT_IFSC_TRANSLATION_0 = 0b000100,
+ IABT_IFSC_TRANSLATION_1 = 0b000101,
+ IABT_IFSC_TRANSLATION_2 = 0b000110,
+ IABT_IFSC_TRANSLATION_3 = 0b000111,
+ IABT_IFSC_ACCESS_1 = 0b001001,
+ IABT_IFSC_ACCESS_2 = 0b001010,
+ IABT_IFSC_ACCESS_3 = 0b001011,
+ IABT_IFSC_PERMISSION_1 = 0b001101,
+ IABT_IFSC_PERMISSION_2 = 0b001110,
+ IABT_IFSC_PERMISSION_3 = 0b001111,
+ IABT_IFSC_SYNC_EXT = 0b010000,
+ IABT_IFSC_SYNC_PARITY = 0b011000,
+ IABT_IFSC_SYNC_EXT_TTW_0 = 0b010100,
+ IABT_IFSC_SYNC_EXT_TTW_1 = 0b010101,
+ IABT_IFSC_SYNC_EXT_TTW_2 = 0b010110,
+ IABT_IFSC_SYNC_EXT_TTW_3 = 0b010111,
+ IABT_IFSC_SYNC_PARITY_TTW_0 = 0b011100,
+ IABT_IFSC_SYNC_PARITY_TTW_1 = 0b011101,
+ IABT_IFSC_SYNC_PARITY_TTW_2 = 0b011110,
+ IABT_IFSC_SYNC_PARITY_TTW_3 = 0b011111,
+ IABT_IFSC_ALIGNMENT = 0b100001,
+ IABT_IFSC_TLB_CONFLICT = 0b110000,
+};
+
union hsr {
uint32_t bits;
struct {
@@ -368,10 +398,18 @@ union hsr {
} sysreg; /* HSR_EC_SYSREG */
#endif
+ struct hsr_iabt {
+ unsigned long ifsc:6; /* Instruction fault status code */
+ unsigned long res0:1;
+ unsigned long s1ptw:1; /* Fault during a stage 1 translation table walk */
+ unsigned long res1:1;
+ unsigned long ea:1; /* External abort type */
+ } iabt; /* HSR_EC_INSTR_ABORT_* */
+
struct hsr_dabt {
unsigned long dfsc:6; /* Data Fault Status Code */
unsigned long write:1; /* Write / not Read */
- unsigned long s1ptw:1; /* */
+ unsigned long s1ptw:1; /* Fault during a stage 1 translation table walk */
unsigned long cache:1; /* Cache Maintenance */
unsigned long eat:1; /* External Abort Type */
#ifdef CONFIG_ARM_32
--
2.1.0.rc1
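The classification step in the trap handler above can be sketched on its own: the low six bits of the instruction-abort ISS hold the IFSC, and only the permission faults (0b001101 through 0b001111, i.e. levels 1 to 3) are routed to p2m_mem_access_check. The values mirror the enum in the patch; the helper names are illustrative.

```c
#include <assert.h>

/* Permission-fault status codes from the patch's enum iabt_ifsc. */
#define IABT_IFSC_PERMISSION_1 0x0d   /* 0b001101 */
#define IABT_IFSC_PERMISSION_3 0x0f   /* 0b001111 */

/* The IFSC occupies bits [5:0] of the instruction-abort ISS. */
static unsigned int hsr_ifsc(unsigned int hsr_bits)
{
    return hsr_bits & 0x3f;
}

/* True only for the three permission-fault codes the handler forwards to
 * the mem_access checker; all other codes fall through to injection. */
static int is_permission_fault(unsigned int ifsc)
{
    return ifsc >= IABT_IFSC_PERMISSION_1 && ifsc <= IABT_IFSC_PERMISSION_3;
}
```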
* [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (9 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 10/12] xen/arm: Instruction prefetch abort (X) mem_event handling Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 15:24 ` Jan Beulich
2014-08-27 17:05 ` Daniel De Graaf
2014-08-27 14:06 ` [PATCH RFC v2 12/12] tools/tests: Enable xen-access " Tamas K Lengyel
2014-08-27 15:46 ` [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Andrii Tseglytskyi
12 siblings, 2 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
This patch sets up the infrastructure to support mem_access and mem_event
on ARM and turns on compilation. We define the required XSM functions.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
use them instead of CONFIG_X86.
Split domctl copy-back and p2m type definitions into separate
patches and move this patch to the end of the series.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
xen/common/mem_access.c | 7 +++++-
xen/common/mem_event.c | 28 ++++++++++++++++++---
xen/include/asm-arm/config.h | 3 +++
xen/include/asm-x86/config.h | 3 +++
xen/include/xen/mem_access.h | 19 ---------------
xen/include/xen/mem_event.h | 58 ++++----------------------------------------
xen/include/xsm/dummy.h | 24 +++++++++---------
xen/include/xsm/xsm.h | 24 +++++++++---------
xen/xsm/dummy.c | 4 +--
9 files changed, 68 insertions(+), 102 deletions(-)
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 2bb3171..421f150 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -36,6 +36,12 @@ static inline bool_t mem_access_sanity_check(struct domain *d)
return 0;
return 1;
}
+#elif CONFIG_ARM
+static inline bool_t mem_access_sanity_check(struct domain *d)
+{
+ return 1;
+}
+#endif
int mem_access_memop(unsigned long cmd,
XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
@@ -133,7 +139,6 @@ int mem_access_send_req(struct domain *d, mem_event_request_t *req)
return 0;
}
-#endif /* CONFIG_X86 */
/*
* Local variables:
diff --git a/xen/common/mem_event.c b/xen/common/mem_event.c
index 8bf0cf1..1d8a281 100644
--- a/xen/common/mem_event.c
+++ b/xen/common/mem_event.c
@@ -20,16 +20,22 @@
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
-#ifdef CONFIG_X86
-
+#include <xen/sched.h>
#include <asm/domain.h>
#include <xen/event.h>
#include <xen/wait.h>
#include <asm/p2m.h>
#include <xen/mem_event.h>
#include <xen/mem_access.h>
+
+#if CONFIG_MEM_PAGING==1
#include <asm/mem_paging.h>
+#endif
+
+#if CONFIG_MEM_SHARING==1
#include <asm/mem_sharing.h>
+#endif
+
#include <xsm/xsm.h>
/* for public/io/ring.h macros */
@@ -424,6 +430,7 @@ int __mem_event_claim_slot(struct domain *d, struct mem_event_domain *med,
return mem_event_grab_slot(med, (current->domain != d));
}
+#ifdef CONFIG_X86
static inline bool_t mem_event_sanity_check(struct domain *d)
{
/* Only HAP is supported */
@@ -436,13 +443,21 @@ static inline bool_t mem_event_sanity_check(struct domain *d)
return 1;
}
+#elif CONFIG_ARM
+static inline bool_t mem_event_sanity_check(struct domain *d)
+{
+ return 1;
+}
+#endif
+#if CONFIG_MEM_PAGING==1
/* Registered with Xen-bound event channel for incoming notifications. */
static void mem_paging_notification(struct vcpu *v, unsigned int port)
{
if ( likely(v->domain->mem_event->paging.ring_page != NULL) )
p2m_mem_paging_resume(v->domain);
}
+#endif
/* Registered with Xen-bound event channel for incoming notifications. */
static void mem_access_notification(struct vcpu *v, unsigned int port)
@@ -451,13 +466,16 @@ static void mem_access_notification(struct vcpu *v, unsigned int port)
p2m_mem_access_resume(v->domain);
}
+#if CONFIG_MEM_SHARING==1
/* Registered with Xen-bound event channel for incoming notifications. */
static void mem_sharing_notification(struct vcpu *v, unsigned int port)
{
if ( likely(v->domain->mem_event->share.ring_page != NULL) )
mem_sharing_sharing_resume(v->domain);
}
+#endif
+#if CONFIG_MEM_PAGING==1 || CONFIG_MEM_SHARING==1
int do_mem_event_op(int op, uint32_t domain, void *arg)
{
int ret;
@@ -487,6 +505,7 @@ int do_mem_event_op(int op, uint32_t domain, void *arg)
rcu_unlock_domain(d);
return ret;
}
+#endif
/* Clean up on domain destruction */
void mem_event_cleanup(struct domain *d)
@@ -546,6 +565,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
switch ( mec->mode )
{
+#if CONFIG_MEM_PAGING==1
case XEN_DOMCTL_MEM_EVENT_OP_PAGING:
{
struct mem_event_domain *med = &d->mem_event->paging;
@@ -597,6 +617,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
}
}
break;
+#endif
case XEN_DOMCTL_MEM_EVENT_OP_ACCESS:
{
@@ -633,6 +654,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
}
break;
+#if CONFIG_MEM_SHARING==1
case XEN_DOMCTL_MEM_EVENT_OP_SHARING:
{
struct mem_event_domain *med = &d->mem_event->share;
@@ -671,6 +693,7 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
}
}
break;
+#endif
default:
rc = -ENOSYS;
@@ -710,7 +733,6 @@ void mem_event_vcpu_unpause(struct vcpu *v)
vcpu_unpause(v);
}
-#endif
/*
* Local variables:
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 1c3abcf..537727a 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -55,6 +55,9 @@
#define __LINUX_ARM_ARCH__ 7
#define CONFIG_AEABI
+#define CONFIG_MEM_SHARING 0
+#define CONFIG_MEM_PAGING 0
+
/* Linkage for ARM */
#define __ALIGN .align 2
#define __ALIGN_STR ".align 2"
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 210ff57..525ac44 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -57,6 +57,9 @@
#define CONFIG_LATE_HWDOM 1
#endif
+#define CONFIG_MEM_SHARING 1
+#define CONFIG_MEM_PAGING 1
+
#define HZ 100
#define OPT_CONSOLE_STR "vga"
diff --git a/xen/include/xen/mem_access.h b/xen/include/xen/mem_access.h
index c7dfc48..22e469f 100644
--- a/xen/include/xen/mem_access.h
+++ b/xen/include/xen/mem_access.h
@@ -23,29 +23,10 @@
#ifndef _XEN_ASM_MEM_ACCESS_H
#define _XEN_ASM_MEM_ACCESS_H
-#ifdef CONFIG_X86
-
int mem_access_memop(unsigned long cmd,
XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg);
int mem_access_send_req(struct domain *d, mem_event_request_t *req);
-#else
-
-static inline
-int mem_access_memop(unsigned long cmd,
- XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
-{
- return -ENOSYS;
-}
-
-static inline
-int mem_access_send_req(struct domain *d, mem_event_request_t *req)
-{
- return -ENOSYS;
-}
-
-#endif /* CONFIG_X86 */
-
#endif /* _XEN_ASM_MEM_ACCESS_H */
/*
diff --git a/xen/include/xen/mem_event.h b/xen/include/xen/mem_event.h
index 774909e..cb68463 100644
--- a/xen/include/xen/mem_event.h
+++ b/xen/include/xen/mem_event.h
@@ -24,8 +24,6 @@
#ifndef __MEM_EVENT_H__
#define __MEM_EVENT_H__
-#ifdef CONFIG_X86
-
/* Clean up on domain destruction */
void mem_event_cleanup(struct domain *d);
@@ -67,66 +65,20 @@ void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
mem_event_response_t *rsp);
+#if CONFIG_MEM_PAGING==1 || CONFIG_MEM_SHARING==1
int do_mem_event_op(int op, uint32_t domain, void *arg);
-int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
- XEN_GUEST_HANDLE_PARAM(void) u_domctl);
-
-void mem_event_vcpu_pause(struct vcpu *v);
-void mem_event_vcpu_unpause(struct vcpu *v);
-
#else
-
-static inline void mem_event_cleanup(struct domain *d) {}
-
-static inline bool_t mem_event_check_ring(struct mem_event_domain *med)
-{
- return 0;
-}
-
-static inline int mem_event_claim_slot(struct domain *d,
- struct mem_event_domain *med)
-{
- return -ENOSYS;
-}
-
-static inline int mem_event_claim_slot_nosleep(struct domain *d,
- struct mem_event_domain *med)
-{
- return -ENOSYS;
-}
-
-static inline
-void mem_event_cancel_slot(struct domain *d, struct mem_event_domain *med)
-{}
-
-static inline
-void mem_event_put_request(struct domain *d, struct mem_event_domain *med,
- mem_event_request_t *req)
-{}
-
-static inline
-int mem_event_get_response(struct domain *d, struct mem_event_domain *med,
- mem_event_response_t *rsp)
-{
- return -ENOSYS;
-}
-
static inline int do_mem_event_op(int op, uint32_t domain, void *arg)
{
return -ENOSYS;
}
+#endif
-static inline
int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
- XEN_GUEST_HANDLE_PARAM(void) u_domctl)
-{
- return -ENOSYS;
-}
-
-static inline void mem_event_vcpu_pause(struct vcpu *v) {}
-static inline void mem_event_vcpu_unpause(struct vcpu *v) {}
+ XEN_GUEST_HANDLE_PARAM(void) u_domctl);
-#endif /* CONFIG_X86 */
+void mem_event_vcpu_pause(struct vcpu *v);
+void mem_event_vcpu_unpause(struct vcpu *v);
#endif /* __MEM_EVENT_H__ */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index c5aa316..61677ea 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -507,6 +507,18 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
return xsm_default_action(action, current->domain, d);
}
+static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+{
+ XSM_ASSERT_ACTION(XSM_PRIV);
+ return xsm_default_action(action, current->domain, d);
+}
+
+static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+{
+ XSM_ASSERT_ACTION(XSM_DM_PRIV);
+ return xsm_default_action(action, current->domain, d);
+}
+
#ifdef CONFIG_X86
static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
{
@@ -550,18 +562,6 @@ static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
return xsm_default_action(action, current->domain, d);
}
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
-{
- XSM_ASSERT_ACTION(XSM_PRIV);
- return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
-{
- XSM_ASSERT_ACTION(XSM_DM_PRIV);
- return xsm_default_action(action, current->domain, d);
-}
-
static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
{
XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index a85045d..0b77a4b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -139,6 +139,8 @@ struct xsm_operations {
int (*hvm_param) (struct domain *d, unsigned long op);
int (*hvm_control) (struct domain *d, unsigned long op);
int (*hvm_param_nested) (struct domain *d);
+ int (*mem_event_control) (struct domain *d, int mode, int op);
+ int (*mem_event_op) (struct domain *d, int op);
#ifdef CONFIG_X86
int (*do_mca) (void);
@@ -148,8 +150,6 @@ struct xsm_operations {
int (*hvm_set_pci_link_route) (struct domain *d);
int (*hvm_inject_msi) (struct domain *d);
int (*hvm_ioreq_server) (struct domain *d, int op);
- int (*mem_event_control) (struct domain *d, int mode, int op);
- int (*mem_event_op) (struct domain *d, int op);
int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
int (*apic) (struct domain *d, int cmd);
int (*memtype) (uint32_t access);
@@ -534,6 +534,16 @@ static inline int xsm_hvm_param_nested (xsm_default_t def, struct domain *d)
return xsm_ops->hvm_param_nested(d);
}
+static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+{
+ return xsm_ops->mem_event_control(d, mode, op);
+}
+
+static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+{
+ return xsm_ops->mem_event_op(d, op);
+}
+
#ifdef CONFIG_X86
static inline int xsm_do_mca(xsm_default_t def)
{
@@ -570,16 +580,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int
return xsm_ops->hvm_ioreq_server(d, op);
}
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
-{
- return xsm_ops->mem_event_control(d, mode, op);
-}
-
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
-{
- return xsm_ops->mem_event_op(d, op);
-}
-
static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
{
return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index c95c803..9df9d81 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -116,6 +116,8 @@ void xsm_fixup_ops (struct xsm_operations *ops)
set_to_dummy_if_null(ops, add_to_physmap);
set_to_dummy_if_null(ops, remove_from_physmap);
set_to_dummy_if_null(ops, map_gmfn_foreign);
+ set_to_dummy_if_null(ops, mem_event_control);
+ set_to_dummy_if_null(ops, mem_event_op);
#ifdef CONFIG_X86
set_to_dummy_if_null(ops, do_mca);
@@ -125,8 +127,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
set_to_dummy_if_null(ops, hvm_set_pci_link_route);
set_to_dummy_if_null(ops, hvm_inject_msi);
set_to_dummy_if_null(ops, hvm_ioreq_server);
- set_to_dummy_if_null(ops, mem_event_control);
- set_to_dummy_if_null(ops, mem_event_op);
set_to_dummy_if_null(ops, mem_sharing_op);
set_to_dummy_if_null(ops, apic);
set_to_dummy_if_null(ops, platform_op);
--
2.1.0.rc1
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-27 14:06 ` [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-08-27 15:24 ` Jan Beulich
2014-08-27 17:12 ` Tamas K Lengyel
2014-08-27 17:05 ` Daniel De Graaf
1 sibling, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-27 15:24 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: ian.campbell, tim, ian.jackson, xen-devel, stefano.stabellini,
andres, dgdegra
>>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> --- a/xen/common/mem_event.c
> +++ b/xen/common/mem_event.c
> @@ -20,16 +20,22 @@
> * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> */
>
> -#ifdef CONFIG_X86
> -
> +#include <xen/sched.h>
> #include <asm/domain.h>
> #include <xen/event.h>
> #include <xen/wait.h>
> #include <asm/p2m.h>
> #include <xen/mem_event.h>
> #include <xen/mem_access.h>
This already is quite a mishmash of asm/ and xen/ includes - please
don't make it even worse.
> +
> +#if CONFIG_MEM_PAGING==1
#ifdef please.
Jan
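Jan's "#ifdef please" suggests gating on whether the macro is defined at all rather than on its value. Note this implies defining the macro only on architectures where the feature exists (rather than defining it to 0 on ARM as in this version of the patch), since #ifdef is true for any definition, including one to 0. A minimal sketch of the preferred style:

```c
#include <assert.h>

/* Define the config macro only when the feature is enabled; on a port
 * without the feature, leave it undefined entirely. */
#define CONFIG_MEM_PAGING 1

static int mem_paging_compiled_in(void)
{
#ifdef CONFIG_MEM_PAGING   /* preferred over: #if CONFIG_MEM_PAGING==1 */
    return 1;
#else
    return 0;
#endif
}
```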
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-27 15:24 ` Jan Beulich
@ 2014-08-27 17:12 ` Tamas K Lengyel
2014-08-28 6:39 ` Jan Beulich
0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 17:12 UTC (permalink / raw)
To: Jan Beulich
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
Tamas K Lengyel
On Wed, Aug 27, 2014 at 5:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> > --- a/xen/common/mem_event.c
> > +++ b/xen/common/mem_event.c
> > @@ -20,16 +20,22 @@
> > * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
> 02111-1307 USA
> > */
> >
> > -#ifdef CONFIG_X86
> > -
> > +#include <xen/sched.h>
> > #include <asm/domain.h>
> > #include <xen/event.h>
> > #include <xen/wait.h>
> > #include <asm/p2m.h>
> > #include <xen/mem_event.h>
> > #include <xen/mem_access.h>
>
> This already is quite a mishmash of asm/ and xen/ includes - please
> don't make it even worse.
>
Adding xen/sched.h is required here unfortunately as without it the
compilation breaks on ARM:
In file included from
/home/odroid/workspace/xen/xen/include/asm/domain.h:6:0,
from mem_event.c:23:
/home/odroid/workspace/xen/xen/include/xen/sched.h:251:22: error: field
'arch' has incomplete type
struct arch_vcpu arch;
^
/home/odroid/workspace/xen/xen/include/xen/sched.h:405:24: error: field
'arch' has incomplete type
struct arch_domain arch;
^
make[4]: *** [mem_event.o] Error 1
I can put that include into a #ifdef CONFIG_ARM if that helps.
>
> > +
> > +#if CONFIG_MEM_PAGING==1
>
> #ifdef please.
>
Ack.
>
> Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-27 17:12 ` Tamas K Lengyel
@ 2014-08-28 6:39 ` Jan Beulich
2014-08-28 8:42 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-28 6:39 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
Tamas K Lengyel
>>> On 27.08.14 at 19:12, <tamas.lengyel@zentific.com> wrote:
> On Wed, Aug 27, 2014 at 5:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
>> > --- a/xen/common/mem_event.c
>> > +++ b/xen/common/mem_event.c
>> > @@ -20,16 +20,22 @@
>> > * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
>> 02111-1307 USA
>> > */
>> >
>> > -#ifdef CONFIG_X86
>> > -
>> > +#include <xen/sched.h>
>> > #include <asm/domain.h>
>> > #include <xen/event.h>
>> > #include <xen/wait.h>
>> > #include <asm/p2m.h>
>> > #include <xen/mem_event.h>
>> > #include <xen/mem_access.h>
>>
>> This already is quite a mishmash of asm/ and xen/ includes - please
>> don't make it even worse.
>>
>
> Adding xen/sched.h is required here unfortunately as without it the
> compilation breaks on ARM:
>
> In file included from
> /home/odroid/workspace/xen/xen/include/asm/domain.h:6:0,
> from mem_event.c:23:
> /home/odroid/workspace/xen/xen/include/xen/sched.h:251:22: error: field
> 'arch' has incomplete type
> struct arch_vcpu arch;
> ^
> /home/odroid/workspace/xen/xen/include/xen/sched.h:405:24: error: field
> 'arch' has incomplete type
> struct arch_domain arch;
> ^
> make[4]: *** [mem_event.o] Error 1
>
> I can put that include into a #ifdef CONFIG_ARM if that helps.
My point wasn't the addition of the include, just the place where
you put it among the already present ones.
Jan
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-28 6:39 ` Jan Beulich
@ 2014-08-28 8:42 ` Tamas K Lengyel
2014-08-28 8:54 ` Jan Beulich
0 siblings, 1 reply; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-28 8:42 UTC (permalink / raw)
To: Jan Beulich
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
Tamas K Lengyel
On Thu, Aug 28, 2014 at 8:39 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 27.08.14 at 19:12, <tamas.lengyel@zentific.com> wrote:
> > On Wed, Aug 27, 2014 at 5:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> >> > --- a/xen/common/mem_event.c
> >> > +++ b/xen/common/mem_event.c
> >> > @@ -20,16 +20,22 @@
> >> > * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
> >> 02111-1307 USA
> >> > */
> >> >
> >> > -#ifdef CONFIG_X86
> >> > -
> >> > +#include <xen/sched.h>
> >> > #include <asm/domain.h>
> >> > #include <xen/event.h>
> >> > #include <xen/wait.h>
> >> > #include <asm/p2m.h>
> >> > #include <xen/mem_event.h>
> >> > #include <xen/mem_access.h>
> >>
> >> This already is quite a mishmash of asm/ and xen/ includes - please
> >> don't make it even worse.
> >>
> >
> > Adding xen/sched.h is required here unfortunately as without it the
> > compilation breaks on ARM:
> >
> > In file included from
> > /home/odroid/workspace/xen/xen/include/asm/domain.h:6:0,
> > from mem_event.c:23:
> > /home/odroid/workspace/xen/xen/include/xen/sched.h:251:22: error: field
> > 'arch' has incomplete type
> > struct arch_vcpu arch;
> > ^
> > /home/odroid/workspace/xen/xen/include/xen/sched.h:405:24: error: field
> > 'arch' has incomplete type
> > struct arch_domain arch;
> > ^
> > make[4]: *** [mem_event.o] Error 1
> >
> > I can put that include into a #ifdef CONFIG_ARM if that helps.
>
> My point wasn't the addition of the include, just the place where
> you put it among the already present ones.
>
> Jan
>
I see. I did need to include it before asm/domain.h, or the above compile-time error is triggered.
Tamas
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-28 8:42 ` Tamas K Lengyel
@ 2014-08-28 8:54 ` Jan Beulich
2014-08-28 9:00 ` Tamas K Lengyel
0 siblings, 1 reply; 48+ messages in thread
From: Jan Beulich @ 2014-08-28 8:54 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
Tamas K Lengyel
>>> On 28.08.14 at 10:42, <tamas.lengyel@zentific.com> wrote:
> On Thu, Aug 28, 2014 at 8:39 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>> >>> On 27.08.14 at 19:12, <tamas.lengyel@zentific.com> wrote:
>> > On Wed, Aug 27, 2014 at 5:24 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> >
>> >> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
>> >> > --- a/xen/common/mem_event.c
>> >> > +++ b/xen/common/mem_event.c
>> >> > @@ -20,16 +20,22 @@
>> >> > * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
>> >> 02111-1307 USA
>> >> > */
>> >> >
>> >> > -#ifdef CONFIG_X86
>> >> > -
>> >> > +#include <xen/sched.h>
>> >> > #include <asm/domain.h>
>> >> > #include <xen/event.h>
>> >> > #include <xen/wait.h>
>> >> > #include <asm/p2m.h>
>> >> > #include <xen/mem_event.h>
>> >> > #include <xen/mem_access.h>
>> >>
>> >> This already is quite a mishmash of asm/ and xen/ includes - please
>> >> don't make it even worse.
>> >>
>> >
>> > Adding xen/sched.h is required here unfortunately as without it the
>> > compilation breaks on ARM:
>> >
>> > In file included from
>> > /home/odroid/workspace/xen/xen/include/asm/domain.h:6:0,
>> > from mem_event.c:23:
>> > /home/odroid/workspace/xen/xen/include/xen/sched.h:251:22: error: field
>> > 'arch' has incomplete type
>> > struct arch_vcpu arch;
>> > ^
>> > /home/odroid/workspace/xen/xen/include/xen/sched.h:405:24: error: field
>> > 'arch' has incomplete type
>> > struct arch_domain arch;
>> > ^
>> > make[4]: *** [mem_event.o] Error 1
>> >
>> > I can put that include into a #ifdef CONFIG_ARM if that helps.
>>
>> My point wasn't the addition of the include, just the place where
>> you put it among the already present ones.
>
> I see, I did need to include it before asm/domain.h or the above compile
> time error is triggered.
So in the end this shows that ARM including xen/sched.h from its
asm/domain.h is bogus - this just can't work. I.e. a first step would
be to clean that up. Furthermore I'm sure you would achieve a
working build even if you simply replaced the inclusion of
asm/domain.h with that of xen/sched.h (including the former from
other than xen/domain.h or arch-specific code is suspicious anyway,
i.e. should be taken care of properly in the course of making the
source file common).
Jan
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-28 8:54 ` Jan Beulich
@ 2014-08-28 9:00 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-28 9:00 UTC (permalink / raw)
To: Jan Beulich
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Daniel De Graaf,
Tamas K Lengyel
On Thu, Aug 28, 2014 at 10:54 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> On 28.08.14 at 10:42, <tamas.lengyel@zentific.com> wrote:
> > On Thu, Aug 28, 2014 at 8:39 AM, Jan Beulich <JBeulich@suse.com> wrote:
> >
> >> >>> On 27.08.14 at 19:12, <tamas.lengyel@zentific.com> wrote:
> >> > On Wed, Aug 27, 2014 at 5:24 PM, Jan Beulich <JBeulich@suse.com>
> wrote:
> >> >
> >> >> >>> On 27.08.14 at 16:06, <tklengyel@sec.in.tum.de> wrote:
> >> >> > --- a/xen/common/mem_event.c
> >> >> > +++ b/xen/common/mem_event.c
> >> >> > @@ -20,16 +20,22 @@
> >> >> > * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
> >> >> 02111-1307 USA
> >> >> > */
> >> >> >
> >> >> > -#ifdef CONFIG_X86
> >> >> > -
> >> >> > +#include <xen/sched.h>
> >> >> > #include <asm/domain.h>
> >> >> > #include <xen/event.h>
> >> >> > #include <xen/wait.h>
> >> >> > #include <asm/p2m.h>
> >> >> > #include <xen/mem_event.h>
> >> >> > #include <xen/mem_access.h>
> >> >>
> >> >> This already is quite a mishmash of asm/ and xen/ includes - please
> >> >> don't make it even worse.
> >> >>
> >> >
> >> > Adding xen/sched.h is required here unfortunately as without it the
> >> > compilation breaks on ARM:
> >> >
> >> > In file included from
> >> > /home/odroid/workspace/xen/xen/include/asm/domain.h:6:0,
> >> > from mem_event.c:23:
> >> > /home/odroid/workspace/xen/xen/include/xen/sched.h:251:22: error:
> field
> >> > 'arch' has incomplete type
> >> > struct arch_vcpu arch;
> >> > ^
> >> > /home/odroid/workspace/xen/xen/include/xen/sched.h:405:24: error:
> field
> >> > 'arch' has incomplete type
> >> > struct arch_domain arch;
> >> > ^
> >> > make[4]: *** [mem_event.o] Error 1
> >> >
> >> > I can put that include into a #ifdef CONFIG_ARM if that helps.
> >>
> >> My point wasn't the addition of the include, just the place where
> >> you put it among the already present ones.
> >
> > I see, I did need to include it before asm/domain.h or the above compile
> > time error is triggered.
>
> So in the end this shows that ARM including xen/sched.h from its
> asm/domain.h is bogus - this just can't work. I.e. a first step would
> be to clean that up. Furthermore I'm sure you would achieve a
> working build even if you simply replaced the inclusion of
> asm/domain.h with that of xen/sched.h (including the former from
> other than xen/domain.h or arch-specific code is suspicious anyway,
> i.e. should be taken care of properly in the course of making the
> source file common).
>
> Jan
>
You are right: including asm/domain.h is no longer needed; it compiles fine without it.
Tamas
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-27 14:06 ` [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
2014-08-27 15:24 ` Jan Beulich
@ 2014-08-27 17:05 ` Daniel De Graaf
2014-08-27 17:13 ` Tamas K Lengyel
1 sibling, 1 reply; 48+ messages in thread
From: Daniel De Graaf @ 2014-08-27 17:05 UTC (permalink / raw)
To: Tamas K Lengyel, xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich
On 08/27/2014 10:06 AM, Tamas K Lengyel wrote:
> This patch sets up the infrastructure to support mem_access and mem_event
> on ARM and turns on compilation. We define the required XSM functions.
>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> ---
> v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
> use them instead of CONFIG_X86.
> Split domctl copy-back and p2m type definitions into separate
> patches and move this patch to the end of the series.
>
> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Since you are moving hooks out of #ifdef CONFIG_X86, can you also move the
implementations in xen/xsm/flask/hooks.c so they are wired up properly on
ARM? It sounds like all this will be changing to HAS_MEM_ACCESS anyway, so
I'll wait on that for an Ack.
* Re: [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM.
2014-08-27 17:05 ` Daniel De Graaf
@ 2014-08-27 17:13 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 17:13 UTC (permalink / raw)
To: Daniel De Graaf
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Tamas K Lengyel
On Wed, Aug 27, 2014 at 7:05 PM, Daniel De Graaf <dgdegra@tycho.nsa.gov>
wrote:
> On 08/27/2014 10:06 AM, Tamas K Lengyel wrote:
>
>> This patch sets up the infrastructure to support mem_access and mem_event
>> on ARM and turns on compilation. We define the required XSM functions.
>>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>> ---
>> v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
>> use them instead of CONFIG_X86.
>> Split domctl copy-back and p2m type definitions into separate
>> patches and move this patch to the end of the series.
>>
>> Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
>>
>
> Since you are moving hooks out of #ifdef CONFIG_X86, can you also move the
> implementations in xen/xsm/flask/hooks.c so they are wired up properly on
> ARM? It sounds like all this will be changing to HAS_MEM_ACCESS anyway, so
> I'll wait on that for an Ack.
OK.
* [PATCH RFC v2 12/12] tools/tests: Enable xen-access on ARM
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (10 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 11/12] xen/arm: Enable the compilation of mem_access and mem_event on ARM Tamas K Lengyel
@ 2014-08-27 14:06 ` Tamas K Lengyel
2014-08-27 15:46 ` [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Andrii Tseglytskyi
12 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 14:06 UTC (permalink / raw)
To: xen-devel
Cc: ian.campbell, tim, ian.jackson, stefano.stabellini, andres,
jbeulich, dgdegra, Tamas K Lengyel
On ARM the guest memory doesn't start from 0, thus we include the
required headers and define GUEST_RAM_BASE_PFN in both architectures
to be passed to mem_access as the starting pfn.
We also define the ARM-specific test_and_set_bit function.
Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
---
tools/tests/xen-access/Makefile | 4 +--
tools/tests/xen-access/xen-access.c | 55 +++++++++++++++++++++++++++++--------
2 files changed, 45 insertions(+), 14 deletions(-)
diff --git a/tools/tests/xen-access/Makefile b/tools/tests/xen-access/Makefile
index 65eef99..698355c 100644
--- a/tools/tests/xen-access/Makefile
+++ b/tools/tests/xen-access/Makefile
@@ -7,9 +7,7 @@ CFLAGS += $(CFLAGS_libxenctrl)
CFLAGS += $(CFLAGS_libxenguest)
CFLAGS += $(CFLAGS_xeninclude)
-TARGETS-y :=
-TARGETS-$(CONFIG_X86) += xen-access
-TARGETS := $(TARGETS-y)
+TARGETS := xen-access
.PHONY: all
all: build
diff --git a/tools/tests/xen-access/xen-access.c b/tools/tests/xen-access/xen-access.c
index 090df5f..6af6ac3 100644
--- a/tools/tests/xen-access/xen-access.c
+++ b/tools/tests/xen-access/xen-access.c
@@ -41,22 +41,16 @@
#include <xenctrl.h>
#include <xen/mem_event.h>
-#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
-#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
-#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
-
-/* Spinlock and mem event definitions */
-
-#define SPIN_LOCK_UNLOCKED 0
+#ifdef CONFIG_X86
+#define GUEST_RAM_BASE_PFN 0ULL
#define ADDR (*(volatile long *) addr)
+
/**
* test_and_set_bit - Set a bit and return its old value
* @nr: Bit to set
* @addr: Address to count from
*
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
*/
static inline int test_and_set_bit(int nr, volatile void *addr)
{
@@ -69,6 +63,43 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
return oldbit;
}
+#else /* CONFIG_X86 */
+
+#include <xen/arch-arm.h>
+
+#define PAGE_SHIFT 12
+#define GUEST_RAM_BASE_PFN GUEST_RAM_BASE >> PAGE_SHIFT
+#define BITS_PER_WORD 32
+#define BIT_MASK(nr) (1UL << ((nr) % BITS_PER_WORD))
+#define BIT_WORD(nr) ((nr) / BITS_PER_WORD)
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ */
+static inline int test_and_set_bit(int nr, volatile void *addr)
+{
+ unsigned int mask = BIT_MASK(nr);
+ volatile unsigned int *p =
+ ((volatile unsigned int *)addr) + BIT_WORD(nr);
+ unsigned int old = *p;
+
+ *p = old | mask;
+ return (old & mask) != 0;
+}
+
+#endif
+
+#define DPRINTF(a, b...) fprintf(stderr, a, ## b)
+#define ERROR(a, b...) fprintf(stderr, a "\n", ## b)
+#define PERROR(a, b...) fprintf(stderr, a ": %s\n", ## b, strerror(errno))
+
+/* Spinlock and mem event definitions */
+
+#define SPIN_LOCK_UNLOCKED 0
+
typedef int spinlock_t;
static inline void spin_lock(spinlock_t *lock)
@@ -476,6 +507,7 @@ int main(int argc, char *argv[])
sigaction(SIGINT, &act, NULL);
sigaction(SIGALRM, &act, NULL);
+#ifdef CONFIG_X86
/* Set whether the access listener is required */
rc = xc_domain_set_access_required(xch, domain_id, required);
if ( rc < 0 )
@@ -483,6 +515,7 @@ int main(int argc, char *argv[])
ERROR("Error %d setting mem_access listener required\n", rc);
goto exit;
}
+#endif
/* Set the default access type and convert all pages to it */
rc = xc_set_mem_access(xch, domain_id, default_access, ~0ull, 0);
@@ -492,7 +525,7 @@ int main(int argc, char *argv[])
goto exit;
}
- rc = xc_set_mem_access(xch, domain_id, default_access, 0,
+ rc = xc_set_mem_access(xch, domain_id, default_access, GUEST_RAM_BASE_PFN,
xenaccess->domain_info->max_pages);
if ( rc < 0 )
{
@@ -520,7 +553,7 @@ int main(int argc, char *argv[])
/* Unregister for every event */
rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, ~0ull, 0);
- rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, 0,
+ rc = xc_set_mem_access(xch, domain_id, XENMEM_access_rwx, GUEST_RAM_BASE_PFN,
xenaccess->domain_info->max_pages);
rc = xc_hvm_param_set(xch, domain_id, HVM_PARAM_MEMORY_EVENT_INT3, HVMPME_mode_disabled);
--
2.1.0.rc1
* Re: [PATCH RFC v2 00/12] Mem_event and mem_access for ARM
2014-08-27 14:06 [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Tamas K Lengyel
` (11 preceding siblings ...)
2014-08-27 14:06 ` [PATCH RFC v2 12/12] tools/tests: Enable xen-access " Tamas K Lengyel
@ 2014-08-27 15:46 ` Andrii Tseglytskyi
2014-08-27 17:05 ` Tamas K Lengyel
12 siblings, 1 reply; 48+ messages in thread
From: Andrii Tseglytskyi @ 2014-08-27 15:46 UTC (permalink / raw)
To: Tamas K Lengyel
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, andres, Jan Beulich, dgdegra
Hi,
Which Xen version are you targeting? Is this series for Xen 4.5 or later?
Regards,
Andrii
On Wed, Aug 27, 2014 at 5:06 PM, Tamas K Lengyel
<tklengyel@sec.in.tum.de> wrote:
> The ARM virtualization extension provides 2-stage paging, a similar mechanisms
> to Intel's EPT, which can be used to trace the memory accesses performed by
> the guest systems. This series moves the mem_access and mem_event codebase
> into Xen common, performs some code cleanup and architecture specific division
> of components, then sets up the necessary infrastructure in the ARM code
> to deliver the event on R/W/X traps. Lastly, we turn on the compilation of
> the xen-access test tool.
>
> This version of the series has been fully tested and is functional on an
> Arndale board.
>
> Known missing parts:
> - Page-granularity adjustments (PAGE_ORDER_*) to shatter
> large LPAE pages if necessary.
> - Listener required feature to crash the domain if no listener found
> - n2rwx, rx2rw
>
> This PATCH RFC version is also available at:
> https://github.com/tklengyel/xen/tree/arm_memaccess_rfc2
>
> Tamas K Lengyel (12):
> xen: Relocate mem_access and mem_event into common.
> xen/mem_event: Clean out superflous white-spaces
> xen/mem_event: Relax error condition on debug builds
> xen/mem_event: Abstract architecture specific sanity checks
> xen/mem_access: Abstract architecture specific sanity check
> tools/libxc: Allocate magic page for mem access on ARM
> xen/arm: p2m type definitions and changes
> xen/arm: Add mem_event domctl and mem_access memop.
> xen/arm: Data abort exception (R/W) mem_events.
> xen/arm: Instruction prefetch abort (X) mem_event handling
> xen/arm: Enable the compilation of mem_access and mem_event on ARM.
> tools/tests: Enable xen-access on ARM
>
> MAINTAINERS | 6 +
> tools/libxc/xc_dom_arm.c | 6 +-
> tools/tests/xen-access/Makefile | 4 +-
> tools/tests/xen-access/xen-access.c | 55 ++-
> xen/arch/arm/domctl.c | 34 +-
> xen/arch/arm/mm.c | 20 +-
> xen/arch/arm/p2m.c | 444 +++++++++++++++++----
> xen/arch/arm/traps.c | 73 +++-
> xen/arch/x86/domctl.c | 2 +-
> xen/arch/x86/hvm/hvm.c | 61 +--
> xen/arch/x86/mm/Makefile | 2 -
> xen/arch/x86/mm/hap/nested_ept.c | 2 +-
> xen/arch/x86/mm/hap/nested_hap.c | 2 +-
> xen/arch/x86/mm/mem_access.c | 133 -------
> xen/arch/x86/mm/mem_event.c | 705 ----------------------------------
> xen/arch/x86/mm/mem_paging.c | 2 +-
> xen/arch/x86/mm/mem_sharing.c | 2 +-
> xen/arch/x86/mm/p2m-pod.c | 2 +-
> xen/arch/x86/mm/p2m-pt.c | 2 +-
> xen/arch/x86/mm/p2m.c | 2 +-
> xen/arch/x86/x86_64/compat/mm.c | 4 +-
> xen/arch/x86/x86_64/mm.c | 4 +-
> xen/common/Makefile | 2 +
> xen/common/domain.c | 1 +
> xen/common/mem_access.c | 150 ++++++++
> xen/common/mem_event.c | 744 ++++++++++++++++++++++++++++++++++++
> xen/common/memory.c | 62 +++
> xen/include/asm-arm/config.h | 3 +
> xen/include/asm-arm/mm.h | 1 -
> xen/include/asm-arm/p2m.h | 107 ++++--
> xen/include/asm-arm/processor.h | 70 +++-
> xen/include/asm-x86/config.h | 3 +
> xen/include/asm-x86/hvm/hvm.h | 6 -
> xen/include/asm-x86/mem_access.h | 39 --
> xen/include/asm-x86/mem_event.h | 82 ----
> xen/include/asm-x86/mm.h | 2 -
> xen/include/xen/mem_access.h | 39 ++
> xen/include/xen/mem_event.h | 93 +++++
> xen/include/xen/mm.h | 6 +
> xen/include/xen/sched.h | 1 -
> xen/include/xsm/dummy.h | 24 +-
> xen/include/xsm/xsm.h | 24 +-
> xen/xsm/dummy.c | 4 +-
> 43 files changed, 1843 insertions(+), 1187 deletions(-)
> delete mode 100644 xen/arch/x86/mm/mem_access.c
> delete mode 100644 xen/arch/x86/mm/mem_event.c
> create mode 100644 xen/common/mem_access.c
> create mode 100644 xen/common/mem_event.c
> delete mode 100644 xen/include/asm-x86/mem_access.h
> delete mode 100644 xen/include/asm-x86/mem_event.h
> create mode 100644 xen/include/xen/mem_access.h
> create mode 100644 xen/include/xen/mem_event.h
>
> --
> 2.1.0.rc1
>
>
--
Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com
* Re: [PATCH RFC v2 00/12] Mem_event and mem_access for ARM
2014-08-27 15:46 ` [PATCH RFC v2 00/12] Mem_event and mem_access for ARM Andrii Tseglytskyi
@ 2014-08-27 17:05 ` Tamas K Lengyel
0 siblings, 0 replies; 48+ messages in thread
From: Tamas K Lengyel @ 2014-08-27 17:05 UTC (permalink / raw)
To: Andrii Tseglytskyi
Cc: Ian Campbell, Tim Deegan, Ian Jackson, xen-devel@lists.xen.org,
Stefano Stabellini, Andres Lagar-Cavilla, Jan Beulich,
Daniel De Graaf, Tamas K Lengyel
On Wed, Aug 27, 2014 at 5:46 PM, Andrii Tseglytskyi <
andrii.tseglytskyi@globallogic.com> wrote:
> Hi,
>
> Which Xen version are targeting ? Is this series for Xen 4.5 or later?
>
> Regards,
> Andrii
>
>
I was hoping I could get this in before the feature freeze for 4.5, but that
largely depends on how the review process goes. It's certainly not the end of
the world to push this to 4.6. The series is functional as is already, so
only minor issues/features are left that I would say are blockers (from my
point of view, that is).
Tamas