* [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM
@ 2014-09-29 15:55 Tamas K Lengyel
  2014-09-29 15:55 ` [PATCH for-4.5 v12 1/3] xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs Tamas K Lengyel
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 15:55 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, dgdegra, stefano.stabellini, ian.campbell, Tamas K Lengyel

These patches are the refactoring and minor patches that will be required
for future mem_access support on ARM that could be merged now.

This PATCH series is also available at:
https://github.com/tklengyel/xen/tree/arm_memaccess_12-for-4.5

Julien Grall (1):
  xen/arm: Implement domain_get_maximum_gpfn

Tamas K Lengyel (2):
  xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs
  xen/arm: Add p2m_set_permission and p2m_shatter_page helpers.

 xen/arch/arm/mm.c       |   2 +-
 xen/arch/arm/p2m.c      | 137 +++++++++++++++++++++++++++++++++---------------
 xen/include/xsm/dummy.h |  26 ++++-----
 xen/include/xsm/xsm.h   |  29 +++++-----
 xen/xsm/dummy.c         |   7 ++-
 xen/xsm/flask/hooks.c   |  33 +++++++-----
 6 files changed, 151 insertions(+), 83 deletions(-)

-- 
2.1.0


* [PATCH for-4.5 v12 1/3] xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs
  2014-09-29 15:55 [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Tamas K Lengyel
@ 2014-09-29 15:55 ` Tamas K Lengyel
  2014-09-29 15:55 ` [PATCH for-4.5 v12 2/3] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 15:55 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, dgdegra, stefano.stabellini, ian.campbell, Tamas K Lengyel

This patch wraps the XSM code corresponding to the mem_access and
mem_event code-paths into HAS_MEM_ACCESS ifdefs.
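
The two dummy hooks being moved below keep their existing default policy:
xsm_mem_event_control asserts XSM_PRIV (roughly, only a privileged caller
is allowed), while xsm_mem_event_op asserts XSM_DM_PRIV (the target
domain's device model is allowed as well). The following is a simplified,
self-contained model of that dispatch, for illustration only; the names
are invented and this is not the real Xen xsm_default_action code.

/*
 * Simplified model of the default-policy dispatch used by the hooks
 * below.  Illustration only: struct/function names are invented and do
 * not exist in Xen.
 */
#include <stdio.h>
#include <errno.h>

enum model_xsm_default { MODEL_XSM_PRIV, MODEL_XSM_DM_PRIV };

struct model_domain {
    int is_privileged;                  /* e.g. dom0 */
    const struct model_domain *target;  /* domain this one is a device model for */
};

static int model_default_action(enum model_xsm_default action,
                                const struct model_domain *src,
                                const struct model_domain *tgt)
{
    switch ( action )
    {
    case MODEL_XSM_DM_PRIV:
        if ( tgt && src->target == tgt )
            return 0;                   /* device model of the target domain */
        /* fall through to the privileged check */
    case MODEL_XSM_PRIV:
        return src->is_privileged ? 0 : -EPERM;
    }
    return -EPERM;
}

int main(void)
{
    struct model_domain dom0 = { .is_privileged = 1 };
    struct model_domain domU = { 0 };
    struct model_domain dm   = { .is_privileged = 0, .target = &domU };

    /* mem_event_control is XSM_PRIV: dom0 passes, the device model does not. */
    printf("%d %d\n", model_default_action(MODEL_XSM_PRIV, &dom0, &domU),
                      model_default_action(MODEL_XSM_PRIV, &dm, &domU));
    /* mem_event_op is XSM_DM_PRIV: the device model passes as well. */
    printf("%d\n", model_default_action(MODEL_XSM_DM_PRIV, &dm, &domU));
    return 0;
}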

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
v12: Split enabling mem_access on ARM from this patch into a separate patch
     so this can be merged independently.

v8: All MEM_* flags have been converted to HAS_* and moved into config/*.mk

v3: Wrap mem_event related functions in XSM into #ifdef HAS_MEM_ACCESS
       blocks.
    Update XSM hooks in flask to properly wire it up on ARM.

v2: Add CONFIG_MEM_PAGING and CONFIG_MEM_SHARING definitions and
       use them instead of CONFIG_X86.
    Split domctl copy-back and p2m type definitions into separate
       patches and move this patch to the end of the series.
---
 xen/include/xsm/dummy.h | 26 ++++++++++++++------------
 xen/include/xsm/xsm.h   | 29 +++++++++++++++++------------
 xen/xsm/dummy.c         |  7 +++++--
 xen/xsm/flask/hooks.c   | 33 ++++++++++++++++++++-------------
 4 files changed, 56 insertions(+), 39 deletions(-)

diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index df55e70..f20e89c 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -513,6 +513,20 @@ static XSM_INLINE int xsm_hvm_param_nested(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+#endif
+
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
 {
@@ -556,18 +570,6 @@ static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_mem_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
-{
-    XSM_ASSERT_ACTION(XSM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 6c1c079..4ce089f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -141,6 +141,11 @@ struct xsm_operations {
     int (*hvm_param_nested) (struct domain *d);
     int (*get_vnumainfo) (struct domain *d);
 
+#ifdef HAS_MEM_ACCESS
+    int (*mem_event_control) (struct domain *d, int mode, int op);
+    int (*mem_event_op) (struct domain *d, int op);
+#endif
+
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
@@ -149,8 +154,6 @@ struct xsm_operations {
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*hvm_ioreq_server) (struct domain *d, int op);
-    int (*mem_event_control) (struct domain *d, int mode, int op);
-    int (*mem_event_op) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -540,6 +543,18 @@ static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
     return xsm_ops->get_vnumainfo(d);
 }
 
+#ifdef HAS_MEM_ACCESS
+static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+{
+    return xsm_ops->mem_event_control(d, mode, op);
+}
+
+static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
+{
+    return xsm_ops->mem_event_op(d, op);
+}
+#endif
+
 #ifdef CONFIG_X86
 static inline int xsm_do_mca(xsm_default_t def)
 {
@@ -576,16 +591,6 @@ static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int
     return xsm_ops->hvm_ioreq_server(d, op);
 }
 
-static inline int xsm_mem_event_control (xsm_default_t def, struct domain *d, int mode, int op)
-{
-    return xsm_ops->mem_event_control(d, mode, op);
-}
-
-static inline int xsm_mem_event_op (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->mem_event_op(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 0826a8b..8eb3050 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -118,6 +118,11 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, remove_from_physmap);
     set_to_dummy_if_null(ops, map_gmfn_foreign);
 
+#ifdef HAS_MEM_ACCESS
+    set_to_dummy_if_null(ops, mem_event_control);
+    set_to_dummy_if_null(ops, mem_event_op);
+#endif
+
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
@@ -126,8 +131,6 @@ void xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, hvm_ioreq_server);
-    set_to_dummy_if_null(ops, mem_event_control);
-    set_to_dummy_if_null(ops, mem_event_op);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, platform_op);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index df05566..8de5e49 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -577,6 +577,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_iomem_permission:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_set_target:
+#ifdef HAS_MEM_ACCESS
+    case XEN_DOMCTL_mem_event_op:
+#endif
 #ifdef CONFIG_X86
     /* These have individual XSM hooks (arch/x86/domctl.c) */
     case XEN_DOMCTL_shadow_op:
@@ -584,7 +587,6 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_bind_pt_irq:
     case XEN_DOMCTL_unbind_pt_irq:
     case XEN_DOMCTL_ioport_mapping:
-    case XEN_DOMCTL_mem_event_op:
     /* These have individual XSM hooks (drivers/passthrough/iommu.c) */
     case XEN_DOMCTL_get_device_group:
     case XEN_DOMCTL_test_assign_device:
@@ -1189,6 +1191,18 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI */
 
+#ifdef HAS_MEM_ACCESS
+static int flask_mem_event_control(struct domain *d, int mode, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+
+static int flask_mem_event_op(struct domain *d, int op)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
+}
+#endif /* HAS_MEM_ACCESS */
+
 #ifdef CONFIG_X86
 static int flask_do_mca(void)
 {
@@ -1299,16 +1313,6 @@ static int flask_hvm_ioreq_server(struct domain *d, int op)
     return current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
 }
 
-static int flask_mem_event_control(struct domain *d, int mode, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
-static int flask_mem_event_op(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__MEM_EVENT);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1577,6 +1581,11 @@ static struct xsm_operations flask_ops = {
     .deassign_device = flask_deassign_device,
 #endif
 
+#ifdef HAS_MEM_ACCESS
+    .mem_event_control = flask_mem_event_control,
+    .mem_event_op = flask_mem_event_op,
+#endif
+
 #ifdef CONFIG_X86
     .do_mca = flask_do_mca,
     .shadow_control = flask_shadow_control,
@@ -1585,8 +1594,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
     .hvm_inject_msi = flask_hvm_inject_msi,
     .hvm_ioreq_server = flask_hvm_ioreq_server,
-    .mem_event_control = flask_mem_event_control,
-    .mem_event_op = flask_mem_event_op,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .platform_op = flask_platform_op,
-- 
2.1.0


* [PATCH for-4.5 v12 2/3] xen/arm: Implement domain_get_maximum_gpfn
  2014-09-29 15:55 [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Tamas K Lengyel
  2014-09-29 15:55 ` [PATCH for-4.5 v12 1/3] xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs Tamas K Lengyel
@ 2014-09-29 15:55 ` Tamas K Lengyel
  2014-09-29 15:55 ` [PATCH for-4.5 v12 3/3] xen/arm: Add p2m_set_permission and p2m_shatter_page helpers Tamas K Lengyel
  2014-10-01 10:59 ` [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Ian Campbell
  3 siblings, 0 replies; 7+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 15:55 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, dgdegra, Julien Grall, stefano.stabellini, ian.campbell

From: Julien Grall <julien.grall@linaro.org>

The function domain_get_maximum_gpfn returns the maximum gpfn ever
mapped in the guest. We can use d->arch.p2m.max_mapped_gfn for this purpose.

We use this in xenaccess to prevent the user from attempting to set page
permissions on pages which don't exist for the domain, as a non-arch-specific
sanity check.
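
As an illustration of that non-arch-specific sanity check, a self-contained
sketch follows. It is not code from xenaccess; the helper name is
hypothetical, and a real tool would obtain max_gpfn from the hypervisor
(the XENMEM_maximum_gpfn memory op ends up calling
domain_get_maximum_gpfn).

/*
 * Sketch only: reject permission changes on gpfns that were never
 * mapped for the domain.  check_gpfn_range() is a hypothetical helper;
 * max_gpfn stands in for the value returned by the hypervisor.
 */
#include <stdio.h>
#include <errno.h>

static int check_gpfn_range(unsigned long first_gpfn, unsigned long nr,
                            unsigned long max_gpfn)
{
    if ( nr == 0 || first_gpfn > max_gpfn || nr - 1 > max_gpfn - first_gpfn )
        return -EINVAL;        /* outside anything ever mapped for the domain */
    return 0;
}

int main(void)
{
    unsigned long max_gpfn = 0x40000;                        /* example value */

    printf("%d\n", check_gpfn_range(0x1000, 16, max_gpfn));  /* 0       */
    printf("%d\n", check_gpfn_range(0x40001, 1, max_gpfn));  /* -EINVAL */
    return 0;
}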

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index c5b48ef..439cb01 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -971,7 +971,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    return d->arch.p2m.max_mapped_gfn;
 }
 
 void share_xen_page_with_guest(struct page_info *page,
-- 
2.1.0


* [PATCH for-4.5 v12 3/3] xen/arm: Add p2m_set_permission and p2m_shatter_page helpers.
  2014-09-29 15:55 [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Tamas K Lengyel
  2014-09-29 15:55 ` [PATCH for-4.5 v12 1/3] xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs Tamas K Lengyel
  2014-09-29 15:55 ` [PATCH for-4.5 v12 2/3] xen/arm: Implement domain_get_maximum_gpfn Tamas K Lengyel
@ 2014-09-29 15:55 ` Tamas K Lengyel
  2014-10-01 10:59 ` [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Ian Campbell
  3 siblings, 0 replies; 7+ messages in thread
From: Tamas K Lengyel @ 2014-09-29 15:55 UTC (permalink / raw)
  To: xen-devel; +Cc: tim, dgdegra, stefano.stabellini, ian.campbell, Tamas K Lengyel

Signed-off-by: Tamas K Lengyel <tklengyel@sec.in.tum.de>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: Julien Grall <julien.grall@linaro.org>
---
v8: Determine level_shift in p2m_shatter_page instead of passing it as an argument.
---
 xen/arch/arm/p2m.c | 137 ++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 94 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 70929fc..1a72ea7 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -227,6 +227,76 @@ int p2m_pod_decrease_reservation(struct domain *d,
     return -ENOSYS;
 }
 
+static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
+{
+    /* First apply type permissions */
+    switch ( t )
+    {
+    case p2m_ram_rw:
+        e->p2m.xn = 0;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_ram_ro:
+        e->p2m.xn = 0;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_iommu_map_rw:
+    case p2m_map_foreign:
+    case p2m_grant_map_rw:
+    case p2m_mmio_direct:
+        e->p2m.xn = 1;
+        e->p2m.write = 1;
+        break;
+
+    case p2m_iommu_map_ro:
+    case p2m_grant_map_ro:
+    case p2m_invalid:
+        e->p2m.xn = 1;
+        e->p2m.write = 0;
+        break;
+
+    case p2m_max_real_type:
+        BUG();
+        break;
+    }
+
+    /* Then restrict with access permissions */
+    switch ( a )
+    {
+    case p2m_access_rwx:
+        break;
+    case p2m_access_wx:
+        e->p2m.read = 0;
+        break;
+    case p2m_access_rw:
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_w:
+        e->p2m.read = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_rx:
+    case p2m_access_rx2rw:
+        e->p2m.write = 0;
+        break;
+    case p2m_access_x:
+        e->p2m.write = 0;
+        e->p2m.read = 0;
+        break;
+    case p2m_access_r:
+        e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    case p2m_access_n:
+    case p2m_access_n2rwx:
+        e->p2m.read = e->p2m.write = 0;
+        e->p2m.xn = 1;
+        break;
+    }
+}
+
 static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
                                p2m_type_t t)
 {
@@ -258,37 +328,8 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
     }
 
-    switch (t)
-    {
-    case p2m_ram_rw:
-        e.p2m.xn = 0;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_ram_ro:
-        e.p2m.xn = 0;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_iommu_map_rw:
-    case p2m_map_foreign:
-    case p2m_grant_map_rw:
-    case p2m_mmio_direct:
-        e.p2m.xn = 1;
-        e.p2m.write = 1;
-        break;
-
-    case p2m_iommu_map_ro:
-    case p2m_grant_map_ro:
-    case p2m_invalid:
-        e.p2m.xn = 1;
-        e.p2m.write = 0;
-        break;
-
-    case p2m_max_real_type:
-        BUG();
-        break;
-    }
+    /* We pass p2m_access_rwx as a placeholder for now. */
+    p2m_set_permission(&e, t, p2m_access_rwx);
 
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
@@ -451,6 +492,26 @@ static const paddr_t level_masks[] =
 static const paddr_t level_shifts[] =
     { ZEROETH_SHIFT, FIRST_SHIFT, SECOND_SHIFT, THIRD_SHIFT };
 
+static int p2m_shatter_page(struct domain *d,
+                            lpae_t *entry,
+                            unsigned int level,
+                            bool_t flush_cache)
+{
+    const paddr_t level_shift = level_shifts[level];
+    int rc = p2m_create_table(d, entry,
+                              level_shift - PAGE_SHIFT, flush_cache);
+
+    if ( !rc )
+    {
+        struct p2m_domain *p2m = &d->arch.p2m;
+        p2m->stats.shattered[level]++;
+        p2m->stats.mappings[level]--;
+        p2m->stats.mappings[level+1] += LPAE_ENTRIES;
+    }
+
+    return rc;
+}
+
 /*
  * 0   == (P2M_ONE_DESCEND) continue to descend the tree
  * +ve == (P2M_ONE_PROGRESS_*) handled at this level, continue, flush,
@@ -582,14 +643,9 @@ static int apply_one_level(struct domain *d,
             if ( p2m_mapping(orig_pte) )
             {
                 *flush = true;
-                rc = p2m_create_table(d, entry,
-                                      level_shift - PAGE_SHIFT, flush_cache);
+                rc = p2m_shatter_page(d, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
-
-                p2m->stats.shattered[level]++;
-                p2m->stats.mappings[level]--;
-                p2m->stats.mappings[level+1] += LPAE_ENTRIES;
             } /* else: an existing table mapping -> descend */
 
             BUG_ON(!p2m_table(*entry));
@@ -624,15 +680,10 @@ static int apply_one_level(struct domain *d,
                  * and descend.
                  */
                 *flush = true;
-                rc = p2m_create_table(d, entry,
-                                      level_shift - PAGE_SHIFT, flush_cache);
+                rc = p2m_shatter_page(d, entry, level, flush_cache);
                 if ( rc < 0 )
                     return rc;
 
-                p2m->stats.shattered[level]++;
-                p2m->stats.mappings[level]--;
-                p2m->stats.mappings[level+1] += LPAE_ENTRIES;
-
                 return P2M_ONE_DESCEND;
             }
         }
-- 
2.1.0
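
One property of the p2m_set_permission helper above worth spelling out:
the p2m type establishes the baseline permissions and the p2m_access_t
value can only take rights away, never add them (p2m_access_rwx on a
p2m_ram_ro entry does not make it writable). Below is a minimal
self-contained model of that composition, with invented names rather than
the real lpae_t bits.

/*
 * Illustration only, not Xen code: the type gives the baseline and the
 * access value is applied as a pure mask on top of it.
 */
#include <stdio.h>

struct model_perms { int r, w, x; };

/* Baseline derived from the p2m type (e.g. ram_rw, ram_ro, mmio_direct). */
static struct model_perms type_baseline(int writable, int executable)
{
    return (struct model_perms){ .r = 1, .w = writable, .x = executable };
}

/* The access value only clears bits, mirroring the second switch above. */
static struct model_perms apply_access(struct model_perms p,
                                       int ar, int aw, int ax)
{
    p.r &= ar;
    p.w &= aw;
    p.x &= ax;
    return p;
}

int main(void)
{
    /* p2m_ram_rw restricted with p2m_access_rx: write is removed. */
    struct model_perms p = apply_access(type_baseline(1, 1), 1, 0, 1);
    printf("r=%d w=%d x=%d\n", p.r, p.w, p.x);        /* r=1 w=0 x=1 */

    /* p2m_ram_ro with p2m_access_rwx: still read-only, access adds nothing. */
    p = apply_access(type_baseline(0, 1), 1, 1, 1);
    printf("r=%d w=%d x=%d\n", p.r, p.w, p.x);        /* r=1 w=0 x=1 */
    return 0;
}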


* Re: [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM
  2014-09-29 15:55 [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2014-09-29 15:55 ` [PATCH for-4.5 v12 3/3] xen/arm: Add p2m_set_permission and p2m_shatter_page helpers Tamas K Lengyel
@ 2014-10-01 10:59 ` Ian Campbell
  2014-10-02 15:50   ` Konrad Rzeszutek Wilk
  3 siblings, 1 reply; 7+ messages in thread
From: Ian Campbell @ 2014-10-01 10:59 UTC (permalink / raw)
  To: Tamas K Lengyel, Konrad Rzeszutek Wilk
  Cc: tim, dgdegra, stefano.stabellini, xen-devel

On Mon, 2014-09-29 at 17:55 +0200, Tamas K Lengyel wrote:
> These patches are the refactoring and minor patches that will be required
> for future mem_access support on ARM that could be merged now.

Needs a release exception.

Konrad,

We'd like to get some of these into 4.5 so as to ease life for people
who want to backport ARM xenaccess and/or continue developing it for
4.6.

I'm in favour of only "xen/xsm: Wrap mem_access blocks into
HAS_MEM_ACCESS ifdefs" and "xen/arm: Add p2m_set_permission and
p2m_shatter_page helpers." for now; IMO the new hypercall can/should
wait and come along with the user in 4.6.

Those two patches are a slight refactoring and a refactoring+ifdeffing
(refactorings/movements painful to rebase without introducing errors).
IMHO if they build then the risk of other regressions is slight.

Ian.

> 
> This PATCH series is also available at:
> https://github.com/tklengyel/xen/tree/arm_memaccess_12-for-4.5
> 
> Julien Grall (1):
>   xen/arm: Implement domain_get_maximum_gpfn
> 
> Tamas K Lengyel (2):
>   xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs
>   xen/arm: Add p2m_set_permission and p2m_shatter_page helpers.
> 
>  xen/arch/arm/mm.c       |   2 +-
>  xen/arch/arm/p2m.c      | 137 +++++++++++++++++++++++++++++++++---------------
>  xen/include/xsm/dummy.h |  26 ++++-----
>  xen/include/xsm/xsm.h   |  29 +++++-----
>  xen/xsm/dummy.c         |   7 ++-
>  xen/xsm/flask/hooks.c   |  33 +++++++-----
>  6 files changed, 151 insertions(+), 83 deletions(-)
> 


* Re: [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM
  2014-10-01 10:59 ` [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM Ian Campbell
@ 2014-10-02 15:50   ` Konrad Rzeszutek Wilk
  2014-10-06 13:50     ` Ian Campbell
  0 siblings, 1 reply; 7+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-10-02 15:50 UTC (permalink / raw)
  To: Ian Campbell; +Cc: tim, dgdegra, stefano.stabellini, Tamas K Lengyel, xen-devel

On Wed, Oct 01, 2014 at 11:59:07AM +0100, Ian Campbell wrote:
> On Mon, 2014-09-29 at 17:55 +0200, Tamas K Lengyel wrote:
> > These patches are the refactoring and minor patches that will be required
> > for future mem_access support on ARM that could be merged now.
> 
> Needs a release exception.
> 
> Konrad,
> 
> We'd like to get some of these into 4.5 so as to ease life for people
> who want to backport ARM xenaccess and/or continue developing it for
> 4.6.

Right.
> 
> I'm in favour of only "xen/xsm: Wrap mem_access blocks into
> HAS_MEM_ACCESS ifdefs" and "xen/arm: Add p2m_set_permission and
> p2m_shatter_page helpers." for now; IMO the new hypercall can/should
> wait and come along with the user in 4.6.

OK.
> 
> Those two patches are a slight refactoring and a refactoring+ifdeffing
> (refactorings/movements painful to rebase without introducing errors).
> IMHO if they build then the risk of other regressions is slight.

Release-Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> Ian.
> 
> > 
> > This PATCH series is also available at:
> > https://github.com/tklengyel/xen/tree/arm_memaccess_12-for-4.5
> > 
> > Julien Grall (1):
> >   xen/arm: Implement domain_get_maximum_gpfn
> > 
> > Tamas K Lengyel (2):
> >   xen/xsm: Wrap mem_access blocks into HAS_MEM_ACCESS ifdefs
> >   xen/arm: Add p2m_set_permission and p2m_shatter_page helpers.
> > 
> >  xen/arch/arm/mm.c       |   2 +-
> >  xen/arch/arm/p2m.c      | 137 +++++++++++++++++++++++++++++++++---------------
> >  xen/include/xsm/dummy.h |  26 ++++-----
> >  xen/include/xsm/xsm.h   |  29 +++++-----
> >  xen/xsm/dummy.c         |   7 ++-
> >  xen/xsm/flask/hooks.c   |  33 +++++++-----
> >  6 files changed, 151 insertions(+), 83 deletions(-)
> > 
> 
> 


* Re: [PATCH for-4.5 v12 0/3] Refactoring for future mem_event and mem_access support on ARM
  2014-10-02 15:50   ` Konrad Rzeszutek Wilk
@ 2014-10-06 13:50     ` Ian Campbell
  0 siblings, 0 replies; 7+ messages in thread
From: Ian Campbell @ 2014-10-06 13:50 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: stefano.stabellini, dgdegra, tim, Tamas K Lengyel, xen-devel

On Thu, 2014-10-02 at 11:50 -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Oct 01, 2014 at 11:59:07AM +0100, Ian Campbell wrote:
> > On Mon, 2014-09-29 at 17:55 +0200, Tamas K Lengyel wrote:
> > > These patches are the refactoring and minor patches that will be required
> > > for future mem_access support on ARM that could be merged now.
> > 
> > Needs a release exception.
> > 
> > Konrad,
> > 
> > We'd like to get some of these into 4.5 so as to ease life for people
> > who want to backport ARM xenaccess and/or continue developing it for
> > 4.6.
> 
> Right.
> > 
> > I'm in favour of only "xen/xsm: Wrap mem_access blocks into
> > HAS_MEM_ACCESS ifdefs" and "xen/arm: Add p2m_set_permission and
> > p2m_shatter_page helpers." for now; IMO the new hypercall can/should
> > wait and come along with the user in 4.6.
> 
> OK.
> > 
> > Those two patches are a slight refactoring and a refactoring+ifdeffing
> > (refactorings/movements painful to rebase without introducing errors).
> > IMHO if they build then the risk of other regressions is slight.
> 
> Release-Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Thanks. I applied the p2m_set_perms one. It turns out the HAS_MEM_ACCESS
one was already in, courtesy of Jan.

Ian.
