From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xen.org
Cc: Stefano Stabellini <stefanos@xilinx.com>,
sstabellini@kernel.org, wei.liu2@citrix.com,
blackskygg@gmail.com, ian.jackson@eu.citrix.com,
julien.grall@arm.com, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v6 1/7] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
Date: Tue, 31 Jul 2018 11:23:03 -0700
Message-ID: <1533061389-6504-1-git-send-email-sstabellini@kernel.org>
In-Reply-To: <alpine.DEB.2.10.1807311119160.26773@sstabellini-ThinkPad-X260>
From: Zhongze Liu <blackskygg@gmail.com>
The existing XENMAPSPACE_gmfn_foreign subop of XENMEM_add_to_physmap does not
allow Dom0 to map memory pages from one DomU into another, which rules out some
useful yet harmless use cases, such as sharing pages between DomUs for
shm-based communication.
This patch introduces XENMAPSPACE_gmfn_share to address this limitation. It
behaves mostly like XENMAPSPACE_gmfn_foreign, but is guarded by its own XSM check.
Specifically, the patch:
* Introduces a new access-vector (av) permission, MMU__SHARE_MEM, to denote
  whether two domains are allowed to share memory via the new subop;
* Introduces xsm_map_gmfn_share() to check that (current) has proper mapping
  permission over (t) AND that MMU__SHARE_MEM is allowed between (d) and (t);
* Modifies the domain_comms() interface in the default policy (xen.if) so that
  MMU__SHARE_MEM is allowed between normal domains that already allow grant
  mapping/event channels.
The new subop is marked as unsupported on x86, because calling p2m_add_foreign
with both domains being DomUs is not currently supported there.
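For illustration only (not part of this patch): a minimal sketch of how a Dom0
toolstack could exercise the new space, assuming a libxc wrapper along the
lines of xc_domain_add_to_physmap_batch() -- the wrapper name, its exact
signature and the helper below are assumptions made for the example, not code
introduced here.

    /* Illustration only: map one page of "owner" into "borrower" using the
     * new XENMAPSPACE_gmfn_share space.  Assumes libxc exposes
     * xc_domain_add_to_physmap_batch() with this signature. */
    #include <xenctrl.h>
    #include <xen/memory.h>

    static int share_one_page(xc_interface *xch, uint32_t borrower,
                              uint32_t owner, xen_pfn_t owner_gfn,
                              xen_pfn_t borrower_gfn)
    {
        xen_ulong_t idx = owner_gfn;    /* gfn in the owner's physmap */
        xen_pfn_t gpfn = borrower_gfn;  /* where it appears in the borrower */
        int err = 0;
        int rc;

        rc = xc_domain_add_to_physmap_batch(xch, borrower, owner,
                                            XENMAPSPACE_gmfn_share,
                                            1 /* one page */, &idx, &gpfn,
                                            &err);
        return rc ?: err;
    }

With the hooks below this runs with d = borrower and t = owner: (current), i.e.
the Dom0 toolstack, must have mapping permission over the owner, and
MMU__SHARE_MEM must be allowed between the two DomUs.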
This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
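On the policy side (again an illustration, not part of the diff): with the
xen.if change below, any pair of domain labels that a policy already connects
through the existing domain_comms() interface, e.g.

    domain_comms(domU_1_t, domU_2_t)

is additionally allowed mmu share_mem in both directions, so such pairs need no
extra policy statements to use the new subop. domU_1_t and domU_2_t are
placeholder labels for this example.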
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix coding style
- remove useless x86 hypervisor message for the unimplemented op
- code style
- improve/add comments
---
tools/flask/policy/modules/xen.if | 2 ++
xen/arch/arm/mm.c | 7 ++++++-
xen/include/public/memory.h | 8 ++++++++
xen/include/xsm/dummy.h | 15 +++++++++++++++
xen/include/xsm/xsm.h | 6 ++++++
xen/xsm/dummy.c | 1 +
xen/xsm/flask/hooks.c | 12 ++++++++++++
xen/xsm/flask/policy/access_vectors | 5 +++++
8 files changed, 55 insertions(+), 1 deletion(-)
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 7aefd00..f841125 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -128,6 +128,8 @@ define(`domain_comms', `
domain_event_comms($1, $2)
allow $1 $2:grant { map_read map_write copy unmap };
allow $2 $1:grant { map_read map_write copy unmap };
+ allow $1 $2:mmu share_mem;
+ allow $2 $1:mmu share_mem;
')
# domain_self_comms(domain)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d234c46..aa2e067 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1245,6 +1245,7 @@ int xenmem_add_to_physmap_one(
break;
case XENMAPSPACE_gmfn_foreign:
+ case XENMAPSPACE_gmfn_share:
{
struct domain *od;
p2m_type_t p2mt;
@@ -1259,7 +1260,11 @@ int xenmem_add_to_physmap_one(
return -EINVAL;
}
- rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+ if ( space == XENMAPSPACE_gmfn_foreign )
+ rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+ else
+ rc = xsm_map_gmfn_share(XSM_TARGET, d, od);
+
if ( rc )
{
rcu_unlock_domain(od);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index bf2f81f..a706e3c 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -227,6 +227,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
Stage-2 using the Normal Memory
Inner/Outer Write-Back Cacheable
memory attribute. */
+#define XENMAPSPACE_gmfn_share 6 /* GMFN from another dom,
+ XENMEM_add_to_physmap_batch (and
+ currently ARM) only. Unlike
+ XENMAPSPACE_gmfn_foreign, it
+ requires current to have mapping
+ privileges instead of the
+ destination domain. */
+
/* ` } */
/*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index ff6b2db..5064fce 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -535,6 +535,21 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
return xsm_default_action(action, d, t);
}
+/*
+ * This action also requires that @current targets @d, but that has already
+ * been checked somewhere higher in the call stack.
+ *
+ * Be aware that this is not an exact default equivalent of the flask variant,
+ * which also checks whether @d and @t "are allowed to share memory pages"; we
+ * don't have a proper default counterpart for such a check.
+ */
+static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
+ struct domain *t)
+{
+ XSM_ASSERT_ACTION(XSM_TARGET);
+ return xsm_default_action(action, current->domain, t);
+}
+
static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
{
XSM_ASSERT_ACTION(XSM_TARGET);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index f0c6fc7..8873253 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -86,6 +86,7 @@ struct xsm_operations {
int (*add_to_physmap) (struct domain *d1, struct domain *d2);
int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
+ int (*map_gmfn_share) (struct domain *d, struct domain *t);
int (*claim_pages) (struct domain *d);
int (*console_io) (struct domain *d, int cmd);
@@ -376,6 +377,11 @@ static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, str
return xsm_ops->map_gmfn_foreign(d, t);
}
+static inline int xsm_map_gmfn_share (xsm_default_t def, struct domain *d, struct domain *t)
+{
+ return xsm_ops->map_gmfn_share(d, t);
+}
+
static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
{
return xsm_ops->claim_pages(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 6e75119..04e91d3 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -124,6 +124,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
set_to_dummy_if_null(ops, add_to_physmap);
set_to_dummy_if_null(ops, remove_from_physmap);
set_to_dummy_if_null(ops, map_gmfn_foreign);
+ set_to_dummy_if_null(ops, map_gmfn_share);
set_to_dummy_if_null(ops, vm_event_control);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 78bc326..250d476 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
}
+/*
+ * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
+ * but that has already been checked somewhere higher in the call stack (for
+ * example, by flask_add_to_physmap()).
+ */
+static int flask_map_gmfn_share(struct domain *d, struct domain *t)
+{
+ return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
+ domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
+}
+
static int flask_hvm_param(struct domain *d, unsigned long op)
{
u32 perm;
@@ -1822,6 +1833,7 @@ static struct xsm_operations flask_ops = {
.add_to_physmap = flask_add_to_physmap,
.remove_from_physmap = flask_remove_from_physmap,
.map_gmfn_foreign = flask_map_gmfn_foreign,
+ .map_gmfn_share = flask_map_gmfn_share,
#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
.get_device_group = flask_get_device_group,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c5d8548..4a92252 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -385,6 +385,11 @@ class mmu
# Allow a privileged domain to install a map of a page it does not own. Used
# for stub domain device models with the PV framebuffer.
target_hack
+# Checked when using XENMEM_add_to_physmap with XENMAPSPACE_gmfn_share
+# to share memory between two domains:
+# source = client domain into whose physmap the page is mapped
+# target = domain whose memory is being shared
+ share_mem
}
# control of the paging_domctl split by subop
--
1.9.1