* [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
@ 2018-05-25 0:55 Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 1/6] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel; +Cc: ian.jackson, julien.grall, sstabellini, wei.liu2, blackskygg
Hi,
This series implements a new xl config entry. Users can use the new
config entry to statically set up shared memory areas among VMs that
don't have grant table support, so that they can communicate with each
other through the static shared memory areas.
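For example, one such region in a guest's config file looks like this (the
syntax is introduced in patch 5/6 and documented in patch 6/6; the values
here are only illustrative):

    static_shm = [ "id=ID1, begin=0x100000, size=0x100000, role=master" ]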
It was originally developed by Zhongze; I am just addressing the last few
issues that were raised during the last round of reviews in January. I
tested the feature on ARM and it works fine.
Cheers,
Stefano
Zhongze Liu (6):
xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
libxl: introduce a new structure to represent static shared memory regions
libxl: support mapping static shared memory areas during domain creation
libxl: support unmapping static shared memory areas during domain destruction
libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
docs: documentation about static shared memory regions
docs/man/xl-static-shm-configuration.pod.5 | 257 +++++++++++++++
docs/man/xl.cfg.pod.5.in | 8 +
docs/misc/xenstore-paths.markdown | 47 +++
tools/flask/policy/modules/xen.if | 2 +
tools/libxl/Makefile | 4 +-
tools/libxl/libxl.h | 6 +
tools/libxl/libxl_arch.h | 6 +
tools/libxl/libxl_arm.c | 15 +
tools/libxl/libxl_create.c | 27 ++
tools/libxl/libxl_domain.c | 5 +
tools/libxl/libxl_internal.h | 16 +
tools/libxl/libxl_sshm.c | 512 +++++++++++++++++++++++++++++
tools/libxl/libxl_types.idl | 32 +-
tools/libxl/libxl_x86.c | 19 ++
tools/libxl/libxlu_sshm.c | 207 ++++++++++++
tools/libxl/libxlutil.h | 6 +
tools/xl/xl_parse.c | 25 +-
xen/arch/arm/mm.c | 7 +-
xen/include/public/memory.h | 8 +
xen/include/xsm/dummy.h | 15 +
xen/include/xsm/xsm.h | 6 +
xen/xsm/dummy.c | 1 +
xen/xsm/flask/hooks.c | 12 +
xen/xsm/flask/policy/access_vectors | 5 +
24 files changed, 1242 insertions(+), 6 deletions(-)
create mode 100644 docs/man/xl-static-shm-configuration.pod.5
create mode 100644 tools/libxl/libxl_sshm.c
create mode 100644 tools/libxl/libxlu_sshm.c
* [PATCH v5 1/6] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
@ 2018-05-25 0:55 ` Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 2/6] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
ian.jackson, julien.grall, Daniel De Graaf
From: Zhongze Liu <blackskygg@gmail.com>
The existing XENMAPSPACE_gmfn_foreign subop of XENMEM_add_to_physmap forbids
Dom0 from mapping memory pages of one DomU into another, which rules out some
useful yet not dangerous use cases -- such as sharing pages among DomUs so
that they can do shm-based communication.
This patch introduces XENMAPSPACE_gmfn_share to address this inconvenience,
which is mostly the same as XENMAPSPACE_gmfn_foreign but has its own xsm check.
Specifically, the patch:
* Introduces a new av permission MMU__SHARE_MEM to denote if two domains can
share memory by using the new subop;
* Introduces xsm_map_gmfn_share() to check if (current) has proper permission
over (t) AND MMU__SHARE_MEM is allowed between (d) and (t);
* Modifies the default xen.te to allow MMU__SHARE_MEM for normal domains that
allow grant mapping/event channels.
The new subop is marked as unsupported on x86 because calling p2m_add_foreign
on two DomUs is currently not supported there.
This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
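For reference, a control-domain toolstack would exercise the new map space
through XENMEM_add_to_physmap_batch, e.g. via libxenctrl. A minimal sketch
(illustrative only; the domain IDs and frame numbers are made up and this is
not part of the patch):

    #include <xenctrl.h>

    /* Share one page of @master_domid (gfn 0x100) into @slave_domid's
     * physmap at gfn 0x500, using the new XENMAPSPACE_gmfn_share space. */
    static int share_one_page(xc_interface *xch, uint32_t slave_domid,
                              uint32_t master_domid)
    {
        xen_ulong_t idx  = 0x100;   /* source gfn in the master domain */
        xen_pfn_t   gpfn = 0x500;   /* destination gfn in the slave domain */
        int         err  = 0;
        int rc = xc_domain_add_to_physmap_batch(xch, slave_domid, master_domid,
                                                XENMAPSPACE_gmfn_share, 1,
                                                &idx, &gpfn, &err);

        return rc ?: err;
    }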
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix coding style
- remove useless x86 hypervisor message for the unimplemented op
- code style
- improve/add comments
---
tools/flask/policy/modules/xen.if | 2 ++
xen/arch/arm/mm.c | 7 ++++++-
xen/include/public/memory.h | 8 ++++++++
xen/include/xsm/dummy.h | 15 +++++++++++++++
xen/include/xsm/xsm.h | 6 ++++++
xen/xsm/dummy.c | 1 +
xen/xsm/flask/hooks.c | 12 ++++++++++++
xen/xsm/flask/policy/access_vectors | 5 +++++
8 files changed, 55 insertions(+), 1 deletion(-)
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 7aefd00..f841125 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -128,6 +128,8 @@ define(`domain_comms', `
domain_event_comms($1, $2)
allow $1 $2:grant { map_read map_write copy unmap };
allow $2 $1:grant { map_read map_write copy unmap };
+ allow $1 $2:mmu share_mem;
+ allow $2 $1:mmu share_mem;
')
# domain_self_comms(domain)
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index a6de77c..e5eb222 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1245,6 +1245,7 @@ int xenmem_add_to_physmap_one(
break;
case XENMAPSPACE_gmfn_foreign:
+ case XENMAPSPACE_gmfn_share:
{
struct domain *od;
p2m_type_t p2mt;
@@ -1259,7 +1260,11 @@ int xenmem_add_to_physmap_one(
return -EINVAL;
}
- rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+ if ( space == XENMAPSPACE_gmfn_foreign )
+ rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+ else
+ rc = xsm_map_gmfn_share(XSM_TARGET, d, od);
+
if ( rc )
{
rcu_unlock_domain(od);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index bf2f81f..a706e3c 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -227,6 +227,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
Stage-2 using the Normal Memory
Inner/Outer Write-Back Cacheable
memory attribute. */
+#define XENMAPSPACE_gmfn_share 6 /* GMFN from another dom,
+ XENMEM_add_to_physmap_batch (and
+ currently ARM) only. Unlike
+ XENMAPSPACE_gmfn_foreign, it
+ requires current to have mapping
+ privileges instead of the
+ destination domain. */
+
/* ` } */
/*
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index ff6b2db..5064fce 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -535,6 +535,21 @@ static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, str
return xsm_default_action(action, d, t);
}
+/*
+ * This action also requires that @current targets @d, but it has already been
+ * checked somewhere higher in the call stack.
+ *
+ * Be aware that this is not an exact default equivalence of its flask variant
+ * which also checks if @d and @t "are allowed to share memory pages", for we
+ * don't have a proper default equivalence of such a check.
+ */
+static XSM_INLINE int xsm_map_gmfn_share(XSM_DEFAULT_ARG struct domain *d,
+ struct domain *t)
+{
+ XSM_ASSERT_ACTION(XSM_TARGET);
+ return xsm_default_action(action, current->domain, t);
+}
+
static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
{
XSM_ASSERT_ACTION(XSM_TARGET);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index f0c6fc7..8873253 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -86,6 +86,7 @@ struct xsm_operations {
int (*add_to_physmap) (struct domain *d1, struct domain *d2);
int (*remove_from_physmap) (struct domain *d1, struct domain *d2);
int (*map_gmfn_foreign) (struct domain *d, struct domain *t);
+ int (*map_gmfn_share) (struct domain *d, struct domain *t);
int (*claim_pages) (struct domain *d);
int (*console_io) (struct domain *d, int cmd);
@@ -376,6 +377,11 @@ static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, str
return xsm_ops->map_gmfn_foreign(d, t);
}
+static inline int xsm_map_gmfn_share (xsm_default_t def, struct domain *d, struct domain *t)
+{
+ return xsm_ops->map_gmfn_share(d, t);
+}
+
static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
{
return xsm_ops->claim_pages(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 6e75119..04e91d3 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -124,6 +124,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
set_to_dummy_if_null(ops, add_to_physmap);
set_to_dummy_if_null(ops, remove_from_physmap);
set_to_dummy_if_null(ops, map_gmfn_foreign);
+ set_to_dummy_if_null(ops, map_gmfn_share);
set_to_dummy_if_null(ops, vm_event_control);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 78bc326..250d476 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1198,6 +1198,17 @@ static int flask_map_gmfn_foreign(struct domain *d, struct domain *t)
return domain_has_perm(d, t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE);
}
+/*
+ * This action also requires that @current has MMU__MAP_READ/WRITE over @d,
+ * but that has already been checked somewhere higher in the call stack (for
+ * example, by flask_add_to_physmap()).
+ */
+static int flask_map_gmfn_share(struct domain *d, struct domain *t)
+{
+ return current_has_perm(t, SECCLASS_MMU, MMU__MAP_READ | MMU__MAP_WRITE) ?:
+ domain_has_perm(d, t, SECCLASS_MMU, MMU__SHARE_MEM);
+}
+
static int flask_hvm_param(struct domain *d, unsigned long op)
{
u32 perm;
@@ -1822,6 +1833,7 @@ static struct xsm_operations flask_ops = {
.add_to_physmap = flask_add_to_physmap,
.remove_from_physmap = flask_remove_from_physmap,
.map_gmfn_foreign = flask_map_gmfn_foreign,
+ .map_gmfn_share = flask_map_gmfn_share,
#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
.get_device_group = flask_get_device_group,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c5d8548..4a92252 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -385,6 +385,11 @@ class mmu
# Allow a privileged domain to install a map of a page it does not own. Used
# for stub domain device models with the PV framebuffer.
target_hack
+# Checked when using XENMEM_add_to_physmap with XENMAPSPACE_gmfn_share
+# to share memory between two domains:
+# source = domain whose memory is being shared
+# target = client domain
+ share_mem
}
# control of the paging_domctl split by subop
--
1.9.1
* [PATCH v5 2/6] libxl: introduce a new structure to represent static shared memory regions
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 1/6] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
@ 2018-05-25 0:55 ` Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 3/6] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
ian.jackson, julien.grall
From: Zhongze Liu <blackskygg@gmail.com>
Add a new structure to the IDL family to represent static shared memory regions
as described in the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).
It also removes some trailing whitespace.
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
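For context, the IDL entry below turns into a plain C struct plus the usual
generated helpers (libxl_static_shm_init/_dispose), so a libxl caller could
describe a master region roughly like this (a sketch with made-up values,
assuming the standard IDL code generation; not part of this patch):

    libxl_static_shm sshm;

    libxl_static_shm_init(&sshm);            /* generated from the IDL */
    sshm.id    = strdup("ID1");              /* identifier, <= 128 chars */
    sshm.begin = 0x100000;                   /* guest physical address */
    sshm.size  = 0x100000;                   /* must be page-aligned */
    sshm.role  = LIBXL_SSHM_ROLE_MASTER;
    sshm.prot  = LIBXL_SSHM_PROT_RW;
    /* offset and cache_policy keep their UNKNOWN init values here and are
     * filled in / validated by libxl later in the series. */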
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
- add LIBXL_HAVE_SSHM
- replace end with size
---
tools/libxl/libxl.h | 6 ++++++
tools/libxl/libxl_types.idl | 32 ++++++++++++++++++++++++++++++--
2 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index a09d069..d25de5d 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -2405,6 +2405,12 @@ int libxl_fd_set_nonblock(libxl_ctx *ctx, int fd, int nonblock);
int libxl_qemu_monitor_command(libxl_ctx *ctx, uint32_t domid,
const char *command_line, char **output);
+#define LIBXL_HAVE_SSHM 1
+
+/* Constants for libxl_static_shm */
+#define LIBXL_SSHM_RANGE_UNKNOWN UINT64_MAX
+#define LIBXL_SSHM_ID_MAXLEN 128
+
#include <libxl_event.h>
#endif /* LIBXL_H */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 01ec1d1..2cf06b4 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -559,10 +559,10 @@ libxl_domain_build_info = Struct("domain_build_info",[
("keymap", string),
("sdl", libxl_sdl_info),
("spice", libxl_spice_info),
-
+
("gfx_passthru", libxl_defbool),
("gfx_passthru_kind", libxl_gfx_passthru_kind),
-
+
("serial", string),
("boot", string),
("usb", libxl_defbool),
@@ -822,6 +822,33 @@ libxl_device_vdispl = Struct("device_vdispl", [
("connectors", Array(libxl_connector_param, "num_connectors"))
])
+libxl_sshm_cachepolicy = Enumeration("sshm_cachepolicy", [
+ (-1, "UNKNOWN"),
+ (0, "ARM_NORMAL"), # ARM policies should be < 32
+ (32, "X86_NORMAL"), # X86 policies should be >= 32
+ ], init_val = "LIBXL_SSHM_CACHEPOLICY_UNKNOWN")
+
+libxl_sshm_prot = Enumeration("sshm_prot", [
+ (-1, "UNKNOWN"),
+ (3, "RW"),
+ ], init_val = "LIBXL_SSHM_PROT_UNKNOWN")
+
+libxl_sshm_role = Enumeration("sshm_role", [
+ (-1, "UNKNOWN"),
+ (0, "MASTER"),
+ (1, "SLAVE"),
+ ], init_val = "LIBXL_SSHM_ROLE_UNKNOWN")
+
+libxl_static_shm = Struct("static_shm", [
+ ("id", string),
+ ("offset", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+ ("begin", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+ ("size", uint64, {'init_val': 'LIBXL_SSHM_RANGE_UNKNOWN'}),
+ ("prot", libxl_sshm_prot, {'init_val': 'LIBXL_SSHM_PROT_UNKNOWN'}),
+ ("cache_policy", libxl_sshm_cachepolicy, {'init_val': 'LIBXL_SSHM_CACHEPOLICY_UNKNOWN'}),
+ ("role", libxl_sshm_role, {'init_val': 'LIBXL_SSHM_ROLE_UNKNOWN'}),
+])
+
libxl_domain_config = Struct("domain_config", [
("c_info", libxl_domain_create_info),
("b_info", libxl_domain_build_info),
@@ -842,6 +869,7 @@ libxl_domain_config = Struct("domain_config", [
("channels", Array(libxl_device_channel, "num_channels")),
("usbctrls", Array(libxl_device_usbctrl, "num_usbctrls")),
("usbdevs", Array(libxl_device_usbdev, "num_usbdevs")),
+ ("sshms", Array(libxl_static_shm, "num_sshms")),
("on_poweroff", libxl_action_on_shutdown),
("on_reboot", libxl_action_on_shutdown),
--
1.9.1
* [PATCH v5 3/6] libxl: support mapping static shared memory areas during domain creation
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 1/6] xen: xsm: flask: introduce XENMAPSPACE_gmfn_share for memory sharing Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 2/6] libxl: introduce a new structure to represent static shared memory regions Stefano Stabellini
@ 2018-05-25 0:55 ` Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 4/6] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
ian.jackson, julien.grall
From: Zhongze Liu <blackskygg@gmail.com>
Add libxl__sshm_add to map shared pages from one DomU to another. The mapping
process involves the following steps:
* Set defaults and check for further errors in the static_shm configs:
overlapping areas, invalid ranges, duplicate master domains,
ranges that are not page-aligned, missing master domain, etc.
* Use xc_domain_add_to_physmap_batch to do the page sharing.
* When some of the pages can't be successfully mapped, roll back any
successfully mapped pages so that the system stays in a consistent state.
* Write information about static shared memory areas into the appropriate
xenstore paths (see the layout sketch below) and set the refcount of the
shared region accordingly.
Temporarily mark this as unsupported on x86 because calling p2m_add_foreign on
two DomUs is currently not allowed on x86 (see the comments in
x86/mm/p2m.c:p2m_add_foreign for more details).
This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
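For orientation, the xenstore layout that the code below ends up writing looks
roughly like this (reconstructed from this patch; see also the xenstore-paths
update in patch 6/6):

    /libxl/static_shm/<id>/master       = <master domid>
    /libxl/static_shm/<id>/begin        = <start address of the backing region>
    /libxl/static_shm/<id>/size         = <size of the backing region>
    /libxl/static_shm/<id>/prot         = <maximum permission, e.g. "rw">
    /libxl/static_shm/<id>/cache_policy = <e.g. "ARM_normal">
    /libxl/static_shm/<id>/usercnt      = <reference count of the region>
    /libxl/static_shm/<id>/slaves/<domid>/{begin,size,offset,prot}
    /local/domain/<domid>/static_shm/<id>/role = "master" | "slave"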
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
- add comments
- add value checks (including alignment checks) in sshm_setdefaults
---
tools/libxl/Makefile | 2 +-
tools/libxl/libxl_arch.h | 6 +
tools/libxl/libxl_arm.c | 15 ++
tools/libxl/libxl_create.c | 27 +++
tools/libxl/libxl_internal.h | 14 ++
tools/libxl/libxl_sshm.c | 405 +++++++++++++++++++++++++++++++++++++++++++
tools/libxl/libxl_x86.c | 19 ++
7 files changed, 487 insertions(+), 1 deletion(-)
create mode 100644 tools/libxl/libxl_sshm.c
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 035e66e..3b762c5 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -140,7 +140,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
libxl_vtpm.o libxl_nic.o libxl_disk.o libxl_console.o \
libxl_cpupool.o libxl_mem.o libxl_sched.o libxl_tmem.o \
libxl_9pfs.o libxl_domain.o libxl_vdispl.o \
- libxl_pvcalls.o $(LIBXL_OBJS-y)
+ libxl_pvcalls.o libxl_sshm.o $(LIBXL_OBJS-y)
LIBXL_OBJS += libxl_genid.o
LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
diff --git a/tools/libxl/libxl_arch.h b/tools/libxl/libxl_arch.h
index 74a5af3..6a07ccf 100644
--- a/tools/libxl/libxl_arch.h
+++ b/tools/libxl/libxl_arch.h
@@ -73,6 +73,12 @@ int libxl__arch_extra_memory(libxl__gc *gc,
const libxl_domain_build_info *info,
uint64_t *out);
+_hidden
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info);
+
+_hidden
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm);
+
#if defined(__i386__) || defined(__x86_64__)
#define LAPIC_BASE_ADDRESS 0xfee00000
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 8af9f6f..5f62e78 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -1141,6 +1141,21 @@ void libxl__arch_domain_build_info_acpi_setdefault(
libxl_defbool_setdefault(&b_info->acpi, false);
}
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
+{
+ return true;
+}
+
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
+{
+ if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
+ sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
+ if (sshm->cache_policy >= LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
+ return ERROR_INVAL;
+
+ return 0;
+}
+
/*
* Local variables:
* mode: C
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index b5e27a7..62b0679 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -532,6 +532,14 @@ int libxl__domain_build(libxl__gc *gc,
ret = ERROR_INVAL;
goto out;
}
+
+ /* The p2m has been set up, so we can map the static shared memory now. */
+ ret = libxl__sshm_add(gc, domid, d_config->sshms, d_config->num_sshms);
+ if (ret != 0) {
+ LOG(ERROR, "failed to map static shared memory");
+ goto out;
+ }
+
ret = libxl__build_post(gc, domid, info, state, vments, localents);
out:
return ret;
@@ -959,6 +967,25 @@ static void initiate_domain_create(libxl__egc *egc,
goto error_out;
}
+ if (d_config->num_sshms != 0 &&
+ !libxl__arch_domain_support_sshm(&d_config->b_info)) {
+ LOGD(ERROR, domid, "static_shm is not supported by this domain type.");
+ ret = ERROR_INVAL;
+ goto error_out;
+ }
+
+ for (i = 0; i < d_config->num_sshms; ++i) {
+ ret = libxl__sshm_setdefault(gc, domid, &d_config->sshms[i]);
+ if (ret) {
+ LOGD(ERROR, domid, "Unable to set defaults for static shm");
+ goto error_out;
+ }
+ }
+
+ ret = libxl__sshm_check_overlap(gc, domid,
+ d_config->sshms, d_config->num_sshms);
+ if (ret) goto error_out;
+
ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
if (ret) {
LOGD(ERROR, domid, "cannot make domain: %d", ret);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c582894..076c30f 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4421,6 +4421,20 @@ static inline bool libxl__string_is_default(char **s)
#endif
/*
+ * Set up static shared ram pages for HVM domains to communicate
+ *
+ * This function should only be called after the memory map is constructed
+ * and before any further memory access.
+ */
+_hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshm, int len);
+
+_hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshms, int len);
+_hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshm);
+
+/*
* Local variables:
* mode: C
* c-basic-offset: 4
diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
new file mode 100644
index 0000000..f61b80c
--- /dev/null
+++ b/tools/libxl/libxl_sshm.c
@@ -0,0 +1,405 @@
+#include "libxl_osdeps.h"
+#include "libxl_internal.h"
+#include "libxl_arch.h"
+
+#define SSHM_PATH(id) GCSPRINTF("/libxl/static_shm/%s", id)
+
+#define SSHM_ERROR(domid, sshmid, f, ...) \
+ LOGD(ERROR, domid, "static_shm id = %s: " f, sshmid, ##__VA_ARGS__)
+
+
+/* Set default values for libxl_static_shm */
+int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshm)
+{
+ int rc;
+
+ if (sshm->role != LIBXL_SSHM_ROLE_SLAVE &&
+ sshm->role != LIBXL_SSHM_ROLE_MASTER)
+ return ERROR_INVAL;
+ if (sshm->begin & ~XC_PAGE_MASK ||
+ sshm->size & ~XC_PAGE_MASK ||
+ (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN &&
+ sshm->offset & ~XC_PAGE_MASK)) {
+ SSHM_ERROR(domid, sshm->id,
+ "begin/size/offset is not a multiple of 4K");
+ return ERROR_INVAL;
+ }
+
+ /* role-specific checks */
+ if (sshm->role == LIBXL_SSHM_ROLE_SLAVE) {
+ if (sshm->offset == LIBXL_SSHM_RANGE_UNKNOWN)
+ sshm->offset = 0;
+ if (sshm->cache_policy != LIBXL_SSHM_CACHEPOLICY_UNKNOWN) {
+ SSHM_ERROR(domid, sshm->id,
+ "cache_policy is only applicable to master domains");
+ rc = ERROR_INVAL;
+ goto out;
+ }
+ } else {
+ if (sshm->offset != LIBXL_SSHM_RANGE_UNKNOWN) {
+ SSHM_ERROR(domid, sshm->id,
+ "offset is only applicable to slave domains");
+ rc = ERROR_INVAL;
+ goto out;
+ }
+
+ rc = libxl__arch_domain_sshm_cachepolicy_setdefault(sshm);
+ if (rc) {
+ SSHM_ERROR(domid, sshm->id,
+ "cache policy not supported on this platform");
+ goto out;
+ }
+ }
+
+ rc = 0;
+out:
+ return rc;
+}
+
+/* Comparator for sorting sshm ranges by sshm->begin */
+static int sshm_range_cmp(const void *a, const void *b)
+{
+ libxl_static_shm *const *sshma = a, *const *sshmb = b;
+ return (*sshma)->begin > (*sshmb)->begin ? 1 : -1;
+}
+
+/* Check if the sshm slave configs in @sshm overlap */
+int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshms, int len)
+{
+
+ const libxl_static_shm **slave_sshms = NULL;
+ int num_slaves;
+ int i;
+
+ if (!len) return 0;
+
+ slave_sshms = libxl__calloc(gc, len, sizeof(slave_sshms[0]));
+ num_slaves = 0;
+ for (i = 0; i < len; ++i) {
+ if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE)
+ slave_sshms[num_slaves++] = sshms + i;
+ }
+ qsort(slave_sshms, num_slaves, sizeof(slave_sshms[0]), sshm_range_cmp);
+
+ for (i = 0; i < num_slaves - 1; ++i) {
+ if (slave_sshms[i+1]->begin <
+ slave_sshms[i]->begin + slave_sshms[i]->size) {
+ SSHM_ERROR(domid, slave_sshms[i+1]->id, "slave ranges overlap.");
+ return ERROR_INVAL;
+ }
+ }
+
+ return 0;
+}
+
+/* libxl__sshm_do_map -- map pages into slave's physmap
+ *
+ * This function maps
+ * master gfn: [@msshm->begin + @sshm->offset,
+ * @msshm->begin + @msshm->size + @sshm->offset)
+ * into
+ * slave gfn: [@sshm->begin, @sshm->begin + @sshm->size)
+ *
+ * The gfns of the pages that are successfully mapped will be stored
+ * in @mapped, and the number of the gfns will be stored in @nmapped.
+ *
+ * The caller has to guarantee that all the values are page-aligned.
+ */
+static int libxl__sshm_do_map(libxl__gc *gc, uint32_t mid, uint32_t sid,
+ libxl_static_shm *sshm, libxl_static_shm *msshm,
+ xen_pfn_t *mapped, unsigned int *nmapped)
+{
+ int rc;
+ int i;
+ xen_pfn_t num_mpages, num_spages, num_success, offset;
+ int *errs;
+ xen_ulong_t *idxs;
+ xen_pfn_t *gpfns;
+
+ num_mpages = (msshm->size) >> XC_PAGE_SHIFT;
+ num_spages = (sshm->size) >> XC_PAGE_SHIFT;
+ offset = sshm->offset >> XC_PAGE_SHIFT;
+
+ /* Check range. Test offset < mpages first to avoid overflow */
+ if ((offset >= num_mpages) || (num_mpages - offset < num_spages)) {
+ SSHM_ERROR(sid, sshm->id, "exceeds master's address space.");
+ rc = ERROR_INVAL;
+ goto out;
+ }
+
+ /* fill out the gfn's and do the mapping */
+ errs = libxl__calloc(gc, num_spages, sizeof(int));
+ idxs = libxl__calloc(gc, num_spages, sizeof(xen_ulong_t));
+ gpfns = libxl__calloc(gc, num_spages, sizeof(xen_pfn_t));
+ for (i = 0; i < num_spages; i++) {
+ idxs[i] = (msshm->begin >> XC_PAGE_SHIFT) + offset + i;
+ gpfns[i]= (sshm->begin >> XC_PAGE_SHIFT) + i;
+ }
+ rc = xc_domain_add_to_physmap_batch(CTX->xch,
+ sid, mid,
+ XENMAPSPACE_gmfn_share,
+ num_spages,
+ idxs, gpfns, errs);
+
+ num_success = 0;
+ for (i = 0; i < num_spages; i++) {
+ if (errs[i]) {
+ SSHM_ERROR(sid, sshm->id,
+ "can't map at address 0x%"PRIx64".",
+ gpfns[i] << XC_PAGE_SHIFT);
+ rc = ERROR_FAIL;
+ } else {
+ mapped[num_success++] = gpfns[i];
+ }
+ }
+ *nmapped = num_success;
+ if (rc) goto out;
+
+ rc = 0;
+out:
+ return rc;
+}
+
+/* Xenstore ops are protected by a transaction */
+static int libxl__sshm_incref(libxl__gc *gc, xs_transaction_t xt,
+ const char *sshm_path)
+{
+ int rc, count;
+ const char *count_path, *count_string;
+
+ count_path = GCSPRINTF("%s/usercnt", sshm_path);
+ rc = libxl__xs_read_checked(gc, xt, count_path, &count_string);
+ if (rc) goto out;
+ count = atoi(count_string);
+
+ count_string = GCSPRINTF("%d", count+1);
+ rc = libxl__xs_write_checked(gc, xt, count_path, count_string);
+ if (rc) goto out;
+
+ rc = 0;
+out:
+ return rc;
+}
+
+static int libxl__sshm_add_slave(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshm)
+{
+ int rc, i;
+ const char *sshm_path, *slave_path;
+ const char *dom_path, *dom_sshm_path, *dom_role_path;
+ const char *xs_value;
+ char *ents[9];
+ libxl_static_shm master_sshm;
+ uint32_t master_domid;
+ xen_pfn_t *mapped;
+ unsigned int nmapped = 0;
+ xs_transaction_t xt = XBT_NULL;
+ bool isretry;
+
+ sshm_path = SSHM_PATH(sshm->id);
+ slave_path = GCSPRINTF("%s/slaves/%"PRIu32, sshm_path, domid);
+ dom_path = libxl__xs_get_dompath(gc, domid);
+ /* the domain should be in xenstore by now */
+ assert(dom_path);
+ dom_sshm_path = GCSPRINTF("%s/static_shm/%s", dom_path, sshm->id);
+ dom_role_path = GCSPRINTF("%s/role", dom_sshm_path);
+
+ /* prepare the slave xenstore entries */
+ ents[0] = "begin";
+ ents[1] = GCSPRINTF("0x%"PRIx64, sshm->begin);
+ ents[2] = "size";
+ ents[3] = GCSPRINTF("0x%"PRIx64, sshm->size);
+ ents[4] = "offset";
+ ents[5] = GCSPRINTF("0x%"PRIx64, sshm->offset);
+ ents[6] = "prot";
+ ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
+ ents[8] = NULL;
+
+ mapped = libxl__calloc(gc, sshm->size >> XC_PAGE_SHIFT, sizeof(xen_pfn_t));
+
+ isretry = false;
+ for (;;) {
+ rc = libxl__xs_transaction_start(gc, &xt);
+ if (rc) goto out;
+
+ if (!libxl__xs_read(gc, xt, sshm_path)) {
+ SSHM_ERROR(domid, sshm->id, "no master found.");
+ rc = ERROR_FAIL;
+ goto out;
+ }
+
+ /* every ID can appear in each domain at most once */
+ if (libxl__xs_read(gc, xt, dom_sshm_path)) {
+ SSHM_ERROR(domid, sshm->id,
+ "domain tried to map the same ID twice.");
+ rc = ERROR_FAIL;
+ goto out;
+ }
+
+ /* look at the master info and see if we could do the mapping */
+ rc = libxl__xs_read_checked(gc, xt,
+ GCSPRINTF("%s/prot", sshm_path),
+ &xs_value);
+ if (rc) goto out;
+ libxl_sshm_prot_from_string(xs_value, &master_sshm.prot);
+
+ rc = libxl__xs_read_checked(gc, xt,
+ GCSPRINTF("%s/begin", sshm_path),
+ &xs_value);
+ if (rc) goto out;
+ master_sshm.begin = strtoull(xs_value, NULL, 16);
+
+ rc = libxl__xs_read_checked(gc, xt,
+ GCSPRINTF("%s/size", sshm_path),
+ &xs_value);
+ if (rc) goto out;
+ master_sshm.size = strtoull(xs_value, NULL, 16);
+
+ rc = libxl__xs_read_checked(gc, xt,
+ GCSPRINTF("%s/master", sshm_path),
+ &xs_value);
+ if (rc) goto out;
+ master_domid = strtoull(xs_value, NULL, 16);
+
+ if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN)
+ sshm->prot = master_sshm.prot;
+
+ /* check if the slave is asking too much permission */
+ if (master_sshm.prot < sshm->prot) {
+ SSHM_ERROR(domid, sshm->id, "slave is asking too much permission.");
+ rc = ERROR_INVAL;
+ goto out;
+ }
+
+ /* all checks passed, do the job */
+ if (!isretry) {
+ rc = libxl__sshm_do_map(gc, master_domid, domid,
+ sshm, &master_sshm,
+ mapped, &nmapped);
+ if (rc) goto out;
+ }
+
+ /* write the result to xenstore and commit */
+ rc = libxl__xs_write_checked(gc, xt, dom_role_path, "slave");
+ if (rc) goto out;
+ rc = libxl__xs_writev(gc, xt, slave_path, ents);
+ if (rc) goto out;
+ rc = libxl__sshm_incref(gc, xt, sshm_path);
+ if (rc) goto out;
+
+ rc = libxl__xs_transaction_commit(gc, &xt);
+ if (!rc) break;
+ if (rc < 0) goto out;
+ isretry = true;
+ }
+
+ rc = 0;
+out:
+ if (rc) {
+ /* roll back successfully mapped pages */
+ SSHM_ERROR(domid, sshm->id, "failed to map some pages, cancelling.");
+ for (i = 0; i < nmapped; i++) {
+ xc_domain_remove_from_physmap(CTX->xch, domid, mapped[i]);
+ }
+ }
+
+ libxl__xs_transaction_abort(gc, &xt);
+
+ return rc;
+}
+
+static int libxl__sshm_add_master(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshm)
+{
+ int rc;
+ const char *sshm_path, *dom_path, *dom_role_path;
+ char *ents[13];
+ struct xs_permissions noperm;
+ xs_transaction_t xt = XBT_NULL;
+
+ sshm_path = SSHM_PATH(sshm->id);
+ dom_path = libxl__xs_get_dompath(gc, domid);
+ /* the domain should be in xenstore by now */
+ assert(dom_path);
+ dom_role_path = GCSPRINTF("%s/static_shm/%s/role", dom_path, sshm->id);
+
+ /* prepare the xenstore entries */
+ ents[0] = "master";
+ ents[1] = GCSPRINTF("%"PRIu32, domid);
+ ents[2] = "begin";
+ ents[3] = GCSPRINTF("0x%"PRIx64, sshm->begin);
+ ents[4] = "size";
+ ents[5] = GCSPRINTF("0x%"PRIx64, sshm->size);
+ ents[6] = "prot";
+ ents[7] = libxl__strdup(gc, libxl_sshm_prot_to_string(sshm->prot));
+ ents[8] = "cache_policy";
+ ents[9] = libxl__strdup(gc,
+ libxl_sshm_cachepolicy_to_string(sshm->cache_policy));
+ ents[10] = "usercnt";
+ ents[11] = "1";
+ ents[12] = NULL;
+
+ /* could only be accessed by Dom0 */
+ noperm.id = 0;
+ noperm.perms = XS_PERM_NONE;
+
+ for (;;) {
+ rc = libxl__xs_transaction_start(gc, &xt);
+ if (rc) goto out;
+
+ if (!libxl__xs_read(gc, xt, sshm_path)) {
+ /* every ID can appear in each domain at most once */
+ if (libxl__xs_read(gc, xt, dom_role_path)) {
+ SSHM_ERROR(domid, sshm->id,
+ "domain tried to map the same ID twice.");
+ rc = ERROR_FAIL;
+ goto out;
+ }
+ rc = libxl__xs_write_checked(gc, xt, dom_role_path, "master");
+ if (rc) goto out;
+
+ libxl__xs_mknod(gc, xt, sshm_path, &noperm, 1);
+ libxl__xs_writev(gc, xt, sshm_path, ents);
+ } else {
+ SSHM_ERROR(domid, sshm->id, "can only have one master.");
+ rc = ERROR_FAIL;
+ goto out;
+ }
+
+ rc = libxl__xs_transaction_commit(gc, &xt);
+ if (!rc) break;
+ if (rc < 0) goto out;
+ }
+
+ rc = 0;
+out:
+ libxl__xs_transaction_abort(gc, &xt);
+ return rc;
+}
+
+int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
+ libxl_static_shm *sshms, int len)
+{
+ int rc, i;
+
+ for (i = 0; i < len; ++i) {
+ if (sshms[i].role == LIBXL_SSHM_ROLE_SLAVE) {
+ rc = libxl__sshm_add_slave(gc, domid, sshms+i);
+ } else {
+ rc = libxl__sshm_add_master(gc, domid, sshms+i);
+ }
+ if (rc) return rc;
+ }
+
+ return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index ab88562..dd3de96 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -619,6 +619,25 @@ void libxl__arch_domain_build_info_acpi_setdefault(
libxl_defbool_setdefault(&b_info->acpi, true);
}
+bool libxl__arch_domain_support_sshm(const libxl_domain_build_info *b_info)
+{
+ /* FIXME: Mark this as unsupported because calling p2m_add_foreign on two
+ * DomUs is currently not allowed on x86, see the comments in
+ * x86/mm/p2m.c:p2m_add_foreign.
+ */
+ return false;
+}
+
+int libxl__arch_domain_sshm_cachepolicy_setdefault(libxl_static_shm *sshm)
+{
+ if (sshm->cache_policy == LIBXL_SSHM_CACHEPOLICY_UNKNOWN)
+ sshm->cache_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
+ if (sshm->cache_policy < LIBXL_SSHM_CACHEPOLICY_X86_NORMAL)
+ return ERROR_INVAL;
+
+ return 0;
+}
+
/*
* Local variables:
* mode: C
--
1.9.1
* [PATCH v5 4/6] libxl: support unmapping static shared memory areas during domain destruction
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 3/6] libxl: support mapping static shared memory areas during domain creation Stefano Stabellini
@ 2018-05-25 0:55 ` Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 5/6] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
ian.jackson, julien.grall
From: Zhongze Liu <blackskygg@gmail.com>
Add libxl__sshm_del to unmap static shared memory areas mapped by
libxl__sshm_add during domain creation. The unmapping process is:
* For a master: decrease the refcount of the sshm region; if the refcount
reaches 0, clean up the whole sshm path.
* For a slave:
1) unmap the shared pages and clean up the related xenstore entries. If the
system works normally, all the shared pages will be unmapped, so there
won't be page leaks. In case of errors, the unmapping process will
continue and unmap all the other pages that can be unmapped, so those
pages won't be leaked either.
2) Decrease the refcount of the sshm region; if the refcount reaches 0,
clean up the whole sshm path.
This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file" (see [1]).
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
- add comments
- cannot move unmap before xenstore transaction because it needs to
retrieve begin/size values from xenstore
---
tools/libxl/libxl_domain.c | 5 ++
tools/libxl/libxl_internal.h | 2 +
tools/libxl/libxl_sshm.c | 107 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 114 insertions(+)
diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index 533bcdf..053bbe2 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1060,6 +1060,11 @@ void libxl__destroy_domid(libxl__egc *egc, libxl__destroy_domid_state *dis)
goto out;
}
+ rc = libxl__sshm_del(gc, domid);
+ if (rc) {
+ LOGD(ERROR, domid, "Deleting static shm failed.");
+ }
+
if (libxl__device_pci_destroy_all(gc, domid) < 0)
LOGD(ERROR, domid, "Pci shutdown failed");
rc = xc_domain_pause(ctx->xch, domid);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 076c30f..a2b81bf 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4429,6 +4429,8 @@ static inline bool libxl__string_is_default(char **s)
_hidden int libxl__sshm_add(libxl__gc *gc, uint32_t domid,
libxl_static_shm *sshm, int len);
+_hidden int libxl__sshm_del(libxl__gc *gc, uint32_t domid);
+
_hidden int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
libxl_static_shm *sshms, int len);
_hidden int libxl__sshm_setdefault(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_sshm.c b/tools/libxl/libxl_sshm.c
index f61b80c..9672056 100644
--- a/tools/libxl/libxl_sshm.c
+++ b/tools/libxl/libxl_sshm.c
@@ -94,6 +94,113 @@ int libxl__sshm_check_overlap(libxl__gc *gc, uint32_t domid,
return 0;
}
+/*
+ * Decrease the refcount of an sshm. When refcount reaches 0,
+ * clean up the whole sshm path.
+ * Xenstore operations are done within the same transaction.
+ */
+static void libxl__sshm_decref(libxl__gc *gc, xs_transaction_t xt,
+ const char *sshm_path)
+{
+ int count;
+ const char *count_path, *count_string;
+
+ count_path = GCSPRINTF("%s/usercnt", sshm_path);
+ if (libxl__xs_read_checked(gc, xt, count_path, &count_string))
+ return;
+ count = atoi(count_string);
+
+ if (--count == 0) {
+ libxl__xs_path_cleanup(gc, xt, sshm_path);
+ return;
+ }
+
+ count_string = GCSPRINTF("%d", count);
+ libxl__xs_write_checked(gc, xt, count_path, count_string);
+
+ return;
+}
+
+static void libxl__sshm_do_unmap(libxl__gc *gc, uint32_t domid, const char *id,
+ uint64_t begin, uint64_t size)
+{
+ uint64_t end;
+ begin >>= XC_PAGE_SHIFT;
+ size >>= XC_PAGE_SHIFT;
+ end = begin + size;
+ for (; begin < end; ++begin) {
+ if (xc_domain_remove_from_physmap(CTX->xch, domid, begin)) {
+ SSHM_ERROR(domid, id,
+ "unable to unmap shared page at 0x%"PRIx64".",
+ begin);
+ }
+ }
+}
+
+static void libxl__sshm_del_slave(libxl__gc *gc, xs_transaction_t xt,
+ uint32_t domid, const char *id, bool isretry)
+{
+ const char *slave_path, *begin_str, *size_str;
+ uint64_t begin, size;
+
+ slave_path = GCSPRINTF("%s/slaves/%"PRIu32, SSHM_PATH(id), domid);
+
+ begin_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/begin", slave_path));
+ size_str = libxl__xs_read(gc, xt, GCSPRINTF("%s/size", slave_path));
+ begin = strtoull(begin_str, NULL, 16);
+ size = strtoull(size_str, NULL, 16);
+
+ libxl__sshm_do_unmap(gc, domid, id, begin, size);
+ libxl__xs_path_cleanup(gc, xt, slave_path);
+}
+
+/* Delete static_shm entries in the xenstore. */
+int libxl__sshm_del(libxl__gc *gc, uint32_t domid)
+{
+ int rc, i;
+ bool isretry;
+ xs_transaction_t xt = XBT_NULL;
+ const char *dom_path, *dom_sshm_path, *role;
+ char **sshm_ents;
+ unsigned int sshm_num;
+
+ dom_path = libxl__xs_get_dompath(gc, domid);
+ dom_sshm_path = GCSPRINTF("%s/static_shm", dom_path);
+
+ isretry = false;
+ for (;;) {
+ rc = libxl__xs_transaction_start(gc, &xt);
+ if (rc) goto out;
+
+ if (libxl__xs_read(gc, xt, dom_sshm_path)) {
+ sshm_ents = libxl__xs_directory(gc, xt, dom_sshm_path, &sshm_num);
+ if (!sshm_ents) continue;
+
+ for (i = 0; i < sshm_num; ++i) {
+ role = libxl__xs_read(gc, xt,
+ GCSPRINTF("%s/%s/role",
+ dom_sshm_path,
+ sshm_ents[i]));
+ assert(role);
+ if (!strncmp(role, "slave", 5))
+ libxl__sshm_del_slave(gc, xt, domid, sshm_ents[i], isretry);
+
+ libxl__sshm_decref(gc, xt, SSHM_PATH(sshm_ents[i]));
+ }
+ }
+
+ rc = libxl__xs_transaction_commit(gc, &xt);
+ if (!rc) break;
+ if (rc < 0) goto out;
+ isretry = true;
+ }
+
+ rc = 0;
+out:
+ libxl__xs_transaction_abort(gc, &xt);
+ return rc;
+}
+
/* libxl__sshm_do_map -- map pages into slave's physmap
*
* This functions maps
--
1.9.1
* [PATCH v5 5/6] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 4/6] libxl: support unmapping static shared memory areas during domain destruction Stefano Stabellini
@ 2018-05-25 0:55 ` Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 6/6] docs: documentation about static shared memory regions Stefano Stabellini
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
ian.jackson, julien.grall
From: Zhongze Liu <blackskygg@gmail.com>
Add the parsing utilities for the newly introduced libxl_static_shm struct
to the libxl/libxlu_* family, and add related parsing code in xl to
parse the struct from xl config files. This is for the proposal "Allow
setting up shared memory areas between VMs from xl config file" (see [1]).
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
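For reference, outside of xl the new parser can be driven along these lines
(a sketch with a made-up config name and spec string, error handling elided;
not part of this patch):

    libxl_static_shm sshm;
    XLU_Config *cfg = xlu_cfg_init(stderr, "domU.cfg");

    libxl_static_shm_init(&sshm);
    if (xlu_sshm_parse(cfg, "id=ID1, begin=0x100000, size=0x100000, role=master",
                       &sshm))
        fprintf(stderr, "static_shm spec rejected\n");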
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- remove alignment checks, they were moved to libxl
---
tools/libxl/Makefile | 2 +-
tools/libxl/libxlu_sshm.c | 207 ++++++++++++++++++++++++++++++++++++++++++++++
tools/libxl/libxlutil.h | 6 ++
tools/xl/xl_parse.c | 25 +++++-
4 files changed, 238 insertions(+), 2 deletions(-)
create mode 100644 tools/libxl/libxlu_sshm.c
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 3b762c5..2920f7e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -176,7 +176,7 @@ AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
- libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o
+ libxlu_disk_l.o libxlu_disk.o libxlu_vif.o libxlu_pci.o libxlu_sshm.o
$(LIBXLU_OBJS): CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
$(TEST_PROG_OBJS) _libxl.api-for-check: CFLAGS += $(CFLAGS_libxentoollog) $(CFLAGS_libxentoolcore)
diff --git a/tools/libxl/libxlu_sshm.c b/tools/libxl/libxlu_sshm.c
new file mode 100644
index 0000000..cc709a7
--- /dev/null
+++ b/tools/libxl/libxlu_sshm.c
@@ -0,0 +1,207 @@
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxlu_internal.h"
+#include "xenctrl.h"
+
+#include <ctype.h>
+
+#define PARAM_RE(EXPR) "^\\s*" EXPR "\\s*(,|$)"
+#define WORD_RE "([_a-zA-Z0-9]+)"
+#define EQU_RE PARAM_RE(WORD_RE "\\s*=\\s*" WORD_RE)
+
+#define RET_INVAL(msg, curr_str) do { \
+ xlu__sshm_err(cfg, msg, curr_str); \
+ rc = EINVAL; \
+ goto out; \
+ } while(0)
+
+/* set a member in libxl_static_shm and report an error if it's respecified,
+ * @curr_str indicates the head of the remaining string. */
+#define SET_VAL(var, name, type, value, curr_str) do { \
+ if ((var) != LIBXL_SSHM_##type##_UNKNOWN && (var) != value) { \
+ RET_INVAL("\"" name "\" respecified", curr_str); \
+ } \
+ (var) = value; \
+ } while(0)
+
+
+static void xlu__sshm_err(XLU_Config *cfg, const char *msg,
+ const char *curr_str) {
+ fprintf(cfg->report,
+ "%s: config parsing error in shared_memory: %s at '%s'\n",
+ cfg->config_source, msg, curr_str);
+}
+
+static int parse_prot(XLU_Config *cfg, char *str, libxl_sshm_prot *prot)
+{
+ int rc;
+ libxl_sshm_prot new_prot;
+
+ if (!strcmp(str, "rw")) {
+ new_prot = LIBXL_SSHM_PROT_RW;
+ } else {
+ RET_INVAL("invalid permission flags", str);
+ }
+
+ SET_VAL(*prot, "permission flags", PROT, new_prot, str);
+
+ rc = 0;
+
+ out:
+ return rc;
+}
+
+static int parse_cachepolicy(XLU_Config *cfg, char *str,
+ libxl_sshm_cachepolicy *policy)
+{
+ int rc;
+ libxl_sshm_cachepolicy new_policy;
+
+ if (!strcmp(str, "ARM_normal")) {
+ new_policy = LIBXL_SSHM_CACHEPOLICY_ARM_NORMAL;
+ } else if (!strcmp(str, "x86_normal")) {
+ new_policy = LIBXL_SSHM_CACHEPOLICY_X86_NORMAL;
+ } else {
+ RET_INVAL("invalid cache policy", str);
+ }
+
+ SET_VAL(*policy, "cache policy", CACHEPOLICY, new_policy, str);
+ rc = 0;
+
+ out:
+ return rc;
+}
+
+/* handle key = value pairs */
+static int handle_equ(XLU_Config *cfg, char *key, char *val,
+ libxl_static_shm *sshm)
+{
+ int rc;
+
+ if (!strcmp(key, "id")) {
+ if (strlen(val) > LIBXL_SSHM_ID_MAXLEN) { RET_INVAL("id too long", val); }
+ if (sshm->id && !strcmp(sshm->id, val)) {
+ RET_INVAL("id respecified", val);
+ }
+
+ sshm->id = strdup(val);
+ if (!sshm->id) {
+ fprintf(stderr, "sshm parser out of memory\n");
+ rc = ENOMEM;
+ goto out;
+ }
+ } else if (!strcmp(key, "role")) {
+ libxl_sshm_role new_role;
+
+ if (!strcmp("master", val)) {
+ new_role = LIBXL_SSHM_ROLE_MASTER;
+ } else if (!strcmp("slave", val)) {
+ new_role = LIBXL_SSHM_ROLE_SLAVE;
+ } else {
+ RET_INVAL("invalid role", val);
+ }
+
+ SET_VAL(sshm->role, "role", ROLE, new_role, val);
+ } else if (!strcmp(key, "begin") ||
+ !strcmp(key, "size") ||
+ !strcmp(key, "offset")) {
+ char *endptr;
+ int base = 10;
+ uint64_t new_addr;
+
+ /* Could be in hex form. Note that we don't need to check the length here,
+ * for val[] is NULL-terminated */
+ if (val[0] == '0' && val[1] == 'x') { base = 16; }
+ new_addr = strtoull(val, &endptr, base);
+ if (errno == ERANGE || *endptr)
+ RET_INVAL("invalid begin/size/offset", val);
+
+ if (key[0] == 'b') {
+ SET_VAL(sshm->begin, "beginning address", RANGE, new_addr, val);
+ } else if(key[0] == 's'){
+ SET_VAL(sshm->size, "size", RANGE, new_addr, val);
+ } else {
+ SET_VAL(sshm->offset, "offset", RANGE, new_addr, val);
+ }
+ } else if (!strcmp(key, "prot")) {
+ rc = parse_prot(cfg, val, &sshm->prot);
+ if (rc) { goto out; }
+ } else if (!strcmp(key, "cache_policy")) {
+ rc = parse_cachepolicy(cfg, val, &sshm->cache_policy);
+ if (rc) { goto out; }
+ } else {
+ RET_INVAL("invalid option", key);
+ }
+
+ rc = 0;
+
+ out:
+ return rc;
+}
+
+int xlu_sshm_parse(XLU_Config *cfg, const char *spec,
+ libxl_static_shm *sshm)
+{
+ int rc;
+ regex_t equ_rec;
+ char *buf2 = NULL, *ptr = NULL;
+ regmatch_t pmatch[3];
+
+ rc = regcomp(&equ_rec, EQU_RE, REG_EXTENDED);
+ if (rc) {
+ fprintf(stderr, "sshm parser failed to initialize\n");
+ goto out;
+ }
+
+ buf2 = ptr = strdup(spec);
+ if (!buf2) {
+ fprintf(stderr, "sshm parser out of memory\n");
+ rc = ENOMEM;
+ goto out;
+ }
+
+ /* main parsing loop */
+ while (true) {
+ if (!*ptr) { break; }
+ if (regexec(&equ_rec, ptr, 3, pmatch, 0))
+ RET_INVAL("unrecognized token", ptr);
+
+ ptr[pmatch[1].rm_eo] = '\0';
+ ptr[pmatch[2].rm_eo] = '\0';
+ rc = handle_equ(cfg, ptr + pmatch[1].rm_so,
+ ptr + pmatch[2].rm_so, sshm);
+ if (rc) { goto out; }
+
+ ptr += pmatch[0].rm_eo;
+ }
+
+ if (*ptr) { RET_INVAL("invalid syntax", ptr); }
+
+ /* do some early checks */
+ if (!sshm->id) {
+ RET_INVAL("id not specified", spec);
+ }
+ if (sshm->begin == LIBXL_SSHM_RANGE_UNKNOWN) {
+ RET_INVAL("begin address not specified", spec);
+ }
+ if (sshm->size == LIBXL_SSHM_RANGE_UNKNOWN) {
+ RET_INVAL("size not specified", spec);
+ }
+ if (sshm->prot == LIBXL_SSHM_PROT_UNKNOWN) {
+ sshm->prot = LIBXL_SSHM_PROT_RW;
+ }
+
+ rc = 0;
+
+ out:
+ if (buf2) { free(buf2); }
+ regfree(&equ_rec);
+ return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxlutil.h b/tools/libxl/libxlutil.h
index e81b644..ee39cb5 100644
--- a/tools/libxl/libxlutil.h
+++ b/tools/libxl/libxlutil.h
@@ -118,6 +118,12 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str);
int xlu_vif_parse_rate(XLU_Config *cfg, const char *rate,
libxl_device_nic *nic);
+/*
+ * static shared memory specification parsing
+ */
+int xlu_sshm_parse(XLU_Config *cfg, const char *spec,
+ libxl_static_shm *sshm);
+
#endif /* LIBXLUTIL_H */
/*
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index e6c5448..3a5c874 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -860,7 +860,8 @@ void parse_config_data(const char *config_source,
long l, vcpus = 0;
XLU_Config *config;
XLU_ConfigList *cpus, *vbds, *nics, *pcis, *cvfbs, *cpuids, *vtpms,
- *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs;
+ *usbctrls, *usbdevs, *p9devs, *vdispls, *pvcallsifs_devs,
+ *sshms;
XLU_ConfigList *channels, *ioports, *irqs, *iomem, *viridian, *dtdevs,
*mca_caps;
int num_ioports, num_irqs, num_iomem, num_cpus, num_viridian, num_mca_caps;
@@ -1566,6 +1567,28 @@ void parse_config_data(const char *config_source,
}
}
+ if (!xlu_cfg_get_list (config, "static_shm", &sshms, 0, 0)) {
+ d_config->num_sshms = 0;
+ d_config->sshms = NULL;
+ while ((buf = xlu_cfg_get_listitem (sshms, d_config->num_sshms)) != NULL) {
+ libxl_static_shm *sshm;
+ char *buf2 = strdup(buf);
+ int ret;
+
+ sshm = ARRAY_EXTEND_INIT_NODEVID(d_config->sshms,
+ d_config->num_sshms,
+ libxl_static_shm_init);
+ ret = xlu_sshm_parse(config, buf2, sshm);
+ if (ret) {
+ fprintf(stderr,
+ "xl: Invalid argument for static_shm: %s", buf2);
+ exit(EXIT_FAILURE);
+ }
+
+ free(buf2);
+ }
+ }
+
if (!xlu_cfg_get_list(config, "p9", &p9devs, 0, 0)) {
libxl_device_p9 *p9;
char *security_model = NULL;
--
1.9.1
* [PATCH v5 6/6] docs: documentation about static shared memory regions
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
2018-05-25 0:55 ` [PATCH v5 5/6] libxl:xl: add parsing code to parse "libxl_static_sshm" from xl config files Stefano Stabellini
@ 2018-05-25 0:55 ` Stefano Stabellini
2018-05-26 0:50 ` [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Zhongze Liu
2018-10-24 14:15 ` Oleksandr Andrushchenko
From: Stefano Stabellini @ 2018-05-25 0:55 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, sstabellini, wei.liu2, blackskygg,
ian.jackson, julien.grall
From: Zhongze Liu <blackskygg@gmail.com>
Add documentation describing the motivation, usage, use cases and other
relevant information about the static shared memory feature.
This is for the proposal "Allow setting up shared memory areas between VMs
from xl config file". See:
https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
Signed-off-by: Zhongze Liu <blackskygg@gmail.com>
Signed-off-by: Stefano Stabellini <stefanos@xilinx.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: xen-devel@lists.xen.org
---
Changes in v5:
- fix typos
---
docs/man/xl-static-shm-configuration.pod.5 | 257 +++++++++++++++++++++++++++++
docs/man/xl.cfg.pod.5.in | 8 +
docs/misc/xenstore-paths.markdown | 47 ++++++
3 files changed, 312 insertions(+)
create mode 100644 docs/man/xl-static-shm-configuration.pod.5
diff --git a/docs/man/xl-static-shm-configuration.pod.5 b/docs/man/xl-static-shm-configuration.pod.5
new file mode 100644
index 0000000..0865835
--- /dev/null
+++ b/docs/man/xl-static-shm-configuration.pod.5
@@ -0,0 +1,257 @@
+=head1 NAME
+
+xl-static-shm-configuration - XL Static Shared Memory Configuration Syntax
+
+
+(B<NOTE>: This is currently only available to ARM guests.)
+
+=head1 DESCRIPTION
+
+The static_shm option allows users to statically set up shared memory regions
+among a group of VMs, enabling guests without grant table support to do
+shm-based communication.
+
+Every shared region is:
+
+=over 4
+
+* Uniquely identified by a string that is no longer than 128 characters, which
+is called an B<identifier> in this document.
+
+* Backed by exactly one domain, which is called the B<master> domain; all
+the other domains that also share this region are called B<slave>s.
+
+=back
+
+=head1 SYNTAX
+
+This document specifies the syntax of the static shared memory configuration in
+the xl config file. It has the following form:
+
+ static_shm = [ "SSHM_SPEC", "SSHM_SPEC", ... ]
+
+where each C<SSHM_SPEC> is in this form:
+
+ [<key>=<value>,]*
+
+Valid examples of C<SSHM_SPEC> are:
+
+ id=ID1, begin=0x100000, size=0x100000, role=master, cache_policy=ARM_normal
+ id=ID1, offset = 0, begin=0x500000, size=0x100000, role=slave, prot=rw
+ id=ID2, begin=0x300000, size=0x100000, role=master
+ id=ID2, offset = 0x10000, begin=0x690000, size=0x110000, role=slave
+ id=ID2, offset = 0x10000, begin=0x690000, size=0x110000, role=slave
+
+These might be specified in the domain config file like this:
+
+ static_shm = ["id=ID2, offset = 0x10000, begin=0x690000, size=0x110000,
+role=slave"]
+
+
+More formally, the string is a series of comma-separated keyword/value
+pairs. Each parameter may be specified at most once. Default values apply if
+the parameter is not specified.
+
+=head1 Parameters
+
+=over 4
+
+=item B<id>
+
+=over 4
+
+=item Description
+
+The unique identifier of the shared memory region.
+
+Every identifier may appear at most once in each xl config file.
+
+=item Supported values
+
+A string that contains alphanumerics and "_"s, and is no longer than 128
+characters.
+
+=item Default value
+
+None, this parameter is mandatory.
+
+=back
+
+=item B<begin>/B<size>
+
+=over 4
+
+=item Description
+
+The boundaries of the shared memory area.
+
+=item Supported values
+
+Same as B<offset>.
+
+=item Default Value
+
+None, this parameter is mandatory.
+
+=back
+
+=item B<offset>
+
+=over 4
+
+=item Description
+
+Can only appear when B<role> = slave. If set, the address mapping will not
+start from the beginning of the backing memory region, but from the middle
+of it (B<offset> bytes away from the beginning). See the graph below:
+
+With B<offset> = 0, the mapping will look like:
+
+ backing memory region: #########################################
+ | |
+ | |
+ | |
+ V V
+ slave's shared region: #########################
+
+With B<offset> > 0:
+
+ backing memory region: #########################################
+ |<-- offset -->|| |
+ | |
+ | |
+ V V
+ slave's memory region: #########################
+
+=item Supported values
+
+Decimal, or hexadecimal with a "0x" prefix; the value must be a multiple of
+the hypervisor page granularity (currently 4K on both ARM and x86).
+
+=item Default value
+
+0x0
+
+=back
+
+=item B<role>
+
+=over 4
+
+=item Description
+
+The backing area is taken from one domain, which we will mark
+as the "master domain", and this domain should be created prior to any
+slave domains that depend on it.
+
+This argument specifies the role of this domain.
+
+=item Supported values
+
+master, slave
+
+=item Default value
+
+slave
+
+=back
+
+=item B<prot>
+
+=over 4
+
+=item Description
+
+When B<role> = master, this means the largest set of stage-2 permission flags
+that can be granted to the slave domains. When B<role> = slave, this means the
+stage-2 permission flags of the shared memory area.
+
+=item Supported values
+
+Currently only 'rw' is supported.
+
+=item Default value
+
+rw
+
+=back
+
+=item B<cache_policy>
+
+=over 4
+
+=item Description
+
+The stage-2 cacheability/shareability attributes of the shared memory area.
+This can only appear when B<role> = master.
+
+=item Supported values
+
+Currently, only the following policy is supported:
+
+=over 4
+
+=item B<ARM_normal>
+
+Only applicable to ARM guests. This means Inner and Outer Write-Back
+Cacheable, and Inner Shareable.
+
+=back
+
+=item Default value
+
+ARM_normal
+
+=back
+
+=back
+
+=head1 TYPICAL USAGE
+
+A typical procedure for setting up a shared memory region among several VMs
+is as follows:
+
+=over 4
+
+=item 1.
+
+Add a static_shm option to the master domain's xl config file, assign an
+B<id> to it, mark its B<role> as master, and set up the boundaries, the
+B<prot> flag, and the B<cache_policy> appropriately.
+
+=item 2.
+
+Add a static_shm option to every slave domain's xl config file, set the
+B<id> to the same value as the master's, and set up the B<offset>, the
+boundaries and the B<prot> flag appropriately.
+
+=item 3.
+
+Create the master domain.
+
+=item 4.
+
+Create the slaves.
+
+=back
+
+Remember that the master domain must be created before any of its slave
+domains, because the slaves depend on the memory pages provided by their
+master.
+
+=head1 EXAMPLE
+
+Suppose that we have three domains, vm1 through vm3, and we want to set up
+two shared regions, ID1 and ID2, among them, with the following address
+mapping:
+
+ ID1: (vm1 : 0x100000~0x200000) <=====> (vm2 : 0x500000~0x600000)
+ ID2: (vm1 : 0x310000~0x400000) <=====> (vm3 : 0x690000~0x800000)
+
+According to the syntax defined above, the xl config files of the three
+domains should contain the following content:
+
+In the xl config file of vm1:
+
+ static_shm = [ "id=ID1, begin=0x100000, size=0x100000, role=master,
+                cache_policy=ARM_normal, prot=rw",
+                "id=ID2, begin=0x300000, size=0x100000, role=master" ]
+
+In the xl config file of vm2:
+
+ static_shm = [ "id=ID1, offset=0, begin=0x500000, size=0x100000,
+                role=slave, prot=rw" ]
+
+In the xl config file of vm3:
+
+ static_shm = [ "id=ID2, offset=0x10000, begin=0x690000,
+                size=0x110000, role=slave" ]
+
+After that, just create vm1 first, and then create vm2 and vm3 in any order.
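+
+For instance, assuming the three domains' config files are named vm1.cfg,
+vm2.cfg and vm3.cfg (the file names here are purely illustrative), the
+creation sequence could be:
+
+ # vm1.cfg holds the master entries, vm2.cfg and vm3.cfg the slave entries
+ # the master must be created first, because its pages back the shared regions
+ xl create vm1.cfg
+ # the slaves can then be created in any order
+ xl create vm2.cfg
+ xl create vm3.cfg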
diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 47d8824..88ef18e 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -277,6 +277,14 @@ memory=8096 will report significantly less memory available for use
than a system with maxmem=8096 memory=8096 due to the memory overhead
of having to track the unused pages.
+=item B<static_shm=[ "SSHM_SPEC", "SSHM_SPEC", ... ]>
+
+Specifies the static shared memory regions of this guest. Static shared
+memory regions enable guests to communicate with each other through
+one or more shared memory regions, even without grant table support.
+Currently, this only works on ARM guests.
+See L<xl-static-shm-configuration(5)> for more details.
+
=back
=head3 Guest Virtual NUMA Configuration
diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 60c8b3f..59b4be5 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -174,6 +174,14 @@ than this amount of RAM.
The size of the video RAM this domain is configured with.
+#### ~/static_shm/[_a-zA-Z0-9]+/role = ("master"|"slave") []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The role of this domain in the static shared memory region whose id matches
+the `[_a-zA-Z0-9]+` part in the path. (Described in the manpage
+**xl-static-shm-configuration(5)**).
+
#### ~/device/suspend/event-channel = ""|EVTCHN [w]
The domain's suspend event channel. The toolstack will create this
@@ -548,6 +556,45 @@ type. The name of each backend directory is the same as the backend type
Contains the PIDs of the device models running on the domain.
+#### /libxl/static_shm/[_a-zA-Z0-9]+/* []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The following paths contain the backing memory parameters of a static shared
+memory region whose id matches the `[_a-zA-Z0-9]+` part in the path. Their
+formats and meanings are the same as those in an xl config file, described in
+the manpage **xl-static-shm-configuration(5)**.
+
+* begin/size: the boundary of the backing memory region.
+* prot: the largest set of stage-2 permission flags that can be granted to
+ the slave domains.
+* cache_policy: the stage-2 cacheability/shareability attributes of the backing
+ memory region.
+
+The following paths contain run-time information about the static shared memory
+region.
+
+* master: the domid of the backing domain.
+* slaves: information about the slaves that are sharing the region, see
+  `/libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/*` below.
+* usercnt: an integer. This is the reference count of the backing memory
+  region, including the master domain itself. When this value reaches 0, the
+  backing memory region will be freed.
+
+#### /libxl/static_shm/[_a-zA-Z0-9]+/slaves/$DOMID/* []
+
+(Note: Currently, this will only appear on ARM guests.)
+
+The following paths contain static shared memory region parameters of a slave
+domain. Their formats and meanings are the same as those in xl config files,
+described in the manpage **xl-static-shm-configuration(5)**.
+
+* begin/size: the boundary of the shared memory region.
+* prot: the stage-2 permission flags of the shared memory area.
+* offset: when mapping the backing memory region to the slave's memory space,
+ the mapping will start from offset bytes after the beginning of the backing
+ memory region.
+
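+As an illustration only (the region id, the domids and the exact value
+formatting below are hypothetical), the paths for a region ID1 backed by
+domain 1 and shared with a slave domain 2 could look like:
+
+    /libxl/static_shm/ID1/begin = "0x100000"
+    /libxl/static_shm/ID1/size = "0x100000"
+    /libxl/static_shm/ID1/prot = "rw"
+    /libxl/static_shm/ID1/cache_policy = "ARM_normal"
+    /libxl/static_shm/ID1/master = "1"
+    /libxl/static_shm/ID1/usercnt = "2"
+    /libxl/static_shm/ID1/slaves/2/begin = "0x500000"
+    /libxl/static_shm/ID1/slaves/2/size = "0x100000"
+    /libxl/static_shm/ID1/slaves/2/prot = "rw"
+    /libxl/static_shm/ID1/slaves/2/offset = "0x0"
+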
## Virtual Machine Paths
The /vm/$UUID namespace is used by toolstacks to store various
--
1.9.1
* Re: [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
` (5 preceding siblings ...)
2018-05-25 0:55 ` [PATCH v5 6/6] docs: documentation about static shared memory regions Stefano Stabellini
@ 2018-05-26 0:50 ` Zhongze Liu
2018-05-29 21:24 ` Stefano Stabellini
2018-10-24 14:15 ` Oleksandr Andrushchenko
7 siblings, 1 reply; 13+ messages in thread
From: Zhongze Liu @ 2018-05-26 0:50 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: Ian Jackson, Julien Grall, Wei Liu, xen-devel
Hi Stefano,
Thank you very much for your fixes. It seems that you've addressed all
the issues during the last round of reviews.
I'm sorry for my off-and-on contribution time, which has largely
delayed the process of merging the patch set.
Please tell me if you need anything from me.
Cheers,
Zhongze Liu
* Re: [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
2018-05-26 0:50 ` [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Zhongze Liu
@ 2018-05-29 21:24 ` Stefano Stabellini
0 siblings, 0 replies; 13+ messages in thread
From: Stefano Stabellini @ 2018-05-29 21:24 UTC (permalink / raw)
To: Zhongze Liu
Cc: Ian Jackson, Julien Grall, Stefano Stabellini, Wei Liu, xen-devel
No problem, thank you for your work on this.
On Sat, 26 May 2018, Zhongze Liu wrote:
> Hi Stefano,
>
> Thank you very much for your fixes. It seems that you've addressed all
> the issues during the last round of reviews.
> I'm sorry for my off-and-on contribution time, which has largely
> delayed the process of merging the patch set.
> Please tell me if you need anything from me.
>
> Cheers,
> Zhongze Liu
* Re: [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
2018-05-25 0:55 [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Stefano Stabellini
` (6 preceding siblings ...)
2018-05-26 0:50 ` [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files Zhongze Liu
@ 2018-10-24 14:15 ` Oleksandr Andrushchenko
2018-10-24 14:24 ` Julien Grall
7 siblings, 1 reply; 13+ messages in thread
From: Oleksandr Andrushchenko @ 2018-10-24 14:15 UTC (permalink / raw)
To: Stefano Stabellini, xen-devel
Cc: wei.liu2, julien.grall, ian.jackson, blackskygg
Hi, Stefano!
I'm just curious if the below is on your TODO list [1]:
3. Future Directions:
====================================================
Allow users to optionally share *mfn-contiguous* pages.
Thank you,
Oleksandr
[1] https://lists.xen.org/archives/html/xen-devel/2017-08/msg03242.html
* Re: [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
2018-10-24 14:15 ` Oleksandr Andrushchenko
@ 2018-10-24 14:24 ` Julien Grall
2018-10-24 14:32 ` Oleksandr Andrushchenko
0 siblings, 1 reply; 13+ messages in thread
From: Julien Grall @ 2018-10-24 14:24 UTC (permalink / raw)
To: Oleksandr Andrushchenko, Stefano Stabellini, xen-devel
Cc: wei.liu2, ian.jackson, blackskygg
On 10/24/18 3:15 PM, Oleksandr Andrushchenko wrote:
> Hi, Stefano!
>
> I'm just curious if the below is on your TODO list [1]:
>
> 3. Future Directions:
> ====================================================
>
> Allow users to optionally share *mfn-contiguous* pages.
What is the use case for that?
Cheers,
--
Julien Grall
* Re: [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
2018-10-24 14:24 ` Julien Grall
@ 2018-10-24 14:32 ` Oleksandr Andrushchenko
2018-10-24 14:33 ` Julien Grall
0 siblings, 1 reply; 13+ messages in thread
From: Oleksandr Andrushchenko @ 2018-10-24 14:32 UTC (permalink / raw)
To: Julien Grall, Stefano Stabellini, xen-devel
Cc: wei.liu2, ian.jackson, blackskygg
On 10/24/2018 05:24 PM, Julien Grall wrote:
>
>
> On 10/24/18 3:15 PM, Oleksandr Andrushchenko wrote:
>> Hi, Stefano!
>>
>> I'm just curious if the below is on your TODO list [1]:
>>
>> 3. Future Directions:
>> ====================================================
>>
>> Allow users to optionally share *mfn-contiguous* pages.
>
> What is the use case for that?
No real use-case at the moment, but we might have one: for instance,
sharing some memory which is used by device firmware and domains.
But in this case iomem could probably help...
>
> Cheers,
>
* Re: [PATCH v5 0/6] Allow setting up shared memory areas between VMs from xl config files
2018-10-24 14:32 ` Oleksandr Andrushchenko
@ 2018-10-24 14:33 ` Julien Grall
0 siblings, 0 replies; 13+ messages in thread
From: Julien Grall @ 2018-10-24 14:33 UTC (permalink / raw)
To: Oleksandr Andrushchenko, Stefano Stabellini, xen-devel
Cc: wei.liu2, ian.jackson, blackskygg
On 10/24/18 3:32 PM, Oleksandr Andrushchenko wrote:
> On 10/24/2018 05:24 PM, Julien Grall wrote:
>>
>>
>> On 10/24/18 3:15 PM, Oleksandr Andrushchenko wrote:
>>> Hi, Stefano!
>>>
>>> I'm just curious if the below is on your TODO list [1]:
>>>
>>> 3. Future Directions:
>>> ====================================================
>>>
>>> Allow users to optionally share *mfn-contiguous* pages.
>>
>> What is the use case for that?
> No real use-case at the moment, but we might have one: for instance,
> sharing some memory which is used by device firmware and domains.
> But in this case iomem could probably help...
Yes, that would be iomem. Shared memory is more between domains.
Cheers,
--
Julien Grall