* [PATCH v20 0/2] guest resource mapping (reprise)
@ 2018-08-06 12:54 Paul Durrant
2018-08-06 12:54 ` [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table Paul Durrant
2018-08-06 12:54 ` [PATCH v20 2/2] tools/libxenctrl: use new xenforeignmemory API to seed grant table Paul Durrant
0 siblings, 2 replies; 7+ messages in thread
From: Paul Durrant @ 2018-08-06 12:54 UTC (permalink / raw)
To: xen-devel; +Cc: Paul Durrant
These are the remaining patches from my original resource mapping series
that did not make it into 4.11.
Paul Durrant (2):
common: add a new mappable resource type: XENMEM_resource_grant_table
tools/libxenctrl: use new xenforeignmemory API to seed grant table
tools/libxc/include/xc_dom.h | 8 +--
tools/libxc/xc_dom_boot.c | 114 +++++++++++++++++++++++++-----------
tools/libxc/xc_sr_restore_x86_hvm.c | 10 ++--
tools/libxc/xc_sr_restore_x86_pv.c | 2 +-
tools/libxl/libxl_dom.c | 1 -
tools/python/xen/lowlevel/xc/xc.c | 6 +-
xen/common/grant_table.c | 74 +++++++++++++++++++----
xen/common/memory.c | 56 +++++++++++++++++-
xen/include/public/memory.h | 9 ++-
xen/include/xen/grant_table.h | 4 ++
10 files changed, 219 insertions(+), 65 deletions(-)
--
2.11.0
* [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table
2018-08-06 12:54 [PATCH v20 0/2] guest resource mapping (reprise) Paul Durrant
@ 2018-08-06 12:54 ` Paul Durrant
2018-08-07 13:38 ` Jan Beulich
2018-08-06 12:54 ` [PATCH v20 2/2] tools/libxenctrl: use new xenforeignmemory API to seed grant table Paul Durrant
1 sibling, 1 reply; 7+ messages in thread
From: Paul Durrant @ 2018-08-06 12:54 UTC (permalink / raw)
To: xen-devel
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
Ian Jackson, Tim Deegan, Paul Durrant, Jan Beulich
This patch allows grant table frames to be mapped using the
XENMEM_acquire_resource memory op.
NOTE: This patch expands the on-stack mfn_list array in acquire_resource()
but it is still small enough to remain on-stack.
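For illustration only (not part of this patch): a minimal tools-side sketch of
acquiring a guest's shared grant table frames through the new resource type,
using the libxenforeignmemory interface that patch 2 of this series builds on.
The helper name and the domid value are hypothetical, and error handling is
trimmed.

#include <sys/mman.h>
#include <xenforeignmemory.h>
#include <xen/memory.h>             /* XENMEM_resource_grant_table */

/* Hypothetical helper: map frame 0 of the shared grant table of 'domid'. */
static int peek_shared_grant_frame(xenforeignmemory_handle *fmem,
                                   uint32_t domid /* assumed value */)
{
    xenforeignmemory_resource_handle *fres;
    void *addr = NULL;

    fres = xenforeignmemory_map_resource(
        fmem, domid, XENMEM_resource_grant_table,
        XENMEM_resource_grant_table_id_shared, 0 /* frame */, 1 /* nr */,
        &addr, PROT_READ | PROT_WRITE, 0);
    if ( !fres )
        return -1;                  /* e.g. EOPNOTSUPP on an old privcmd */

    /* 'addr' now points at the guest's grant_entry_v1_t array. */

    xenforeignmemory_unmap_resource(fmem, fres);
    return 0;
}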
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
v19:
- Add a check to prevent PVH/HVM tools domains from mapping grant status frames
this way, as the mapping infrastructure in Xen does not yet implement the
necessary reference counting to make this safe.
- Make sure grant table version is set before any attempt to grow the table.
v18:
- Non-trivial re-base of grant table code.
- Dropped Jan's R-b because of the grant table changes.
v13:
- Re-work the internals to avoid using the XENMAPIDX_grant_table_status
hack.
v12:
- Dropped limit checks as requested by Jan.
v10:
- Addressed comments from Jan.
v8:
- The functionality was originally incorporated into the earlier patch
"x86/mm: add HYPERVISOR_memory_op to acquire guest resources".
---
xen/common/grant_table.c | 74 +++++++++++++++++++++++++++++++++++--------
xen/common/memory.c | 56 +++++++++++++++++++++++++++++++-
xen/include/public/memory.h | 9 ++++--
xen/include/xen/grant_table.h | 4 +++
4 files changed, 127 insertions(+), 16 deletions(-)
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index d9ec711c73..9db8e953b3 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -3860,6 +3860,38 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
}
#endif
+/* caller must hold read or write lock */
+static int gnttab_get_status_frame_mfn(struct domain *d,
+ unsigned long idx, mfn_t *mfn)
+{
+ struct grant_table *gt = d->grant_table;
+
+ if ( idx >= nr_status_frames(gt) )
+ return -EINVAL;
+
+ *mfn = _mfn(virt_to_mfn(gt->status[idx]));
+ return 0;
+}
+
+/* caller must hold write lock */
+static int gnttab_get_shared_frame_mfn(struct domain *d,
+ unsigned long idx, mfn_t *mfn)
+{
+ struct grant_table *gt = d->grant_table;
+
+ if ( gt->gt_version == 0 )
+ gt->gt_version = 1;
+
+ if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
+ gnttab_grow_table(d, idx + 1);
+
+ if ( idx >= nr_grant_frames(gt) )
+ return -EINVAL;
+
+ *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
+ return 0;
+}
+
int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
mfn_t *mfn)
{
@@ -3877,21 +3909,11 @@ int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
{
idx &= ~XENMAPIDX_grant_table_status;
status = true;
- if ( idx < nr_status_frames(gt) )
- *mfn = _mfn(virt_to_mfn(gt->status[idx]));
- else
- rc = -EINVAL;
- }
- else
- {
- if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
- gnttab_grow_table(d, idx + 1);
- if ( idx < nr_grant_frames(gt) )
- *mfn = _mfn(virt_to_mfn(gt->shared_raw[idx]));
- else
- rc = -EINVAL;
+ rc = gnttab_get_status_frame_mfn(d, idx, mfn);
}
+ else
+ rc = gnttab_get_shared_frame_mfn(d, idx, mfn);
if ( !rc && paging_mode_translate(d) &&
!gfn_eq(gnttab_get_frame_gfn(gt, status, idx), INVALID_GFN) )
@@ -3906,6 +3928,32 @@ int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
return rc;
}
+int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
+ mfn_t *mfn)
+{
+ struct grant_table *gt = d->grant_table;
+ int rc;
+
+ grant_write_lock(gt);
+ rc = gnttab_get_shared_frame_mfn(d, idx, mfn);
+ grant_write_unlock(gt);
+
+ return rc;
+}
+
+int gnttab_get_status_frame(struct domain *d, unsigned long idx,
+ mfn_t *mfn)
+{
+ struct grant_table *gt = d->grant_table;
+ int rc;
+
+ grant_read_lock(gt);
+ rc = gnttab_get_status_frame_mfn(d, idx, mfn);
+ grant_read_unlock(gt);
+
+ return rc;
+}
+
static void gnttab_usage_print(struct domain *rd)
{
int first = 1;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index e29d596727..7910bb16cc 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -23,6 +23,7 @@
#include <xen/numa.h>
#include <xen/mem_access.h>
#include <xen/trace.h>
+#include <xen/grant_table.h>
#include <asm/current.h>
#include <asm/hardirq.h>
#include <asm/p2m.h>
@@ -982,6 +983,54 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
}
+static int acquire_grant_table(struct domain *d, unsigned int id,
+ unsigned long frame,
+ unsigned int nr_frames,
+ xen_pfn_t mfn_list[])
+{
+ unsigned int i = nr_frames;
+
+ /*
+ * FIXME: It is not currently safe to map grant status frames if they
+ * will be inserted into the caller's P2M, because these
+ * insertions are not yet properly reference counted.
+ * This restriction can be removed when appropriate reference
+ * counting is added.
+ */
+ if ( paging_mode_translate(current->domain) &&
+ (id == XENMEM_resource_grant_table_id_status) )
+ return -EOPNOTSUPP;
+
+ /* Iterate backwards in case table needs to grow */
+ while ( i-- != 0 )
+ {
+ mfn_t mfn = INVALID_MFN;
+ int rc;
+
+ switch ( id )
+ {
+ case XENMEM_resource_grant_table_id_shared:
+ rc = gnttab_get_shared_frame(d, frame + i, &mfn);
+ break;
+
+ case XENMEM_resource_grant_table_id_status:
+ rc = gnttab_get_status_frame(d, frame + i, &mfn);
+ break;
+
+ default:
+ rc = -EINVAL;
+ break;
+ }
+
+ if ( rc )
+ return rc;
+
+ mfn_list[i] = mfn_x(mfn);
+ }
+
+ return 0;
+}
+
static int acquire_resource(
XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
{
@@ -992,7 +1041,7 @@ static int acquire_resource(
* moment since they are small, but if they need to grow in future
* use-cases then per-CPU arrays or heap allocations may be required.
*/
- xen_pfn_t mfn_list[2];
+ xen_pfn_t mfn_list[32];
int rc;
if ( copy_from_guest(&xmar, arg, 1) )
@@ -1027,6 +1076,11 @@ static int acquire_resource(
switch ( xmar.type )
{
+ case XENMEM_resource_grant_table:
+ rc = acquire_grant_table(d, xmar.id, xmar.frame, xmar.nr_frames,
+ mfn_list);
+ break;
+
default:
rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
xmar.nr_frames, mfn_list, &xmar.flags);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index bf2f81faae..1735a53916 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -611,16 +611,21 @@ struct xen_mem_acquire_resource {
uint16_t type;
#define XENMEM_resource_ioreq_server 0
+#define XENMEM_resource_grant_table 1
/*
* IN - a type-specific resource identifier, which must be zero
* unless stated otherwise.
*
* type == XENMEM_resource_ioreq_server -> id == ioreq server id
+ * type == XENMEM_resource_grant_table -> id defined below
*/
uint32_t id;
- /*
- * IN/OUT - As an IN parameter number of frames of the resource
+
+#define XENMEM_resource_grant_table_id_shared 0
+#define XENMEM_resource_grant_table_id_status 1
+
+ /* IN/OUT - As an IN parameter number of frames of the resource
* to be mapped. However, if the specified value is 0 and
* frame_list is NULL then this field will be set to the
* maximum value supported by the implementation on return.
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 0286ba33dd..c881414e5b 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -58,6 +58,10 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
mfn_t *mfn);
+int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
+ mfn_t *mfn);
+int gnttab_get_status_frame(struct domain *d, unsigned long idx,
+ mfn_t *mfn);
unsigned int gnttab_dom0_frames(void);
--
2.11.0
* [PATCH v20 2/2] tools/libxenctrl: use new xenforeignmemory API to seed grant table
2018-08-06 12:54 [PATCH v20 0/2] guest resource mapping (reprise) Paul Durrant
2018-08-06 12:54 ` [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table Paul Durrant
@ 2018-08-06 12:54 ` Paul Durrant
1 sibling, 0 replies; 7+ messages in thread
From: Paul Durrant @ 2018-08-06 12:54 UTC (permalink / raw)
To: xen-devel; +Cc: Paul Durrant, Ian Jackson
A previous patch added support for priv-mapping guest resources directly
(rather than having to foreign-map, which requires P2M modification for
HVM guests).
This patch makes use of the new API to seed the guest grant table unless
the underlying infrastructure (i.e. privcmd) doesn't support it, in which
case the old scheme is used.
NOTE: The call to xc_dom_gnttab_hvm_seed() in hvm_build_set_params() was
actually unnecessary, as the grant table has already been seeded
by a prior call to xc_dom_gnttab_init() made by libxl__build_dom().
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
v18:
- Trivial re-base.
v13:
- Re-base.
v10:
- Use new id constant for grant table.
v4:
- Minor cosmetic fix suggested by Roger.
v3:
- Introduced xc_dom_set_gnttab_entry() to avoid duplicated code.
---
tools/libxc/include/xc_dom.h | 8 +--
tools/libxc/xc_dom_boot.c | 114 +++++++++++++++++++++++++-----------
tools/libxc/xc_sr_restore_x86_hvm.c | 10 ++--
tools/libxc/xc_sr_restore_x86_pv.c | 2 +-
tools/libxl/libxl_dom.c | 1 -
tools/python/xen/lowlevel/xc/xc.c | 6 +-
6 files changed, 92 insertions(+), 49 deletions(-)
diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
index 8a66889c68..a8a0c0da66 100644
--- a/tools/libxc/include/xc_dom.h
+++ b/tools/libxc/include/xc_dom.h
@@ -337,12 +337,8 @@ void *xc_dom_boot_domU_map(struct xc_dom_image *dom, xen_pfn_t pfn,
int xc_dom_boot_image(struct xc_dom_image *dom);
int xc_dom_compat_check(struct xc_dom_image *dom);
int xc_dom_gnttab_init(struct xc_dom_image *dom);
-int xc_dom_gnttab_hvm_seed(xc_interface *xch, uint32_t domid,
- xen_pfn_t console_gmfn,
- xen_pfn_t xenstore_gmfn,
- uint32_t console_domid,
- uint32_t xenstore_domid);
-int xc_dom_gnttab_seed(xc_interface *xch, uint32_t domid,
+int xc_dom_gnttab_seed(xc_interface *xch, uint32_t guest_domid,
+ bool is_hvm,
xen_pfn_t console_gmfn,
xen_pfn_t xenstore_gmfn,
uint32_t console_domid,
diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 2e5681dc5d..8307ebeaf6 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -256,11 +256,29 @@ static xen_pfn_t xc_dom_gnttab_setup(xc_interface *xch, uint32_t domid)
return gmfn;
}
-int xc_dom_gnttab_seed(xc_interface *xch, uint32_t domid,
- xen_pfn_t console_gmfn,
- xen_pfn_t xenstore_gmfn,
- uint32_t console_domid,
- uint32_t xenstore_domid)
+static void xc_dom_set_gnttab_entry(xc_interface *xch,
+ grant_entry_v1_t *gnttab,
+ unsigned int idx,
+ uint32_t guest_domid,
+ uint32_t backend_domid,
+ xen_pfn_t backend_gmfn)
+{
+ if ( guest_domid == backend_domid || backend_gmfn == -1)
+ return;
+
+ xc_dom_printf(xch, "%s: [%u] -> 0x%"PRI_xen_pfn,
+ __FUNCTION__, idx, backend_gmfn);
+
+ gnttab[idx].flags = GTF_permit_access;
+ gnttab[idx].domid = backend_domid;
+ gnttab[idx].frame = backend_gmfn;
+}
+
+static int compat_gnttab_seed(xc_interface *xch, uint32_t domid,
+ xen_pfn_t console_gmfn,
+ xen_pfn_t xenstore_gmfn,
+ uint32_t console_domid,
+ uint32_t xenstore_domid)
{
xen_pfn_t gnttab_gmfn;
@@ -284,18 +302,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, uint32_t domid,
return -1;
}
- if ( domid != console_domid && console_gmfn != -1)
- {
- gnttab[GNTTAB_RESERVED_CONSOLE].flags = GTF_permit_access;
- gnttab[GNTTAB_RESERVED_CONSOLE].domid = console_domid;
- gnttab[GNTTAB_RESERVED_CONSOLE].frame = console_gmfn;
- }
- if ( domid != xenstore_domid && xenstore_gmfn != -1)
- {
- gnttab[GNTTAB_RESERVED_XENSTORE].flags = GTF_permit_access;
- gnttab[GNTTAB_RESERVED_XENSTORE].domid = xenstore_domid;
- gnttab[GNTTAB_RESERVED_XENSTORE].frame = xenstore_gmfn;
- }
+ xc_dom_set_gnttab_entry(xch, gnttab, GNTTAB_RESERVED_CONSOLE,
+ domid, console_domid, console_gmfn);
+ xc_dom_set_gnttab_entry(xch, gnttab, GNTTAB_RESERVED_XENSTORE,
+ domid, xenstore_domid, xenstore_gmfn);
if ( munmap(gnttab, PAGE_SIZE) == -1 )
{
@@ -313,11 +323,11 @@ int xc_dom_gnttab_seed(xc_interface *xch, uint32_t domid,
return 0;
}
-int xc_dom_gnttab_hvm_seed(xc_interface *xch, uint32_t domid,
- xen_pfn_t console_gpfn,
- xen_pfn_t xenstore_gpfn,
- uint32_t console_domid,
- uint32_t xenstore_domid)
+static int compat_gnttab_hvm_seed(xc_interface *xch, uint32_t domid,
+ xen_pfn_t console_gpfn,
+ xen_pfn_t xenstore_gpfn,
+ uint32_t console_domid,
+ uint32_t xenstore_domid)
{
int rc;
xen_pfn_t scratch_gpfn;
@@ -356,7 +366,7 @@ int xc_dom_gnttab_hvm_seed(xc_interface *xch, uint32_t domid,
return -1;
}
- rc = xc_dom_gnttab_seed(xch, domid,
+ rc = compat_gnttab_seed(xch, domid,
console_gpfn, xenstore_gpfn,
console_domid, xenstore_domid);
if (rc != 0)
@@ -381,18 +391,56 @@ int xc_dom_gnttab_hvm_seed(xc_interface *xch, uint32_t domid,
return 0;
}
-int xc_dom_gnttab_init(struct xc_dom_image *dom)
+int xc_dom_gnttab_seed(xc_interface *xch, uint32_t guest_domid,
+ bool is_hvm, xen_pfn_t console_gmfn,
+ xen_pfn_t xenstore_gmfn, uint32_t console_domid,
+ uint32_t xenstore_domid)
{
- if ( xc_dom_translated(dom) ) {
- return xc_dom_gnttab_hvm_seed(dom->xch, dom->guest_domid,
- dom->console_pfn, dom->xenstore_pfn,
- dom->console_domid, dom->xenstore_domid);
- } else {
- return xc_dom_gnttab_seed(dom->xch, dom->guest_domid,
- xc_dom_p2m(dom, dom->console_pfn),
- xc_dom_p2m(dom, dom->xenstore_pfn),
- dom->console_domid, dom->xenstore_domid);
+ xenforeignmemory_handle* fmem = xch->fmem;
+ xenforeignmemory_resource_handle *fres;
+ void *addr = NULL;
+
+ fres = xenforeignmemory_map_resource(
+ fmem, guest_domid, XENMEM_resource_grant_table,
+ XENMEM_resource_grant_table_id_shared, 0, 1, &addr,
+ PROT_READ | PROT_WRITE, 0);
+ if ( !fres )
+ {
+ if ( errno == EOPNOTSUPP )
+ return is_hvm ?
+ compat_gnttab_hvm_seed(xch, guest_domid,
+ console_gmfn, xenstore_gmfn,
+ console_domid, xenstore_domid) :
+ compat_gnttab_seed(xch, guest_domid,
+ console_gmfn, xenstore_gmfn,
+ console_domid, xenstore_domid);
+
+ xc_dom_panic(xch, XC_INTERNAL_ERROR,
+ "%s: failed to acquire grant table "
+ "[errno=%d]\n",
+ __FUNCTION__, errno);
+ return -1;
}
+
+ xc_dom_set_gnttab_entry(xch, addr, GNTTAB_RESERVED_CONSOLE,
+ guest_domid, console_domid, console_gmfn);
+ xc_dom_set_gnttab_entry(xch, addr, GNTTAB_RESERVED_XENSTORE,
+ guest_domid, xenstore_domid, xenstore_gmfn);
+
+ xenforeignmemory_unmap_resource(fmem, fres);
+
+ return 0;
+}
+
+int xc_dom_gnttab_init(struct xc_dom_image *dom)
+{
+ bool is_hvm = xc_dom_translated(dom);
+ xen_pfn_t console_gmfn = xc_dom_p2m(dom, dom->console_pfn);
+ xen_pfn_t xenstore_gmfn = xc_dom_p2m(dom, dom->xenstore_pfn);
+
+ return xc_dom_gnttab_seed(dom->xch, dom->guest_domid, is_hvm,
+ console_gmfn, xenstore_gmfn,
+ dom->console_domid, dom->xenstore_domid);
}
/*
diff --git a/tools/libxc/xc_sr_restore_x86_hvm.c b/tools/libxc/xc_sr_restore_x86_hvm.c
index 227c48553e..4765a52f33 100644
--- a/tools/libxc/xc_sr_restore_x86_hvm.c
+++ b/tools/libxc/xc_sr_restore_x86_hvm.c
@@ -216,11 +216,11 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
return rc;
}
- rc = xc_dom_gnttab_hvm_seed(xch, ctx->domid,
- ctx->restore.console_gfn,
- ctx->restore.xenstore_gfn,
- ctx->restore.console_domid,
- ctx->restore.xenstore_domid);
+ rc = xc_dom_gnttab_seed(xch, ctx->domid, true,
+ ctx->restore.console_gfn,
+ ctx->restore.xenstore_gfn,
+ ctx->restore.console_domid,
+ ctx->restore.xenstore_domid);
if ( rc )
{
PERROR("Failed to seed grant table");
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index d81dfdcca6..a2dbf85157 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -1105,7 +1105,7 @@ static int x86_pv_stream_complete(struct xc_sr_context *ctx)
if ( rc )
return rc;
- rc = xc_dom_gnttab_seed(xch, ctx->domid,
+ rc = xc_dom_gnttab_seed(xch, ctx->domid, false,
ctx->restore.console_gfn,
ctx->restore.xenstore_gfn,
ctx->restore.console_domid,
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 3cfe0d4808..c8a1dc7fd5 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -905,7 +905,6 @@ static int hvm_build_set_params(xc_interface *handle, uint32_t domid,
*store_mfn = str_mfn;
*console_mfn = cons_mfn;
- xc_dom_gnttab_hvm_seed(handle, domid, *console_mfn, *store_mfn, console_domid, store_domid);
return 0;
}
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index fc19ee0741..5ade12762a 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -800,9 +800,9 @@ static PyObject *pyxc_gnttab_hvm_seed(XcObject *self,
&console_domid, &xenstore_domid) )
return NULL;
- if ( xc_dom_gnttab_hvm_seed(self->xc_handle, dom,
- console_gmfn, xenstore_gmfn,
- console_domid, xenstore_domid) != 0 )
+ if ( xc_dom_gnttab_seed(self->xc_handle, dom, true,
+ console_gmfn, xenstore_gmfn,
+ console_domid, xenstore_domid) != 0 )
return pyxc_error_to_exception(self->xc_handle);
return Py_None;
--
2.11.0
* Re: [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table
2018-08-06 12:54 ` [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table Paul Durrant
@ 2018-08-07 13:38 ` Jan Beulich
2018-08-07 14:14 ` Paul Durrant
0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2018-08-07 13:38 UTC (permalink / raw)
To: Paul Durrant
Cc: Stefano Stabellini, Wei Liu, George Dunlap, Andrew Cooper,
Ian Jackson, Tim Deegan, xen-devel
>>> On 06.08.18 at 14:54, <paul.durrant@citrix.com> wrote:
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -3860,6 +3860,38 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
> }
> #endif
>
> +/* caller must hold read or write lock */
> +static int gnttab_get_status_frame_mfn(struct domain *d,
> + unsigned long idx, mfn_t *mfn)
> +{
> + struct grant_table *gt = d->grant_table;
> +
> + if ( idx >= nr_status_frames(gt) )
> + return -EINVAL;
> +
> + *mfn = _mfn(virt_to_mfn(gt->status[idx]));
> + return 0;
> +}
> +
> +/* caller must hold write lock */
> +static int gnttab_get_shared_frame_mfn(struct domain *d,
> + unsigned long idx, mfn_t *mfn)
> +{
> + struct grant_table *gt = d->grant_table;
> +
> + if ( gt->gt_version == 0 )
> + gt->gt_version = 1;
> +
> + if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
> + gnttab_grow_table(d, idx + 1);
I guess I had commented on this before, but it has been a while:
While I realize that you're just moving code, I dislike the resulting
asymmetry between the two functions: Why would the former
not want to grow the grant table? And why would a version
adjustment be needed (only) here (I've put "only" in parentheses
because it's not obvious to me why this is needed)?
And this asymmetry would surely be coming as surprise at least
to callers of the new interface you add.
> +int gnttab_get_status_frame(struct domain *d, unsigned long idx,
> + mfn_t *mfn)
> +{
> + struct grant_table *gt = d->grant_table;
> + int rc;
> +
> + grant_read_lock(gt);
> + rc = gnttab_get_status_frame_mfn(d, idx, mfn);
> + grant_read_unlock(gt);
> +
> + return rc;
> +}
Along the lines of what gnttab_map_frame() does, I'd expect a
check for gt_version to be 2 here. Or well, maybe that's implicit
by there not being any status entries/pages for version 1 tables.
But it would become a requirement if gnttab_grow_table() was to
be called from gnttab_get_status_frame_mfn().
> @@ -982,6 +983,54 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
> return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
> }
>
> +static int acquire_grant_table(struct domain *d, unsigned int id,
> + unsigned long frame,
> + unsigned int nr_frames,
> + xen_pfn_t mfn_list[])
> +{
> + unsigned int i = nr_frames;
> +
> + /*
> + * FIXME: It is not currently safe to map grant status frames if they
> + * will be inserted into the caller's P2M, because these
> + * insertions are not yet properly reference counted.
> + * This restriction can be removed when appropriate reference
> + * counting is added.
> + */
> + if ( paging_mode_translate(current->domain) &&
> + (id == XENMEM_resource_grant_table_id_status) )
> + return -EOPNOTSUPP;
Would you mind reminding me why the P2M insertions are safe for
the "ordinary" (shared) grant frames? I'm puzzled because P2M
management in general lacks refcounting, yet I have the vague
feeling that we had discussed this before and there was a reason.
(If there is, perhaps slightly extending the comment above would
be helpful.)
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -611,16 +611,21 @@ struct xen_mem_acquire_resource {
> uint16_t type;
>
> #define XENMEM_resource_ioreq_server 0
> +#define XENMEM_resource_grant_table 1
>
> /*
> * IN - a type-specific resource identifier, which must be zero
> * unless stated otherwise.
> *
> * type == XENMEM_resource_ioreq_server -> id == ioreq server id
> + * type == XENMEM_resource_grant_table -> id defined below
> */
> uint32_t id;
> - /*
> - * IN/OUT - As an IN parameter number of frames of the resource
> +
> +#define XENMEM_resource_grant_table_id_shared 0
> +#define XENMEM_resource_grant_table_id_status 1
> +
> + /* IN/OUT - As an IN parameter number of frames of the resource
Please don't break previously correct comment style here.
Jan
* Re: [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table
2018-08-07 13:38 ` Jan Beulich
@ 2018-08-07 14:14 ` Paul Durrant
2018-08-07 14:36 ` Jan Beulich
0 siblings, 1 reply; 7+ messages in thread
From: Paul Durrant @ 2018-08-07 14:14 UTC (permalink / raw)
To: 'Jan Beulich'
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Tim (Xen.org),
George Dunlap, Ian Jackson, xen-devel
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 07 August 2018 14:38
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Stefano Stabellini
> <sstabellini@kernel.org>; xen-devel <xen-devel@lists.xenproject.org>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Tim (Xen.org)
> <tim@xen.org>
> Subject: Re: [PATCH v20 1/2] common: add a new mappable resource type:
> XENMEM_resource_grant_table
>
> >>> On 06.08.18 at 14:54, <paul.durrant@citrix.com> wrote:
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -3860,6 +3860,38 @@ int mem_sharing_gref_to_gfn(struct
> grant_table *gt, grant_ref_t ref,
> > }
> > #endif
> >
> > +/* caller must hold read or write lock */
> > +static int gnttab_get_status_frame_mfn(struct domain *d,
> > + unsigned long idx, mfn_t *mfn)
> > +{
> > + struct grant_table *gt = d->grant_table;
> > +
> > + if ( idx >= nr_status_frames(gt) )
> > + return -EINVAL;
> > +
> > + *mfn = _mfn(virt_to_mfn(gt->status[idx]));
> > + return 0;
> > +}
> > +
> > +/* caller must hold write lock */
> > +static int gnttab_get_shared_frame_mfn(struct domain *d,
> > + unsigned long idx, mfn_t *mfn)
> > +{
> > + struct grant_table *gt = d->grant_table;
> > +
> > + if ( gt->gt_version == 0 )
> > + gt->gt_version = 1;
> > +
> > + if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
> > + gnttab_grow_table(d, idx + 1);
>
> I guess I had commented on this before, but it has been a while:
> While I realize that you're just moving code, I dislike the resulting
> asymmetry between the two functions: Why would the former
> not want to grow the grant table? And why would a version
> adjustment be needed (only) here (I've put "only" in parentheses
> because it's not obvious to me why this is needed)?
>
> And this asymmetry would surely be coming as surprise at least
> to callers of the new interface you add.
>
I'm happy to have get_status_frame() grow the table too, but it does mean a change in behaviour for callers of gnttab_map_frame(). If that's not a problem then I'll make the change.
The version adjustment may be a hangover from the previous code base. Juergen's per-domain gnttab adjustments have probably rendered this unnecessary, so I'll drop it as long as nothing breaks.
> > +int gnttab_get_status_frame(struct domain *d, unsigned long idx,
> > + mfn_t *mfn)
> > +{
> > + struct grant_table *gt = d->grant_table;
> > + int rc;
> > +
> > + grant_read_lock(gt);
> > + rc = gnttab_get_status_frame_mfn(d, idx, mfn);
> > + grant_read_unlock(gt);
> > +
> > + return rc;
> > +}
>
> Along the lines of what gnttab_map_frame() does, I'd expect a
> check for gt_version to be 2 here. Or well, maybe that's implicit
> by there not being any status entries/pages for version 1 tables.
Yes, that should be the case.
> But it would become a requirement if gnttab_grow_table() was to
> be called from gnttab_get_status_frame_mfn().
>
Yes, so making that change will certainly add complexity.
> > @@ -982,6 +983,54 @@ static long xatp_permission_check(struct domain
> *d, unsigned int space)
> > return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
> > }
> >
> > +static int acquire_grant_table(struct domain *d, unsigned int id,
> > + unsigned long frame,
> > + unsigned int nr_frames,
> > + xen_pfn_t mfn_list[])
> > +{
> > + unsigned int i = nr_frames;
> > +
> > + /*
> > + * FIXME: It is not currently safe to map grant status frames if they
> > + * will be inserted into the caller's P2M, because these
> > + * insertions are not yet properly reference counted.
> > + * This restriction can be removed when appropriate reference
> > + * counting is added.
> > + */
> > + if ( paging_mode_translate(current->domain) &&
> > + (id == XENMEM_resource_grant_table_id_status) )
> > + return -EOPNOTSUPP;
>
> Would you mind reminding me why the P2M insertions are safe for
> the "ordinary" (shared) grant frames? I'm puzzled because P2M
> management in general lacks refcounting, yet I have the vague
> feeling that we had discussed this before and there was a reason.
> (If there is, perhaps slightly extending the comment above would
> be helpful.)
My memory is hazy too, so I'll have to mine back through the email traffic. Without having done that, I can only assume it is because there is the possibility of mapping the status frames and then setting the version back to 1, thus causing the status frames to evaporate whilst still being present in the P2M. This clearly won't happen to the shared frames.
>
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -611,16 +611,21 @@ struct xen_mem_acquire_resource {
> > uint16_t type;
> >
> > #define XENMEM_resource_ioreq_server 0
> > +#define XENMEM_resource_grant_table 1
> >
> > /*
> > * IN - a type-specific resource identifier, which must be zero
> > * unless stated otherwise.
> > *
> > * type == XENMEM_resource_ioreq_server -> id == ioreq server id
> > + * type == XENMEM_resource_grant_table -> id defined below
> > */
> > uint32_t id;
> > - /*
> > - * IN/OUT - As an IN parameter number of frames of the resource
> > +
> > +#define XENMEM_resource_grant_table_id_shared 0
> > +#define XENMEM_resource_grant_table_id_status 1
> > +
> > + /* IN/OUT - As an IN parameter number of frames of the resource
>
> Please don't break previously correct comment style here.
>
Oh yes. Missed that bit of rebase breakage.
Paul
> Jan
* Re: [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table
2018-08-07 14:14 ` Paul Durrant
@ 2018-08-07 14:36 ` Jan Beulich
2018-08-07 14:43 ` Paul Durrant
0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2018-08-07 14:36 UTC (permalink / raw)
To: Paul Durrant
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Tim Deegan,
george.dunlap, Ian Jackson, xen-devel
>>> On 07.08.18 at 16:14, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 07 August 2018 14:38
>>
>> >>> On 06.08.18 at 14:54, <paul.durrant@citrix.com> wrote:
>> > --- a/xen/common/grant_table.c
>> > +++ b/xen/common/grant_table.c
>> > @@ -3860,6 +3860,38 @@ int mem_sharing_gref_to_gfn(struct
>> grant_table *gt, grant_ref_t ref,
>> > }
>> > #endif
>> >
>> > +/* caller must hold read or write lock */
>> > +static int gnttab_get_status_frame_mfn(struct domain *d,
>> > + unsigned long idx, mfn_t *mfn)
>> > +{
>> > + struct grant_table *gt = d->grant_table;
>> > +
>> > + if ( idx >= nr_status_frames(gt) )
>> > + return -EINVAL;
>> > +
>> > + *mfn = _mfn(virt_to_mfn(gt->status[idx]));
>> > + return 0;
>> > +}
>> > +
>> > +/* caller must hold write lock */
>> > +static int gnttab_get_shared_frame_mfn(struct domain *d,
>> > + unsigned long idx, mfn_t *mfn)
>> > +{
>> > + struct grant_table *gt = d->grant_table;
>> > +
>> > + if ( gt->gt_version == 0 )
>> > + gt->gt_version = 1;
>> > +
>> > + if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
>> > + gnttab_grow_table(d, idx + 1);
>>
>> I guess I had commented on this before, but it has been a while:
>> While I realize that you're just moving code, I dislike the resulting
>> asymmetry between the two functions: Why would the former
>> not want to grow the grant table? And why would a version
>> adjustment be needed (only) here (I've put "only" in parentheses
>> because it's not obvious to me why this is needed)?
>>
>> And this asymmetry would surely be coming as surprise at least
>> to callers of the new interface you add.
>
> I'm happy to have get_status_frame() grow the table too but it does mean a
> change in behaviour to callers of gnttab_map_frame(). If that's not a problem
> then I'll make the change.
I'd say it was a mistake there before. The (slight) difficulty (which
probably made whoever wrote this originally ignore this aspect) is
figuring out how much to grow the table.
>> > +int gnttab_get_status_frame(struct domain *d, unsigned long idx,
>> > + mfn_t *mfn)
>> > +{
>> > + struct grant_table *gt = d->grant_table;
>> > + int rc;
>> > +
>> > + grant_read_lock(gt);
>> > + rc = gnttab_get_status_frame_mfn(d, idx, mfn);
>> > + grant_read_unlock(gt);
>> > +
>> > + return rc;
>> > +}
>>
>> Along the lines of what gnttab_map_frame() does, I'd expect a
>> check for gt_version to be 2 here. Or well, maybe that's implicit
>> by there not being any status entries/pages for version 1 tables.
>
> Yes, that should be the case.
>
>> But it would become a requirement if gnttab_grow_table() was to
>> be called from gnttab_get_status_frame_mfn().
>>
>
> Yes, so making that change will certainly add complexity.
Hmm, yes - a single if().
Jan
* Re: [PATCH v20 1/2] common: add a new mappable resource type: XENMEM_resource_grant_table
2018-08-07 14:36 ` Jan Beulich
@ 2018-08-07 14:43 ` Paul Durrant
0 siblings, 0 replies; 7+ messages in thread
From: Paul Durrant @ 2018-08-07 14:43 UTC (permalink / raw)
To: 'Jan Beulich'
Cc: Stefano Stabellini, Wei Liu, Andrew Cooper, Tim (Xen.org),
George Dunlap, Ian Jackson, xen-devel
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 07 August 2018 15:37
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> devel <xen-devel@lists.xenproject.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: RE: [PATCH v20 1/2] common: add a new mappable resource type:
> XENMEM_resource_grant_table
>
> >>> On 07.08.18 at 16:14, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 07 August 2018 14:38
> >>
> >> >>> On 06.08.18 at 14:54, <paul.durrant@citrix.com> wrote:
> >> > --- a/xen/common/grant_table.c
> >> > +++ b/xen/common/grant_table.c
> >> > @@ -3860,6 +3860,38 @@ int mem_sharing_gref_to_gfn(struct
> >> grant_table *gt, grant_ref_t ref,
> >> > }
> >> > #endif
> >> >
> >> > +/* caller must hold read or write lock */
> >> > +static int gnttab_get_status_frame_mfn(struct domain *d,
> >> > + unsigned long idx, mfn_t *mfn)
> >> > +{
> >> > + struct grant_table *gt = d->grant_table;
> >> > +
> >> > + if ( idx >= nr_status_frames(gt) )
> >> > + return -EINVAL;
> >> > +
> >> > + *mfn = _mfn(virt_to_mfn(gt->status[idx]));
> >> > + return 0;
> >> > +}
> >> > +
> >> > +/* caller must hold write lock */
> >> > +static int gnttab_get_shared_frame_mfn(struct domain *d,
> >> > + unsigned long idx, mfn_t *mfn)
> >> > +{
> >> > + struct grant_table *gt = d->grant_table;
> >> > +
> >> > + if ( gt->gt_version == 0 )
> >> > + gt->gt_version = 1;
> >> > +
> >> > + if ( (idx >= nr_grant_frames(gt)) && (idx < gt->max_grant_frames) )
> >> > + gnttab_grow_table(d, idx + 1);
> >>
> >> I guess I had commented on this before, but it has been a while:
> >> While I realize that you're just moving code, I dislike the resulting
> >> asymmetry between the two functions: Why would the former
> >> not want to grow the grant table? And why would a version
> >> adjustment be needed (only) here (I've put "only" in parentheses
> >> because it's not obvious to me why this is needed)?
> >>
> >> And this asymmetry would surely be coming as surprise at least
> >> to callers of the new interface you add.
> >
> > I'm happy to have get_status_frame() grow the table too but it does mean
> a
> > change in behaviour to callers of gnttab_map_frame(). If that's not a
> problem
> > then I'll make the change.
>
> I'd say it was a mistake there before. The (slight) difficulty (which
> probably made whoever wrote this originally ignore this aspect) is
> figuring out how much to grow the table.
>
Yes, that's probably why it wasn't done.
> >> > +int gnttab_get_status_frame(struct domain *d, unsigned long idx,
> >> > + mfn_t *mfn)
> >> > +{
> >> > + struct grant_table *gt = d->grant_table;
> >> > + int rc;
> >> > +
> >> > + grant_read_lock(gt);
> >> > + rc = gnttab_get_status_frame_mfn(d, idx, mfn);
> >> > + grant_read_unlock(gt);
> >> > +
> >> > + return rc;
> >> > +}
> >>
> >> Along the lines of what gnttab_map_frame() does, I'd expect a
> >> check for gt_version to be 2 here. Or well, maybe that's implicit
> >> by there not being any status entries/pages for version 1 tables.
> >
> > Yes, that should be the case.
> >
> >> But it would become a requirement if gnttab_grow_table() was to
> >> be called from gnttab_get_status_frame_mfn().
> >>
> >
> > Yes, so making that change will certainly add complexity.
>
> Hmm, yes - a single if().
Along with the calculation to work out how many frames to grow the grant table by to get to the necessary status frame. Not massive complexity... but more than there is now.
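For illustration, a minimal sketch of what that calculation might look like, assuming a status_to_grant_frames() helper as the inverse of grant_to_status_frames() (the helper and the exact overflow handling are assumptions, not part of the posted series). Note that growing also means the caller would need the write lock rather than the read lock:

/* caller must hold write lock (growing is no longer read-only) */
static int gnttab_get_status_frame_mfn(struct domain *d,
                                       unsigned long idx, mfn_t *mfn)
{
    struct grant_table *gt = d->grant_table;

    /* Status frames only exist for v2 tables. */
    if ( gt->gt_version != 2 )
        return -EINVAL;

    if ( idx >= nr_status_frames(gt) )
    {
        unsigned long nr_status = idx + 1;
        /* Assumed helper: how many shared frames must exist before
         * 'nr_status' status frames do (inverse of
         * grant_to_status_frames()). */
        unsigned long nr_grant = status_to_grant_frames(nr_status);

        if ( nr_grant <= gt->max_grant_frames )
            gnttab_grow_table(d, nr_grant);

        /* Re-check: did the grow provide the frame we need? */
        if ( idx >= nr_status_frames(gt) )
            return -EINVAL;
    }

    *mfn = _mfn(virt_to_mfn(gt->status[idx]));
    return 0;
}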
Paul
>
> Jan
>