From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
Julien Grall <julien.grall@arm.com>,
Paul Durrant <paul.durrant@citrix.com>,
Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 12/13] x86: add iommu_ops to modify and flush IOMMU mappings
Date: Sat, 7 Jul 2018 12:05:25 +0100
Message-ID: <20180707110526.35822-13-paul.durrant@citrix.com>
In-Reply-To: <20180707110526.35822-1-paul.durrant@citrix.com>
This patch adds iommu_ops to add (map) or remove (unmap) frames in the
domain's IOMMU mappings, and an iommu_op to synchronize (flush) those
manipulations with the hardware.
Mappings added by the map operation are tracked and only those mappings
may be removed by a subsequent unmap operation. Frames are specified by the
owning domain and GFN. It is, of course, permissible for a domain to map
its own frames using DOMID_SELF.
NOTE: The owning domain and GFN must also be specified in the unmap
operation, as well as the BFN, so that they can be cross-checked
against the existing mapping.
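For context, a minimal caller-side sketch of how a domain might drive these
operations. This is illustrative only: the structure layouts are abridged from
the hunk to xen/include/public/iommu_op.h below, xen_bfn_t/xen_pfn_t are
assumed to be 64-bit here, and xen_hypercall_iommu_op() is a hypothetical
single-op wrapper standing in for the buffer-based hypercall introduced
earlier in this series (patch 06/13).

/*
 * Illustrative sketch only (not part of the patch): map one of the
 * calling domain's own frames at a chosen BFN, then flush the IOTLB.
 * xen_hypercall_iommu_op() is a hypothetical wrapper that submits a
 * single op; the real hypercall takes an array of op buffers and a
 * count. Struct layouts are abridged from the public header hunk below.
 */
#include <stdint.h>
#include <string.h>

typedef uint64_t xen_bfn_t;
typedef uint64_t xen_pfn_t;
typedef uint16_t domid_t;

#define DOMID_SELF               0x7ff0U
#define XEN_IOMMUOP_map          3
#define XEN_IOMMUOP_unmap        4
#define XEN_IOMMUOP_flush        5
#define XEN_IOMMUOP_map_readonly (1u << 0)

struct xen_iommu_op_map {
    domid_t domid;     /* IN - owner of the page to be mapped */
    uint16_t flags;
    uint32_t pad;
    xen_bfn_t bfn;     /* IN - bus frame number to populate */
    xen_pfn_t gfn;     /* IN - guest frame number of the page */
};

struct xen_iommu_op {
    uint16_t op;
    uint16_t pad;
    int32_t status;    /* OUT - 0 on success, negative errno otherwise */
    union {
        struct xen_iommu_op_map map;
        /* ... unmap, query_reserved, etc. ... */
    } u;
};

/* Hypothetical single-op hypercall wrapper; returns 0 or -errno. */
int xen_hypercall_iommu_op(struct xen_iommu_op *op);

static int map_own_frame(xen_bfn_t bfn, xen_pfn_t gfn, int readonly)
{
    struct xen_iommu_op op;
    int rc;

    memset(&op, 0, sizeof(op));
    op.op = XEN_IOMMUOP_map;
    op.u.map.domid = DOMID_SELF;  /* mapping one of our own frames */
    op.u.map.flags = readonly ? XEN_IOMMUOP_map_readonly : 0;
    op.u.map.bfn = bfn;
    op.u.map.gfn = gfn;

    rc = xen_hypercall_iommu_op(&op);
    if ( rc )
        return rc;
    if ( op.status )
        return op.status;

    /*
     * Map and unmap deliberately do not flush the IOTLB, so the new
     * mapping is only guaranteed visible to devices after an explicit
     * flush op. A caller would normally batch several map/unmap ops
     * and flush once.
     */
    memset(&op, 0, sizeof(op));
    op.op = XEN_IOMMUOP_flush;
    rc = xen_hypercall_iommu_op(&op);

    return rc ? rc : op.status;
}

The explicit flush step mirrors the hypervisor side, where
iommu_dont_flush_iotlb is set around the map and unmap handlers.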
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
v2:
- Heavily re-worked in v2, including explicit tracking of mappings.
This avoids the need to clear non-reserved mappings from the IOMMU
at start of day, which would be prohibitively slow on a large host.
---
xen/arch/x86/iommu_op.c | 134 ++++++++++++++++++++++++++++++++++++++++++
xen/include/public/iommu_op.h | 43 ++++++++++++++
xen/include/xlat.lst | 2 +
3 files changed, 179 insertions(+)
diff --git a/xen/arch/x86/iommu_op.c b/xen/arch/x86/iommu_op.c
index e91d39b6c4..7174838a6c 100644
--- a/xen/arch/x86/iommu_op.c
+++ b/xen/arch/x86/iommu_op.c
@@ -113,6 +113,116 @@ static int iommu_op_enable_modification(void)
return 0;
}
+static int iommuop_map(struct xen_iommu_op_map *op)
+{
+ struct domain *d, *currd = current->domain;
+ struct domain_iommu *iommu = dom_iommu(currd);
+ bool readonly = op->flags & XEN_IOMMUOP_map_readonly;
+ bfn_t bfn = _bfn(op->bfn);
+ struct page_info *page;
+ unsigned int prot;
+ int rc, ignore;
+
+ if ( op->pad || (op->flags & ~XEN_IOMMUOP_map_readonly) )
+ return -EINVAL;
+
+ if ( !iommu->iommu_op_ranges )
+ return -EOPNOTSUPP;
+
+ /* Check whether the specified BFN falls in a reserved region */
+ if ( rangeset_contains_singleton(iommu->reserved_ranges, bfn_x(bfn)) )
+ return -EINVAL;
+
+ d = rcu_lock_domain_by_any_id(op->domid);
+ if ( !d )
+ return -ESRCH;
+
+ rc = get_paged_gfn(d, _gfn(op->gfn), readonly, NULL, &page);
+ if ( rc )
+ goto unlock;
+
+ prot = IOMMUF_readable;
+ if ( !readonly )
+ prot |= IOMMUF_writable;
+
+ rc = -EIO;
+ if ( iommu_map_page(currd, bfn, page_to_mfn(page), prot) )
+ goto release;
+
+ rc = rangeset_add_singleton(iommu->iommu_op_ranges, bfn_x(bfn));
+ if ( rc )
+ goto unmap;
+
+ rc = 0;
+ goto unlock; /* retain mapping and reference */
+
+ unmap:
+ ignore = iommu_unmap_page(currd, bfn);
+
+ release:
+ put_page(page);
+
+ unlock:
+ rcu_unlock_domain(d);
+ return rc;
+}
+
+static int iommuop_unmap(struct xen_iommu_op_unmap *op)
+{
+ struct domain *d, *currd = current->domain;
+ struct domain_iommu *iommu = dom_iommu(currd);
+ bfn_t bfn = _bfn(op->bfn);
+ mfn_t mfn;
+ unsigned int prot;
+ struct page_info *page;
+ int rc;
+
+ if ( op->pad0 || op->pad1 )
+ return -EINVAL;
+
+ if ( !iommu->iommu_op_ranges )
+ return -EOPNOTSUPP;
+
+ if ( !rangeset_contains_singleton(iommu->iommu_op_ranges, bfn_x(bfn)) ||
+ iommu_lookup_page(currd, bfn, &mfn, &prot) ||
+ !mfn_valid(mfn) )
+ return -ENOENT;
+
+ d = rcu_lock_domain_by_any_id(op->domid);
+ if ( !d )
+ return -ESRCH;
+
+ rc = get_paged_gfn(d, _gfn(op->gfn), !(prot & IOMMUF_writable), NULL,
+ &page);
+ if ( rc )
+ goto unlock;
+
+ put_page(page); /* release extra reference just taken */
+
+ rc = -EINVAL;
+ if ( !mfn_eq(page_to_mfn(page), mfn) )
+ goto unlock;
+
+ put_page(page); /* release reference taken in map */
+
+ rc = rangeset_remove_singleton(iommu->iommu_op_ranges, bfn_x(bfn));
+ if ( rc )
+ goto unlock;
+
+ if ( iommu_unmap_page(currd, bfn) )
+ rc = -EIO;
+
+ unlock:
+ rcu_unlock_domain(d);
+
+ return rc;
+}
+
+static int iommuop_flush(void)
+{
+ return !iommu_iotlb_flush_all(current->domain) ? 0 : -EIO;
+}
+
static void iommu_op(xen_iommu_op_t *op)
{
switch ( op->op )
@@ -125,6 +235,22 @@ static void iommu_op(xen_iommu_op_t *op)
op->status = iommu_op_enable_modification();
break;
+ case XEN_IOMMUOP_map:
+ this_cpu(iommu_dont_flush_iotlb) = 1;
+ op->status = iommuop_map(&op->u.map);
+ this_cpu(iommu_dont_flush_iotlb) = 0;
+ break;
+
+ case XEN_IOMMUOP_unmap:
+ this_cpu(iommu_dont_flush_iotlb) = 1;
+ op->status = iommuop_unmap(&op->u.unmap);
+ this_cpu(iommu_dont_flush_iotlb) = 0;
+ break;
+
+ case XEN_IOMMUOP_flush:
+ op->status = iommuop_flush();
+ break;
+
default:
op->status = -EOPNOTSUPP;
break;
@@ -138,6 +264,9 @@ int do_one_iommu_op(xen_iommu_op_buf_t *buf)
static const size_t op_size[] = {
[XEN_IOMMUOP_query_reserved] = sizeof(struct xen_iommu_op_query_reserved),
[XEN_IOMMUOP_enable_modification] = 0,
+ [XEN_IOMMUOP_map] = sizeof(struct xen_iommu_op_map),
+ [XEN_IOMMUOP_unmap] = sizeof(struct xen_iommu_op_unmap),
+ [XEN_IOMMUOP_flush] = 0,
};
offset = offsetof(struct xen_iommu_op, u);
@@ -222,6 +351,9 @@ int compat_one_iommu_op(compat_iommu_op_buf_t *buf)
static const size_t op_size[] = {
[XEN_IOMMUOP_query_reserved] = sizeof(struct compat_iommu_op_query_reserved),
[XEN_IOMMUOP_enable_modification] = 0,
+ [XEN_IOMMUOP_map] = sizeof(struct compat_iommu_op_map),
+ [XEN_IOMMUOP_unmap] = sizeof(struct compat_iommu_op_unmap),
+ [XEN_IOMMUOP_flush] = 0,
};
xen_iommu_op_t nat;
unsigned int u;
@@ -254,6 +386,8 @@ int compat_one_iommu_op(compat_iommu_op_buf_t *buf)
* we need to fix things up here.
*/
#define XLAT_iommu_op_u_query_reserved XEN_IOMMUOP_query_reserved
+#define XLAT_iommu_op_u_map XEN_IOMMUOP_map
+#define XLAT_iommu_op_u_unmap XEN_IOMMUOP_unmap
u = cmp.op;
#define XLAT_iommu_op_query_reserved_HNDL_ranges(_d_, _s_) \
diff --git a/xen/include/public/iommu_op.h b/xen/include/public/iommu_op.h
index 5a3148c247..737e2c8cfe 100644
--- a/xen/include/public/iommu_op.h
+++ b/xen/include/public/iommu_op.h
@@ -67,6 +67,47 @@ struct xen_iommu_op_query_reserved {
*/
#define XEN_IOMMUOP_enable_modification 2
+/*
+ * XEN_IOMMUOP_map: Map a guest page in the IOMMU.
+ */
+#define XEN_IOMMUOP_map 3
+
+struct xen_iommu_op_map {
+ /* IN - The domid of the guest */
+ domid_t domid;
+ uint16_t flags;
+
+#define _XEN_IOMMUOP_map_readonly 0
+#define XEN_IOMMUOP_map_readonly (1 << (_XEN_IOMMUOP_map_readonly))
+
+ uint32_t pad;
+ /* IN - The IOMMU frame number which will hold the new mapping */
+ xen_bfn_t bfn;
+ /* IN - The guest frame number of the page to be mapped */
+ xen_pfn_t gfn;
+};
+
+/*
+ * XEN_IOMMUOP_unmap: Remove a mapping in the IOMMU.
+ */
+#define XEN_IOMMUOP_unmap 4
+
+struct xen_iommu_op_unmap {
+ /* IN - The domid of the guest */
+ domid_t domid;
+ uint16_t pad0;
+ uint32_t pad1;
+ /* IN - The IOMMU frame number which holds the mapping to be removed */
+ xen_bfn_t bfn;
+ /* IN - The guest frame number of the page that is mapped */
+ xen_pfn_t gfn;
+};
+
+/*
+ * XEN_IOMMUOP_flush: Flush the IOMMU TLB.
+ */
+#define XEN_IOMMUOP_flush 5
+
struct xen_iommu_op {
uint16_t op; /* op type */
uint16_t pad;
@@ -74,6 +115,8 @@ struct xen_iommu_op {
/* 0 for success otherwise, negative errno */
union {
struct xen_iommu_op_query_reserved query_reserved;
+ struct xen_iommu_op_map map;
+ struct xen_iommu_op_unmap unmap;
} u;
};
typedef struct xen_iommu_op xen_iommu_op_t;
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 93bcf4b4d0..ed50216394 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -79,7 +79,9 @@
? vcpu_hvm_x86_64 hvm/hvm_vcpu.h
! iommu_op iommu_op.h
! iommu_op_buf iommu_op.h
+! iommu_op_map iommu_op.h
! iommu_op_query_reserved iommu_op.h
+! iommu_op_unmap iommu_op.h
! iommu_reserved_range iommu_op.h
? kexec_exec kexec.h
! kexec_image kexec.h
--
2.11.0