From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
Paul Durrant <paul.durrant@citrix.com>,
Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v5 07/15] iommu: track reserved ranges using a rangeset
Date: Fri, 3 Aug 2018 18:22:12 +0100
Message-ID: <20180803172220.1657-8-paul.durrant@citrix.com>
In-Reply-To: <20180803172220.1657-1-paul.durrant@citrix.com>

Ranges that should be considered reserved in the IOMMU are not necessarily
limited to RMRRs. If iommu_inclusive_mapping is set then any frame number
falling within an E820 reserved region should also be considered as
reserved in the IOMMU.

This patch adds a rangeset to the domain_iommu structure that is then used
to track all reserved ranges. This will be needed by a subsequent patch
to test whether it is safe to modify a particular IOMMU entry.
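
For illustration, the check anticipated by that subsequent patch could look
roughly like the sketch below. The helper and its call site are hypothetical;
rangeset_contains_singleton() is the existing rangeset primitive, and bfn_t /
bfn_x() come from earlier patches in this series.

    /* Sketch only: refuse to modify an IOMMU entry inside a reserved range. */
    static int check_bfn_unreserved(struct domain *d, bfn_t bfn)
    {
        const struct domain_iommu *hd = dom_iommu(d);

        /* Ranges recorded at initialization time must stay untouched. */
        if ( rangeset_contains_singleton(hd->reserved_ranges, bfn_x(bfn)) )
            return -EPERM;

        return 0;
    }
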
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>
v2:
- New in v2.
---
xen/drivers/passthrough/iommu.c | 10 +++++++++-
xen/drivers/passthrough/vtd/iommu.c | 20 +++++++++++++-------
xen/drivers/passthrough/vtd/x86/vtd.c | 17 ++++++++++++++++-
xen/include/xen/iommu.h | 6 ++++++
4 files changed, 44 insertions(+), 9 deletions(-)
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 21e6886a3f..b10a37e5d7 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -147,6 +147,10 @@ int iommu_domain_init(struct domain *d)
if ( !iommu_enabled )
return 0;
+ hd->reserved_ranges = rangeset_new(d, NULL, 0);
+ if ( !hd->reserved_ranges )
+ return -ENOMEM;
+
hd->platform_ops = iommu_get_ops();
return hd->platform_ops->init(d);
}
@@ -248,12 +252,16 @@ int iommu_construct(struct domain *d)
void iommu_domain_destroy(struct domain *d)
{
- if ( !iommu_enabled || !dom_iommu(d)->platform_ops )
+ const struct domain_iommu *hd = dom_iommu(d);
+
+ if ( !iommu_enabled || !hd->platform_ops )
return;
iommu_teardown(d);
arch_iommu_domain_destroy(d);
+
+ rangeset_destroy(hd->reserved_ranges);
}
int iommu_map_page(struct domain *d, bfn_t bfn, mfn_t mfn,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index c9f50f04ad..282e227414 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1910,6 +1910,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
unsigned long end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
struct mapped_rmrr *mrmrr;
struct domain_iommu *hd = dom_iommu(d);
+ int err = 0;
ASSERT(pcidevs_locked());
ASSERT(rmrr->base_address < rmrr->end_address);
@@ -1923,8 +1924,6 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
if ( mrmrr->base == rmrr->base_address &&
mrmrr->end == rmrr->end_address )
{
- int ret = 0;
-
if ( map )
{
++mrmrr->count;
@@ -1934,28 +1933,35 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
if ( --mrmrr->count )
return 0;
- while ( base_pfn < end_pfn )
+ err = rangeset_remove_range(hd->reserved_ranges,
+ base_pfn, end_pfn);
+ while ( !err && base_pfn < end_pfn )
{
if ( clear_identity_p2m_entry(d, base_pfn) )
- ret = -ENXIO;
+ err = -ENXIO;
+
base_pfn++;
}
list_del(&mrmrr->list);
xfree(mrmrr);
- return ret;
+ return err;
}
}
if ( !map )
return -ENOENT;
+ err = rangeset_add_range(hd->reserved_ranges, base_pfn, end_pfn);
+ if ( err )
+ return err;
+
while ( base_pfn < end_pfn )
{
- int err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw, flag);
-
+ err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw, flag);
if ( err )
return err;
+
base_pfn++;
}
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index 6fed4a92cb..032412b8c6 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -164,10 +164,25 @@ void __hwdom_init vtd_set_hwdom_mapping(struct domain *d)
if ( !rc )
rc = ret;
+
+ /*
+ * The only reason a reserved page would be mapped is that
+ * iommu_inclusive_mapping is set, in which case the BFN
+ * needs to be marked as reserved in the IOMMU.
+ */
+ if ( page_is_ram_type(pfn, RAM_TYPE_RESERVED) )
+ {
+ ASSERT(iommu_inclusive_mapping);
+
+ ret = rangeset_add_singleton(dom_iommu(d)->reserved_ranges,
+ bfn_x(bfn));
+ if ( !rc )
+ rc = ret;
+ }
}
if ( rc )
- printk(XENLOG_WARNING VTDPREFIX " d%d: IOMMU mapping failed: %d\n",
+ printk(XENLOG_WARNING VTDPREFIX " d%d: IOMMU mapping/reservation failed: %d\n",
d->domain_id, rc);
if (!(i & (0xfffff >> (PAGE_SHIFT - PAGE_SHIFT_4K))))
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 624784fec8..cc0be81b4e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -122,6 +122,12 @@ struct domain_iommu {
/* Features supported by the IOMMU */
DECLARE_BITMAP(features, IOMMU_FEAT_count);
+
+ /*
+ * BFN ranges that are reserved in the domain IOMMU mappings and
+ * must not be modified after initialization.
+ */
+ struct rangeset *reserved_ranges;
};
#define dom_iommu(d) (&(d)->iommu)
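
A later patch in this series exposes these reserved ranges to the guest; one
way to walk the new rangeset for that purpose is the existing
rangeset_report_ranges() helper, sketched below (the callback and its caller
are hypothetical):

    /* Sketch only: enumerate all reserved BFN ranges via a callback. */
    static int report_one_range(unsigned long s, unsigned long e, void *ctxt)
    {
        printk("reserved BFNs %lx-%lx\n", s, e);
        return 0; /* returning non-zero stops the walk */
    }

    static void report_reserved_ranges(struct domain *d)
    {
        rangeset_report_ranges(dom_iommu(d)->reserved_ranges, 0, ~0UL,
                               report_one_range, NULL);
    }
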
--
2.11.0