From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Kevin Tian <kevin.tian@intel.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Jun Nakajima <jun.nakajima@intel.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
Julien Grall <julien.grall@arm.com>,
Paul Durrant <paul.durrant@citrix.com>,
Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v5 05/15] iommu: don't domain_crash() inside iommu_map/unmap_page()
Date: Fri, 3 Aug 2018 18:22:10 +0100
Message-ID: <20180803172220.1657-6-paul.durrant@citrix.com>
In-Reply-To: <20180803172220.1657-1-paul.durrant@citrix.com>
Turn iommu_map/unmap_page() into straightforward wrappers that check the
existence of the relevant iommu_op and call through to it. This makes them
usable by the PV IOMMU code that will be delivered in subsequent patches.
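For illustration, the wrapper shape described above can be sketched as below. This is not part of the patch: the demo_* types and names are simplified stand-ins for Xen's struct domain and iommu_ops, and the real signatures differ.

```c
/*
 * Illustrative sketch only -- not part of the patch. The demo_* names
 * are simplified stand-ins for Xen's struct domain and iommu_ops.
 */
#include <stdio.h>

struct demo_domain { int domain_id; };

struct demo_iommu_ops {
    int (*map_page)(struct demo_domain *d, unsigned long bfn,
                    unsigned long mfn, unsigned int flags);
};

static const struct demo_iommu_ops *demo_ops;

/*
 * After the patch, iommu_map_page() reduces to this shape: check that
 * the op exists, call through, log on failure, and return the error to
 * the caller -- crucially, no domain_crash() here any more.
 */
static int demo_iommu_map_page(struct demo_domain *d, unsigned long bfn,
                               unsigned long mfn, unsigned int flags)
{
    int rc;

    if ( !demo_ops || !demo_ops->map_page )
        return 0;

    rc = demo_ops->map_page(d, bfn, mfn, flags);
    if ( rc )
        fprintf(stderr,
                "d%d: IOMMU mapping bfn %lx to mfn %lx failed: %d\n",
                d->domain_id, bfn, mfn, rc);

    return rc; /* the caller decides whether this is fatal */
}

/* Two toy implementations so the wrapper can be exercised. */
static int demo_map_ok(struct demo_domain *d, unsigned long bfn,
                       unsigned long mfn, unsigned int flags)
{ (void)d; (void)bfn; (void)mfn; (void)flags; return 0; }

static int demo_map_fail(struct demo_domain *d, unsigned long bfn,
                         unsigned long mfn, unsigned int flags)
{ (void)d; (void)bfn; (void)mfn; (void)flags; return -14; }

static int demo_run(int should_fail)
{
    static const struct demo_iommu_ops ok   = { demo_map_ok };
    static const struct demo_iommu_ops fail = { demo_map_fail };
    struct demo_domain d = { 1 };

    demo_ops = should_fail ? &fail : &ok;
    return demo_iommu_map_page(&d, 0x1000, 0x2000, 3);
}
```

The point of the shape is that the wrapper only reports; it never decides policy.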
Leave the decision on whether to invoke domain_crash() to the caller.
This has the added benefit that the (module/line number) message that
domain_crash() logs will point more precisely at where the problem lies.
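The caller-side pattern this introduces at each call site can be sketched as below. Again this is illustrative only, with demo_* stand-ins for Xen's is_hardware_domain() and domain_crash(), not the patch's actual code.

```c
/*
 * Illustrative sketch only -- not part of the patch. The demo_* names
 * are stand-ins for Xen's is_hardware_domain()/domain_crash().
 */
#include <stdbool.h>

struct demo_domain { int domain_id; bool is_hw; bool crashed; };

static bool demo_is_hardware_domain(const struct demo_domain *d)
{ return d->is_hw; }

static void demo_domain_crash(struct demo_domain *d)
{ d->crashed = true; }

/*
 * The pattern added at each call site: a failed IOMMU operation
 * crashes the offending domain, unless it is the hardware domain
 * (crashing dom0 would take the whole host down with it).
 */
static int demo_handle_iommu_rc(struct demo_domain *d, int rc)
{
    if ( rc && !demo_is_hardware_domain(d) )
        demo_domain_crash(d);

    return rc;
}

/* Helper to exercise the pattern for a given domain type and rc. */
static bool demo_crashes(bool is_hw, int rc)
{
    struct demo_domain d = { 1, is_hw, false };

    demo_handle_iommu_rc(&d, rc);
    return d.crashed;
}
```

Because the check now lives at the call site, the module/line reported by domain_crash() identifies the failing operation rather than the generic wrapper.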
NOTE: This patch includes one bit of clean-up in set_identity_p2m_entry(),
replacing the use of p2m->domain with the domain pointer passed into
the function.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: George Dunlap <George.Dunlap@eu.citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
v2:
- New in v2.
---
xen/arch/arm/p2m.c | 4 ++++
xen/arch/x86/mm.c | 3 +++
xen/arch/x86/mm/p2m-ept.c | 3 +++
xen/arch/x86/mm/p2m-pt.c | 3 +++
xen/arch/x86/mm/p2m.c | 24 ++++++++++++++++++++----
xen/common/grant_table.c | 8 ++++++++
xen/common/memory.c | 3 +++
xen/drivers/passthrough/iommu.c | 12 ------------
xen/drivers/passthrough/x86/iommu.c | 4 ++++
9 files changed, 48 insertions(+), 16 deletions(-)
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d719db1e30..eb39861b73 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -953,8 +953,12 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
if ( need_iommu(p2m->domain) &&
(lpae_valid(orig_pte) || lpae_valid(*entry)) )
+ {
rc = iommu_iotlb_flush(p2m->domain, _bfn(gfn_x(sgfn)),
1UL << page_order);
+ if ( unlikely(rc) && !is_hardware_domain(p2m->domain) )
+ domain_crash(p2m->domain);
+ }
else
rc = 0;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 99070f916d..08878574f3 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2737,6 +2737,9 @@ static int _get_page_type(struct page_info *page, unsigned long type,
iommu_ret = iommu_map_page(d, _bfn(mfn_x(mfn)), mfn,
IOMMUF_readable |
IOMMUF_writable);
+
+ if ( unlikely(iommu_ret) && !is_hardware_domain(d) )
+ domain_crash(d);
}
}
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 2089b5232d..33e77903d6 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -895,6 +895,9 @@ out:
if ( !rc )
rc = ret;
}
+
+ if ( unlikely(rc) && !is_hardware_domain(d) )
+ domain_crash(d);
}
}
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index a441af388a..9ff0b3fe7a 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -717,6 +717,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
rc = ret;
}
}
+
+ if ( unlikely(rc) && !is_hardware_domain(p2m->domain) )
+ domain_crash(p2m->domain);
}
/*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fbf67def50..036f52f4f1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -724,6 +724,9 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn_l, unsigned long mfn,
if ( !rc )
rc = ret;
}
+
+ if ( unlikely(rc) && !is_hardware_domain(p2m->domain) )
+ domain_crash(p2m->domain);
}
return rc;
@@ -789,6 +792,9 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
if ( iommu_unmap_page(d, bfn_add(bfn, i)) )
continue;
+ if ( !is_hardware_domain(d) )
+ domain_crash(d);
+
return rc;
}
}
@@ -1157,12 +1163,17 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
struct p2m_domain *p2m = p2m_get_hostp2m(d);
int ret;
- if ( !paging_mode_translate(p2m->domain) )
+ if ( !paging_mode_translate(d) )
{
if ( !need_iommu(d) )
return 0;
- return iommu_map_page(d, _bfn(gfn_l), _mfn(gfn_l),
- IOMMUF_readable | IOMMUF_writable);
+
+ ret = iommu_map_page(d, _bfn(gfn_l), _mfn(gfn_l),
+ IOMMUF_readable | IOMMUF_writable);
+ if ( unlikely(ret) && !is_hardware_domain(d) )
+ domain_crash(d);
+
+ return ret;
}
gfn_lock(p2m, gfn, 0);
@@ -1252,7 +1263,12 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
{
if ( !need_iommu(d) )
return 0;
- return iommu_unmap_page(d, _bfn(gfn_l));
+
+ ret = iommu_unmap_page(d, _bfn(gfn_l));
+ if ( unlikely(ret) && !is_hardware_domain(d) )
+ domain_crash(d);
+
+ return ret;
}
gfn_lock(p2m, gfn, 0);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index a83c8832af..1840b656c9 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1146,6 +1146,9 @@ map_grant_ref(
}
if ( err )
{
+ if ( !is_hardware_domain(ld) )
+ domain_crash(ld);
+
double_gt_unlock(lgt, rgt);
rc = GNTST_general_error;
goto undo_out;
@@ -1412,7 +1415,12 @@ unmap_common(
double_gt_unlock(lgt, rgt);
if ( err )
+ {
+ if ( !is_hardware_domain(ld) )
+ domain_crash(ld);
+
rc = GNTST_general_error;
+ }
}
/* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a5656743c2..0a34677cc3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -840,6 +840,9 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
ret = iommu_iotlb_flush(d, _bfn(xatp->gpfn - done), done);
if ( unlikely(ret) && rc >= 0 )
rc = ret;
+
+ if ( unlikely(rc < 0) && !is_hardware_domain(d) )
+ domain_crash(d);
}
#endif
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1f32958816..21e6886a3f 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -272,9 +272,6 @@ int iommu_map_page(struct domain *d, bfn_t bfn, mfn_t mfn,
printk(XENLOG_ERR
"d%d: IOMMU mapping bfn %"PRI_bfn" to mfn %"PRI_mfn" failed: %d\n",
d->domain_id, bfn_x(bfn), mfn_x(mfn), rc);
-
- if ( !is_hardware_domain(d) )
- domain_crash(d);
}
return rc;
@@ -295,9 +292,6 @@ int iommu_unmap_page(struct domain *d, bfn_t bfn)
printk(XENLOG_ERR
"d%d: IOMMU unmapping bfn %"PRI_bfn" failed: %d\n",
d->domain_id, bfn_x(bfn), rc);
-
- if ( !is_hardware_domain(d) )
- domain_crash(d);
}
return rc;
@@ -335,9 +329,6 @@ int iommu_iotlb_flush(struct domain *d, bfn_t bfn, unsigned int page_count)
printk(XENLOG_ERR
"d%d: IOMMU IOTLB flush failed: %d, bfn %"PRI_bfn", page count %u\n",
d->domain_id, rc, bfn_x(bfn), page_count);
-
- if ( !is_hardware_domain(d) )
- domain_crash(d);
}
return rc;
@@ -358,9 +349,6 @@ int iommu_iotlb_flush_all(struct domain *d)
printk(XENLOG_ERR
"d%d: IOMMU IOTLB flush all failed: %d\n",
d->domain_id, rc);
-
- if ( !is_hardware_domain(d) )
- domain_crash(d);
}
return rc;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 9f2ad15ba0..1b3d2a2c8f 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -102,7 +102,11 @@ int arch_iommu_populate_page_table(struct domain *d)
this_cpu(iommu_dont_flush_iotlb) = 0;
if ( !rc )
+ {
rc = iommu_iotlb_flush_all(d);
+ if ( unlikely(rc) )
+ domain_crash(d);
+ }
if ( rc && rc != -ERESTART )
iommu_teardown(d);
--
2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel