From: Mukesh Rathor
Subject: [V1 PATCH 03/11] PVH dom0: iommu related changes
Date: Fri, 8 Nov 2013 17:23:28 -0800
Message-ID: <1383960215-22444-4-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1383960215-22444-1-git-send-email-mukesh.rathor@oracle.com>
References: <1383960215-22444-1-git-send-email-mukesh.rathor@oracle.com>
To: Xen-devel@lists.xensource.com
Cc: keir.xen@gmail.com, tim@xen.org, JBeulich@suse.com

- For now, iommu is required for PVH dom0. Check for that.

- For PVH, we need to do mfn_to_gfn before calling the mapping functions
  intel_iommu_map_page/amd_iommu_map_page, which expect a gfn.

Signed-off-by: Mukesh Rathor
---
 xen/drivers/passthrough/iommu.c |   17 +++++++++++++++--
 1 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 93ad122..c3d31a5 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -125,15 +125,27 @@ int iommu_domain_init(struct domain *d)
     return hd->platform_ops->init(d);
 }
 
+static inline void check_dom0_pvh_reqs(struct domain *d)
+{
+    if ( !iommu_enabled )
+        panic("Presently, iommu must be enabled for pvh dom0\n");
+
+    if ( iommu_passthrough )
+        panic("For pvh dom0, dom0-passthrough must not be enabled\n");
+}
+
 void __init iommu_dom0_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
+    if ( is_pvh_domain(d) )
+        check_dom0_pvh_reqs(d);
+
     if ( !iommu_enabled )
         return;
 
     register_keyhandler('o', &iommu_p2m_table);
-    d->need_iommu = !!iommu_dom0_strict;
+    d->need_iommu = is_pvh_domain(d) || !!iommu_dom0_strict;
     if ( need_iommu(d) )
     {
         struct page_info *page;
@@ -141,12 +153,13 @@ void __init iommu_dom0_init(struct domain *d)
         page_list_for_each ( page, &d->page_list )
         {
             unsigned long mfn = page_to_mfn(page);
+            unsigned long gfn = mfn_to_gfn(d, _mfn(mfn));
             unsigned int mapping = IOMMUF_readable;
             if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
                  ((page->u.inuse.type_info & PGT_type_mask)
                   == PGT_writable_page) )
                 mapping |= IOMMUF_writable;
-            hd->platform_ops->map_page(d, mfn, mfn, mapping);
+            hd->platform_ops->map_page(d, gfn, mfn, mapping);
             if ( !(i++ & 0xfffff) )
                 process_pending_softirqs();
         }
-- 
1.7.2.3
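
P.S. (not part of the patch): below is a minimal standalone sketch of the gfn
vs. mfn distinction the second bullet in the commit message relies on. The
tiny p2m tables and page numbers are made up purely for illustration; in Xen
the translation is done by mfn_to_gfn() against dom0's p2m and the IOMMU is
programmed through hd->platform_ops->map_page(), not by the toy functions
named here.

/* Toy illustration, not Xen code: for a classic PV dom0 the p2m is an
 * identity mapping, so keying the IOMMU by mfn happens to work.  For a
 * PVH dom0 the p2m is a real translation, so the IOMMU must be keyed by
 * the gfn obtained from the mfn. */
#include <stdio.h>

#define NR_PAGES 4

/* Hypothetical p2m tables: p2m[gfn] == mfn. */
static const unsigned long pv_p2m[NR_PAGES]  = { 0, 1, 2, 3 };    /* 1:1 */
static const unsigned long pvh_p2m[NR_PAGES] = { 7, 5, 6, 4 };    /* translated */

/* Reverse lookup: find the gfn whose p2m entry holds this mfn. */
static unsigned long toy_mfn_to_gfn(const unsigned long *p2m, unsigned long mfn)
{
    for ( unsigned long gfn = 0; gfn < NR_PAGES; gfn++ )
        if ( p2m[gfn] == mfn )
            return gfn;
    return ~0UL;
}

/* Stand-in for the IOMMU mapping call, which is keyed by gfn. */
static void toy_iommu_map_page(unsigned long gfn, unsigned long mfn)
{
    printf("iommu map: gfn %lu -> mfn %lu\n", gfn, mfn);
}

int main(void)
{
    /* PV dom0: passing the mfn in the gfn slot works only because
     * the mapping is 1:1. */
    for ( unsigned long mfn = 0; mfn < NR_PAGES; mfn++ )
        toy_iommu_map_page(mfn, mfn);

    /* PVH dom0: the mfn must be translated to its gfn first. */
    for ( unsigned long mfn = 4; mfn < 4 + NR_PAGES; mfn++ )
        toy_iommu_map_page(toy_mfn_to_gfn(pvh_p2m, mfn), mfn);

    return 0;
}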