From: Paul Durrant <Paul.Durrant@citrix.com>
To: Kevin Tian <kevin.tian@intel.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Liu <wei.liu2@citrix.com>,
George Dunlap <George.Dunlap@citrix.com>,
Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
Date: Thu, 13 Sep 2018 08:30:40 +0000 [thread overview]
Message-ID: <246918f763064c0baa38d99036766caf@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D191301BC5@SHSMSX101.ccr.corp.intel.com>
> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: 13 September 2018 07:53
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: Wei Liu <wei.liu2@citrix.com>; Jan Beulich <jbeulich@suse.com>; George
> Dunlap <George.Dunlap@citrix.com>
> Subject: RE: [PATCH v7 6/6] vtd: add lookup_page method to iommu_ops
>
> > From: Paul Durrant [mailto:paul.durrant@citrix.com]
> > Sent: Wednesday, September 12, 2018 7:30 PM
> >
> > This patch adds a new method to the VT-d IOMMU implementation to find
> > the MFN currently mapped by the specified DFN along with a wrapper
> > function in generic IOMMU code to call the implementation if it exists.
> >
> > This patch also cleans up the initializers in intel_iommu_map_page() and
> > uses array-style dereference there, for consistency. A missing check for
> > shared EPT is also added to intel_iommu_unmap_page().
>
> then please split into two patches.
>
Ok.
> >
> > NOTE: This patch only adds a Xen-internal interface. This will be used
> > by a subsequent patch. Another subsequent patch will add similar
> > functionality for AMD IOMMUs.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > ---
> > Cc: Wei Liu <wei.liu2@citrix.com>
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> >
> > v7:
> > - Re-base and re-name BFN -> DFN.
> > - Add missing checks for shared EPT and iommu_passthrough.
> > - Remove unnecessary initializers and use array-style dereference.
> > - Drop Wei's R-b because of code churn.
> >
> > v3:
> > - Addressed comments from George.
> >
> > v2:
> > - Addressed some comments from Jan.
> > ---
> > xen/drivers/passthrough/iommu.c | 11 ++++++++
> > xen/drivers/passthrough/vtd/iommu.c | 52 +++++++++++++++++++++++++++++++++++--
> > xen/drivers/passthrough/vtd/iommu.h | 3 +++
> > xen/include/xen/iommu.h | 4 +++
> > 4 files changed, 68 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > index a16f1a0c66..52e3f500c7 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -296,6 +296,17 @@ int iommu_unmap_page(struct domain *d, dfn_t dfn)
> > return rc;
> > }
> >
> > +int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > + unsigned int *flags)
> > +{
> > + const struct domain_iommu *hd = dom_iommu(d);
> > +
> > + if ( !iommu_enabled || !hd->platform_ops )
> > + return -EOPNOTSUPP;
> > +
> > + return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
> > +}
> > +
> > static void iommu_free_pagetables(unsigned long unused)
> > {
> > do {
> > diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> > index 0163bb949b..6622c2dd4c 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1770,7 +1770,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> >                                              unsigned int flags)
> > {
> > struct domain_iommu *hd = dom_iommu(d);
> > - struct dma_pte *page = NULL, *pte = NULL, old, new = { 0 };
> > + struct dma_pte *page, *pte, old, new = {};
> > u64 pg_maddr;
> > int rc = 0;
> >
> > @@ -1790,9 +1790,11 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > spin_unlock(&hd->arch.mapping_lock);
> > return -ENOMEM;
> > }
> > +
> > page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
> > - pte = page + (dfn_x(dfn) & LEVEL_MASK);
> > + pte = &page[dfn_x(dfn) & LEVEL_MASK];
> > old = *pte;
> > +
> > dma_set_pte_addr(new, mfn_to_maddr(mfn));
> > dma_set_pte_prot(new,
> > ((flags & IOMMUF_readable) ? DMA_PTE_READ : 0) |
> > @@ -1808,6 +1810,7 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > unmap_vtd_domain_page(page);
> > return 0;
> > }
> > +
> > *pte = new;
> >
> > iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> > @@ -1823,6 +1826,10 @@ static int __must_check intel_iommu_map_page(struct domain *d,
> > static int __must_check intel_iommu_unmap_page(struct domain *d,
> >                                                dfn_t dfn)
> > {
> > + /* Do nothing if VT-d shares EPT page table */
> > + if ( iommu_use_hap_pt(d) )
> > + return 0;
> > +
> > /* Do nothing if hardware domain and iommu supports pass thru. */
> > if ( iommu_passthrough && is_hardware_domain(d) )
> > return 0;
> > @@ -1830,6 +1837,46 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
> > return dma_pte_clear_one(d, dfn_to_daddr(dfn));
> > }
> >
> > +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                                   unsigned int *flags)
> > +{
> > + struct domain_iommu *hd = dom_iommu(d);
> > + struct dma_pte *page, val;
> > + u64 pg_maddr;
> > +
> > + /* Fail if VT-d shares EPT page table */
> > + if ( iommu_use_hap_pt(d) )
> > + return -ENOENT;
> > +
> > + /* Fail if hardware domain and iommu supports pass thru. */
> > + if ( iommu_passthrough && is_hardware_domain(d) )
> > + return -ENOENT;
>
> why fail instead of returning the dfn as the mfn? Passthrough is just one
> special translation mode in the IOMMU; it doesn't mean lookup is not
> possible.
>
Hmm. Given that map and unmap don't return errors in that case, maybe that is best. Will do.
Paul
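[Editorial note: Kevin's point, that passthrough is simply an identity translation and so a lookup can still succeed, can be sketched with a standalone toy model. This is not Xen code; the table, names, and error value below are invented for illustration only.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the lookup semantics under discussion: in passthrough
 * mode the IOMMU applies an identity mapping, so a lookup can return
 * the DFN as the MFN instead of failing with -ENOENT. */

#define TOY_ENOENT 2

struct toy_entry {
    uint64_t mfn;
    bool present;
};

/* Stand-in for the per-domain IOMMU page table. */
static struct toy_entry toy_table[16];

static int toy_lookup_page(bool passthrough, uint64_t dfn, uint64_t *mfn)
{
    if ( passthrough )
    {
        *mfn = dfn;          /* identity translation, per Kevin's point */
        return 0;
    }

    if ( dfn >= 16 || !toy_table[dfn].present )
        return -TOY_ENOENT;  /* no mapping at this DFN */

    *mfn = toy_table[dfn].mfn;
    return 0;
}
```

Under this model the passthrough branch mirrors what map/unmap already do (treat the mode as handled rather than as an error), which is the behaviour Paul agrees to adopt above.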
> > +
> > + spin_lock(&hd->arch.mapping_lock);
> > +
> > + pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> > + if ( pg_maddr == 0 )
> > + {
> > + spin_unlock(&hd->arch.mapping_lock);
> > + return -ENOMEM;
> > + }
> > +
> > + page = map_vtd_domain_page(pg_maddr);
> > + val = page[dfn_x(dfn) & LEVEL_MASK];
> > +
> > + unmap_vtd_domain_page(page);
> > + spin_unlock(&hd->arch.mapping_lock);
> > +
> > + if ( !dma_pte_present(val) )
> > + return -ENOENT;
> > +
> > + *mfn = maddr_to_mfn(dma_pte_addr(val));
> > + *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
> > + *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
> > +
> > + return 0;
> > +}
> > +
> > int iommu_pte_flush(struct domain *d, uint64_t dfn, uint64_t *pte,
> > int order, int present)
> > {
> > @@ -2655,6 +2702,7 @@ const struct iommu_ops intel_iommu_ops = {
> > .teardown = iommu_domain_teardown,
> > .map_page = intel_iommu_map_page,
> > .unmap_page = intel_iommu_unmap_page,
> > + .lookup_page = intel_iommu_lookup_page,
> > .free_page_table = iommu_free_page_table,
> > .reassign_device = reassign_device_ownership,
> > .get_device_group_id = intel_iommu_group_id,
> > diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
> > index 72c1a2e3cd..47bdfcb5ea 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.h
> > +++ b/xen/drivers/passthrough/vtd/iommu.h
> > @@ -272,6 +272,9 @@ struct dma_pte {
> > #define dma_set_pte_prot(p, prot) do { \
> > (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
> > } while (0)
> > +#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
> > +#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
> > +#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
> > #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
> > #define dma_set_pte_addr(p, addr) do {\
> > (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
> > diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> > index 9e0b4e8638..bebddc2db4 100644
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -100,6 +100,8 @@ void iommu_teardown(struct domain *d);
> > int __must_check iommu_map_page(struct domain *d, dfn_t dfn,
> > mfn_t mfn, unsigned int flags);
> > int __must_check iommu_unmap_page(struct domain *d, dfn_t dfn);
> > +int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                                   unsigned int *flags);
> >
> > enum iommu_feature
> > {
> > @@ -190,6 +192,8 @@ struct iommu_ops {
> >     int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
> >                                  unsigned int flags);
> > int __must_check (*unmap_page)(struct domain *d, dfn_t dfn);
> > +    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
> > +                                    unsigned int *flags);
> > void (*free_page_table)(struct page_info *);
> > #ifdef CONFIG_X86
> >     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg,
> >                                  unsigned int value);
> > --
> > 2.11.0
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel