From: Paul Durrant <Paul.Durrant@citrix.com>
To: Kevin Tian <kevin.tian@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v5 09/15] vtd: add lookup_page method to iommu_ops
Date: Tue, 7 Aug 2018 08:21:46 +0000	[thread overview]
Message-ID: <0a4bee6dd58242f39e71c8e9bf2f89e0@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D1912AC1B1@SHSMSX101.ccr.corp.intel.com>

> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: 07 August 2018 04:25
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xenproject.org
> Cc: Jan Beulich <jbeulich@suse.com>; George Dunlap
> <George.Dunlap@citrix.com>
> Subject: RE: [PATCH v5 09/15] vtd: add lookup_page method to iommu_ops
> 
> > From: Paul Durrant [mailto:paul.durrant@citrix.com]
> > Sent: Saturday, August 4, 2018 1:22 AM
> >
> > This patch adds a new method to the VT-d IOMMU implementation to find the
> > MFN currently mapped by the specified BFN, along with a wrapper function in
> > generic IOMMU code to call the implementation if it exists.
> >
> > This functionality will be used by a subsequent patch.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Reviewed-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> >
> > v3:
> >  - Addressed comments from George.
> >
> > v2:
> >  - Addressed some comments from Jan.
> > ---
> >  xen/drivers/passthrough/iommu.c     | 11 +++++++++++
> >  xen/drivers/passthrough/vtd/iommu.c | 34 ++++++++++++++++++++++++++++++++++
> >  xen/drivers/passthrough/vtd/iommu.h |  3 +++
> >  xen/include/xen/iommu.h             |  4 ++++
> >  4 files changed, 52 insertions(+)
> >
> > diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> > index b10a37e5d7..9b7baca93f 100644
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -305,6 +305,17 @@ int iommu_unmap_page(struct domain *d, bfn_t bfn)
> >      return rc;
> >  }
> >
> > +int iommu_lookup_page(struct domain *d, bfn_t bfn, mfn_t *mfn,
> > +                      unsigned int *flags)
> > +{
> > +    const struct domain_iommu *hd = dom_iommu(d);
> > +
> > +    if ( !iommu_enabled || !hd->platform_ops )
> > +        return -EOPNOTSUPP;
> > +
> > +    return hd->platform_ops->lookup_page(d, bfn, mfn, flags);
> > +}
> > +
> >  static void iommu_free_pagetables(unsigned long unused)
> >  {
> >      do {
> > diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> > index 282e227414..8cd3b59aa0 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1830,6 +1830,39 @@ static int __must_check intel_iommu_unmap_page(struct domain *d,
> >      return dma_pte_clear_one(d, bfn_to_baddr(bfn));
> >  }
> >
> > +static int intel_iommu_lookup_page(struct domain *d, bfn_t bfn, mfn_t *mfn,
> > +                                   unsigned int *flags)
> 
> Not looking at later patches yet... but conceptually the bfn address
> space is per device rather than per domain.

Not in this case. Xen has always maintained a single IOMMU address space per virtual machine, and that is the address space a BFN refers to.
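
To make that concrete, here is an illustrative sketch (not part of the series) of how a caller is expected to use the new wrapper against that per-domain address space:

    mfn_t mfn;
    unsigned int flags;
    int rc;

    /* Ask the IOMMU code which MFN, if any, is mapped at 'bfn' for domain d. */
    rc = iommu_lookup_page(d, bfn, &mfn, &flags);
    if ( rc )
        return rc; /* e.g. -EOPNOTSUPP (no IOMMU) or -ENOENT (BFN not mapped) */

    /* 'mfn' now holds the mapped frame; 'flags' has IOMMUF_readable /
     * IOMMUF_writable set according to the PTE. */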

  Paul

> In the default situation
> (without pvIOMMU exposed), all devices assigned to dom0 share
> the same address space (bfn == pfn), which is currently linked
> from the domain structure. Then, with pvIOMMU exposed, dom0
> starts to manage an individual address space (called an IOVA
> address space within dom0) per assigned device. In that case
> the lookup should accept a BDF and then find the right
> page table...
> 

No. That would be over-complicating things and, AFAICT, would probably involve rewriting most of the IOMMU code in Xen.
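
For illustration only (the second prototype is purely hypothetical, sketching what you describe rather than anything in this series), the difference in plumbing would be roughly:

    /* What this patch adds: lookups against the single per-domain IOMMU
     * address space. */
    int __must_check iommu_lookup_page(struct domain *d, bfn_t bfn, mfn_t *mfn,
                                       unsigned int *flags);

    /* Hypothetical per-device variant: the device (and hence its own page
     * table) would have to be threaded through from every caller. */
    int __must_check iommu_lookup_page(const struct pci_dev *pdev, bfn_t bfn,
                                       mfn_t *mfn, unsigned int *flags);

That extra device parameter is what would have to propagate through all the existing map/unmap/lookup paths, hence my reluctance.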

> > +{
> > +    struct domain_iommu *hd = dom_iommu(d);
> > +    struct dma_pte *page = NULL, *pte = NULL, val;
> > +    u64 pg_maddr;
> > +
> > +    spin_lock(&hd->arch.mapping_lock);
> > +
> > +    pg_maddr = addr_to_dma_page_maddr(d, bfn_to_baddr(bfn), 0);
> > +    if ( pg_maddr == 0 )
> > +    {
> > +        spin_unlock(&hd->arch.mapping_lock);
> > +        return -ENOMEM;
> > +    }
> > +
> > +    page = map_vtd_domain_page(pg_maddr);
> > +    pte = page + (bfn_x(bfn) & LEVEL_MASK);
> > +    val = *pte;
> > +
> > +    unmap_vtd_domain_page(page);
> > +    spin_unlock(&hd->arch.mapping_lock);
> > +
> > +    if ( !dma_pte_present(val) )
> > +        return -ENOENT;
> > +
> > +    *mfn = maddr_to_mfn(dma_pte_addr(val));
> > +    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
> > +    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
> > +
> > +    return 0;
> > +}
> > +
> >  int iommu_pte_flush(struct domain *d, uint64_t bfn, uint64_t *pte,
> >                      int order, int present)
> >  {
> > @@ -2661,6 +2694,7 @@ const struct iommu_ops intel_iommu_ops = {
> >      .teardown = iommu_domain_teardown,
> >      .map_page = intel_iommu_map_page,
> >      .unmap_page = intel_iommu_unmap_page,
> > +    .lookup_page = intel_iommu_lookup_page,
> >      .free_page_table = iommu_free_page_table,
> >      .reassign_device = reassign_device_ownership,
> >      .get_device_group_id = intel_iommu_group_id,
> > diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
> > index 72c1a2e3cd..47bdfcb5ea 100644
> > --- a/xen/drivers/passthrough/vtd/iommu.h
> > +++ b/xen/drivers/passthrough/vtd/iommu.h
> > @@ -272,6 +272,9 @@ struct dma_pte {
> >  #define dma_set_pte_prot(p, prot) do { \
> >          (p).val = ((p).val & ~DMA_PTE_PROT) | ((prot) & DMA_PTE_PROT); \
> >      } while (0)
> > +#define dma_pte_prot(p) ((p).val & DMA_PTE_PROT)
> > +#define dma_pte_read(p) (dma_pte_prot(p) & DMA_PTE_READ)
> > +#define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
> >  #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
> >  #define dma_set_pte_addr(p, addr) do {\
> >              (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
> > diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> > index cc0be81b4e..7c5d46df81 100644
> > --- a/xen/include/xen/iommu.h
> > +++ b/xen/include/xen/iommu.h
> > @@ -100,6 +100,8 @@ void iommu_teardown(struct domain *d);
> >  int __must_check iommu_map_page(struct domain *d, bfn_t bfn,
> >                                  mfn_t mfn, unsigned int flags);
> >  int __must_check iommu_unmap_page(struct domain *d, bfn_t bfn);
> > +int __must_check iommu_lookup_page(struct domain *d, bfn_t bfn, mfn_t *mfn,
> > +                                   unsigned int *flags);
> >
> >  enum iommu_feature
> >  {
> > @@ -198,6 +200,8 @@ struct iommu_ops {
> >      int __must_check (*map_page)(struct domain *d, bfn_t bfn, mfn_t mfn,
> >                                   unsigned int flags);
> >      int __must_check (*unmap_page)(struct domain *d, bfn_t bfn);
> > +    int __must_check (*lookup_page)(struct domain *d, bfn_t bfn, mfn_t *mfn,
> > +                                    unsigned int *flags);
> >      void (*free_page_table)(struct page_info *);
> >  #ifdef CONFIG_X86
> >      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
> > --
> > 2.11.0


Thread overview: 57+ messages
2018-08-03 17:22 [PATCH v5 00/15] paravirtual IOMMU interface Paul Durrant
2018-08-03 17:22 ` [PATCH v5 01/15] iommu: turn need_iommu back into a boolean Paul Durrant
2018-08-08 13:39   ` Jan Beulich
2018-08-08 13:56     ` Paul Durrant
2018-08-03 17:22 ` [PATCH v5 02/15] iommu: introduce the concept of BFN Paul Durrant
2018-08-07  2:38   ` Tian, Kevin
2018-08-07  7:59     ` Paul Durrant
2018-08-07  8:26       ` Tian, Kevin
2018-08-03 17:22 ` [PATCH v5 03/15] iommu: make use of type-safe BFN and MFN in exported functions Paul Durrant
2018-08-07  2:45   ` Tian, Kevin
2018-08-03 17:22 ` [PATCH v5 04/15] iommu: push use of type-safe BFN and MFN into iommu_ops Paul Durrant
2018-08-07  2:49   ` Tian, Kevin
2018-08-03 17:22 ` [PATCH v5 05/15] iommu: don't domain_crash() inside iommu_map/unmap_page() Paul Durrant
2018-08-07  2:55   ` Tian, Kevin
2018-08-07  8:05     ` Paul Durrant
2018-08-07  8:23       ` Jan Beulich
2018-08-03 17:22 ` [PATCH v5 06/15] public / x86: introduce __HYPERCALL_iommu_op Paul Durrant
2018-08-07  3:00   ` Tian, Kevin
2018-08-07  8:10     ` Paul Durrant
2018-08-07  8:25       ` Jan Beulich
2018-08-17 21:10   ` Daniel De Graaf
2018-08-03 17:22 ` [PATCH v5 07/15] iommu: track reserved ranges using a rangeset Paul Durrant
2018-08-07  3:04   ` Tian, Kevin
2018-08-07  8:16     ` Paul Durrant
2018-08-07  8:23       ` Tian, Kevin
2018-08-03 17:22 ` [PATCH v5 08/15] x86: add iommu_op to query reserved ranges Paul Durrant
2018-08-03 17:22 ` [PATCH v5 09/15] vtd: add lookup_page method to iommu_ops Paul Durrant
2018-08-07  3:25   ` Tian, Kevin
2018-08-07  8:21     ` Paul Durrant [this message]
2018-08-07  8:29       ` Jan Beulich
2018-08-07  8:32         ` Tian, Kevin
2018-08-07  8:37           ` Paul Durrant
2018-08-07  8:48             ` Tian, Kevin
2018-08-07  8:56               ` Paul Durrant
2018-08-07  9:03                 ` Tian, Kevin
2018-08-07  9:07                   ` Paul Durrant
2018-08-07  8:31       ` Tian, Kevin
2018-08-07  8:35         ` Paul Durrant
2018-08-07  8:47           ` Tian, Kevin
2018-08-03 17:22 ` [PATCH v5 10/15] mm / iommu: include need_iommu() test in iommu_use_hap_pt() Paul Durrant
2018-08-07  3:32   ` Tian, Kevin
2018-08-03 17:22 ` [PATCH v5 11/15] mm / iommu: split need_iommu() into has_iommu_pt() and sync_iommu_pt() Paul Durrant
2018-08-03 18:18   ` Razvan Cojocaru
2018-08-07  3:41   ` Tian, Kevin
2018-08-07  8:24     ` Paul Durrant
2018-08-03 17:22 ` [PATCH v5 12/15] x86: add iommu_op to enable modification of IOMMU mappings Paul Durrant
2018-08-07  4:08   ` Tian, Kevin
2018-08-07  8:32     ` Paul Durrant
2018-08-07  8:37       ` Tian, Kevin
2018-08-07  8:44         ` Paul Durrant
2018-08-07  9:01           ` Tian, Kevin
2018-08-07  9:12             ` Paul Durrant
2018-08-07  9:19               ` Tian, Kevin
2018-08-07  9:22                 ` Paul Durrant
2018-08-03 17:22 ` [PATCH v5 13/15] memory: add get_paged_gfn() as a wrapper Paul Durrant
2018-08-03 17:22 ` [PATCH v5 14/15] x86: add iommu_ops to modify and flush IOMMU mappings Paul Durrant
2018-08-03 17:22 ` [PATCH v5 15/15] x86: extend the map and unmap iommu_ops to support grant references Paul Durrant
