From: David Gibson <david@gibson.dropbear.id.au>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: linuxppc-dev@lists.ozlabs.org, Paul Mackerras <paulus@samba.org>
Subject: Re: [PATCH guest kernel] powerpc/pseries: Enable VFIO
Date: Mon, 27 Mar 2017 09:41:21 +1100	[thread overview]
Message-ID: <20170326224121.GP19078@umbus.fritz.box> (raw)
In-Reply-To: <20170324063721.16887-1-aik@ozlabs.ru>

On Fri, Mar 24, 2017 at 05:37:21PM +1100, Alexey Kardashevskiy wrote:
> This enables VFIO on a pseries host in order to allow VFIO in a nested
> guest under PR KVM, or DPDK in an HV guest. This adds support for the
> VFIO_SPAPR_TCE_IOMMU type.
> 
> This adds an exchange() callback to allow TCE updates by the SPAPR TCE
> IOMMU driver in VFIO.
> 
> This initializes the DMA32 window parameters in iommu_table_group,
> as this does not implement VFIO_SPAPR_TCE_v2_IOMMU and
> VFIO_SPAPR_TCE_IOMMU just reuses the existing DMA32 window.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  arch/powerpc/platforms/pseries/iommu.c | 40 ++++++++++++++++++++++++++++++++--
>  1 file changed, 38 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
> index 4d757eaa46bf..7c8ed68d727e 100644
> --- a/arch/powerpc/platforms/pseries/iommu.c
> +++ b/arch/powerpc/platforms/pseries/iommu.c
> @@ -550,6 +550,7 @@ static void iommu_table_setparms(struct pci_controller *phb,
>  static void iommu_table_setparms_lpar(struct pci_controller *phb,
>  				      struct device_node *dn,
>  				      struct iommu_table *tbl,
> +				      struct iommu_table_group *table_group,
>  				      const __be32 *dma_window)
>  {
>  	unsigned long offset, size;
> @@ -563,6 +564,9 @@ static void iommu_table_setparms_lpar(struct pci_controller *phb,
>  	tbl->it_type = TCE_PCI;
>  	tbl->it_offset = offset >> tbl->it_page_shift;
>  	tbl->it_size = size >> tbl->it_page_shift;
> +
> +	table_group->tce32_start = offset;
> +	table_group->tce32_size = size;
>  }
>  
>  struct iommu_table_ops iommu_table_pseries_ops = {
> @@ -651,8 +655,38 @@ static void pci_dma_bus_setup_pSeries(struct pci_bus *bus)
>  	pr_debug("ISA/IDE, window size is 0x%llx\n", pci->phb->dma_window_size);
>  }
>  
> +#ifdef CONFIG_IOMMU_API
> +static int tce_exchange_pSeries(struct iommu_table *tbl, long index,
> +		unsigned long *tce, enum dma_data_direction *direction)
> +{
> +	long rc;
> +	unsigned long ioba = (unsigned long) index << tbl->it_page_shift;
> +	unsigned long flags, oldtce = 0;
> +	u64 proto_tce = iommu_direction_to_tce_perm(*direction);
> +	unsigned long newtce = *tce | proto_tce;
> +
> +	spin_lock_irqsave(&tbl->large_pool.lock, flags);
> +
> +	rc = plpar_tce_get((u64)tbl->it_index, ioba, &oldtce);
> +	if (!rc)
> +		rc = plpar_tce_put((u64)tbl->it_index, ioba, newtce);
> +
> +	if (!rc) {
> +		*direction = iommu_tce_direction(oldtce);
> +		*tce = oldtce & ~(TCE_PCI_READ | TCE_PCI_WRITE);
> +	}
> +
> +	spin_unlock_irqrestore(&tbl->large_pool.lock, flags);
> +
> +	return rc;
> +}
> +#endif
> +
>  struct iommu_table_ops iommu_table_lpar_multi_ops = {
>  	.set = tce_buildmulti_pSeriesLP,
> +#ifdef CONFIG_IOMMU_API
> +	.exchange = tce_exchange_pSeries,
> +#endif
>  	.clear = tce_freemulti_pSeriesLP,
>  	.get = tce_get_pSeriesLP
>  };
> @@ -689,7 +723,8 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
>  	if (!ppci->table_group) {
>  		ppci->table_group = iommu_pseries_alloc_group(ppci->phb->node);
>  		tbl = ppci->table_group->tables[0];
> -		iommu_table_setparms_lpar(ppci->phb, pdn, tbl, dma_window);
> +		iommu_table_setparms_lpar(ppci->phb, pdn, tbl,
> +				ppci->table_group, dma_window);
>  		tbl->it_ops = &iommu_table_lpar_multi_ops;
>  		iommu_init_table(tbl, ppci->phb->node);
>  		iommu_register_group(ppci->table_group,
> @@ -1143,7 +1178,8 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
>  	if (!pci->table_group) {
>  		pci->table_group = iommu_pseries_alloc_group(pci->phb->node);
>  		tbl = pci->table_group->tables[0];
> -		iommu_table_setparms_lpar(pci->phb, pdn, tbl, dma_window);
> +		iommu_table_setparms_lpar(pci->phb, pdn, tbl,
> +				pci->table_group, dma_window);
>  		tbl->it_ops = &iommu_table_lpar_multi_ops;
>  		iommu_init_table(tbl, pci->phb->node);
>  		iommu_register_group(pci->table_group,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


Thread overview: 3+ messages
2017-03-24  6:37 [PATCH guest kernel] powerpc/pseries: Enable VFIO Alexey Kardashevskiy
2017-03-26 22:41 ` David Gibson [this message]
2017-05-01  2:58 ` [guest,kernel] " Michael Ellerman
