linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink
@ 2016-04-12  8:37 Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 1/9] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
                   ` (8 more replies)
  0 siblings, 9 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

IBM POWER8 NVLink systems contain the usual Tesla K40-ish GPUs but also
a couple of really fast links between the GPU and the CPU. These links
are exposed to userspace by the OPAL firmware as bridges.
In order to make these links work when the GPU is passed through to
a guest, these bridges need to be passed through as well; otherwise
performance will degrade. More details are in 9/9.

This reworks the existing NPU support in the powernv platform and adds
VFIO support on top of that.

There was no v2 of the whole patchset, only of its last patch, which is
why this is v3; most of the changes went into "powerpc/powernv/npu:
Enable NVLink pass through".

This was tested on the POWER8NVL platform (PVR 0x004c0100).

Please comment. Thanks.


Alexey Kardashevskiy (9):
  vfio/spapr: Relax the IOMMU compatibility check
  powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire
  powerpc/powernv: Define TCE Kill flags
  powerpc/powernv/npu: TCE Kill helpers cleanup
  powerpc/powernv/npu: Use the correct IOMMU page size
  powerpc/powernv/npu: Simplify DMA setup
  powerpc/powernv/npu: Rework TCE Kill handling
  powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  powerpc/powernv/npu: Enable NVLink pass through

 arch/powerpc/platforms/powernv/npu-dma.c  | 340 +++++++++++++++++-------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 227 +++++++++++++-------
 arch/powerpc/platforms/powernv/pci.h      |  31 +--
 drivers/vfio/vfio_iommu_spapr_tce.c       |   3 +-
 4 files changed, 360 insertions(+), 241 deletions(-)

-- 
2.5.0.rc3

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH kernel v3 1/9] vfio/spapr: Relax the IOMMU compatibility check
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 2/9] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire Alexey Kardashevskiy
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

With POWER8 + NVLink we are going to have multiple different types of
PHB on the same system, and those PHBs will have different IOMMU ops.
However we only really care about one callback - create_table - so
relax the compatibility check to compare just that callback.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 drivers/vfio/vfio_iommu_spapr_tce.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 0582b72..3054e3f 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -1188,7 +1188,8 @@ static int tce_iommu_attach_group(void *iommu_data,
 			goto unlock_exit;
 		}
 		table_group_tmp = iommu_group_get_iommudata(tcegrp->grp);
-		if (table_group_tmp->ops != table_group->ops) {
+		if (table_group_tmp->ops->create_table !=
+				table_group->ops->create_table) {
 			pr_warn("tce_vfio: Group %d is incompatible with group %d\n",
 					iommu_group_id(iommu_group),
 					iommu_group_id(tcegrp->grp));
-- 
2.5.0.rc3


* [PATCH kernel v3 2/9] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 1/9] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 3/9] powerpc/powernv: Define TCE Kill flags Alexey Kardashevskiy
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

pnv_pci_ioda2_tce_invalidate_entire() in fact invalidates TCEs for
a specific PE rather than the entire cache, so rename it to
pnv_pci_ioda2_tce_invalidate_pe(). A later patch will add a proper
pnv_pci_ioda2_tce_invalidate_entire().

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index c5baaf3..ca307b6 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1811,7 +1811,7 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
 	.get = pnv_tce_get,
 };
 
-static inline void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_ioda_pe *pe)
+static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
 	unsigned long val = (0x4ull << 60) | (pe->pe_number & 0xFF);
@@ -2075,7 +2075,7 @@ static long pnv_pci_ioda2_set_window(struct iommu_table_group *table_group,
 
 	pnv_pci_link_table_and_group(phb->hose->node, num,
 			tbl, &pe->table_group);
-	pnv_pci_ioda2_tce_invalidate_entire(pe);
+	pnv_pci_ioda2_tce_invalidate_pe(pe);
 
 	return 0;
 }
@@ -2219,7 +2219,7 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group,
 	if (ret)
 		pe_warn(pe, "Unmapping failed, ret = %ld\n", ret);
 	else
-		pnv_pci_ioda2_tce_invalidate_entire(pe);
+		pnv_pci_ioda2_tce_invalidate_pe(pe);
 
 	pnv_pci_unlink_table_and_group(table_group->tables[num], table_group);
 
-- 
2.5.0.rc3


* [PATCH kernel v3 3/9] powerpc/powernv: Define TCE Kill flags
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 1/9] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 2/9] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 4/9] powerpc/powernv/npu: TCE Kill helpers cleanup Alexey Kardashevskiy
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

This replaces the magic constants used for the IODA2 TCE Kill register
with macros.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index ca307b6..03be25d 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1811,10 +1811,13 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
 	.get = pnv_tce_get,
 };
 
+#define TCE_KILL_INVAL_PE   PPC_BIT(1)
+#define TCE_KILL_INVAL_TCE  PPC_BIT(2)
+
 static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
-	unsigned long val = (0x4ull << 60) | (pe->pe_number & 0xFF);
+	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
 	struct pnv_phb *phb = pe->phb;
 	struct pnv_ioda_pe *npe;
 	int i;
@@ -1842,7 +1845,7 @@ static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
 	unsigned long start, end, inc;
 
 	/* We'll invalidate DMA address in PE scope */
-	start = 0x2ull << 60;
+	start = TCE_KILL_INVAL_TCE;
 	start |= (pe_number & 0xFF);
 	end = start;
 
-- 
2.5.0.rc3


* [PATCH kernel v3 4/9] powerpc/powernv/npu: TCE Kill helpers cleanup
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (2 preceding siblings ...)
  2016-04-12  8:37 ` [PATCH kernel v3 3/9] powerpc/powernv: Define TCE Kill flags Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 5/9] powerpc/powernv/npu: Use the correct IOMMU page size Alexey Kardashevskiy
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

The NPU PHB's TCE Kill register is exactly the same as on the rest of
POWER8, so let's reuse the existing code for the NPU. The only missing
bit is a helper to reset the entire TCE cache, so this moves that
helper out of the NPU code and renames it.

Since pnv_npu_tce_invalidate() really does invalidate the entire cache,
this uses pnv_pci_ioda2_tce_invalidate_entire() directly for the NPU.
This also adds an explicit comment about the workaround of invalidating
the entire NPU TCE cache.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 41 -------------------------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 29 ++++++++++++++++++----
 arch/powerpc/platforms/powernv/pci.h      |  7 +-----
 3 files changed, 25 insertions(+), 52 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 7229acd..778570c 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -25,8 +25,6 @@
  * Other types of TCE cache invalidation are not functional in the
  * hardware.
  */
-#define TCE_KILL_INVAL_ALL PPC_BIT(0)
-
 static struct pci_dev *get_pci_dev(struct device_node *dn)
 {
 	return PCI_DN(dn)->pcidev;
@@ -161,45 +159,6 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	return pe;
 }
 
-void pnv_npu_tce_invalidate_entire(struct pnv_ioda_pe *npe)
-{
-	struct pnv_phb *phb = npe->phb;
-
-	if (WARN_ON(phb->type != PNV_PHB_NPU ||
-		    !phb->ioda.tce_inval_reg ||
-		    !(npe->flags & PNV_IODA_PE_DEV)))
-		return;
-
-	mb(); /* Ensure previous TCE table stores are visible */
-	__raw_writeq(cpu_to_be64(TCE_KILL_INVAL_ALL),
-		phb->ioda.tce_inval_reg);
-}
-
-void pnv_npu_tce_invalidate(struct pnv_ioda_pe *npe,
-				struct iommu_table *tbl,
-				unsigned long index,
-				unsigned long npages,
-				bool rm)
-{
-	struct pnv_phb *phb = npe->phb;
-
-	/* We can only invalidate the whole cache on NPU */
-	unsigned long val = TCE_KILL_INVAL_ALL;
-
-	if (WARN_ON(phb->type != PNV_PHB_NPU ||
-		    !phb->ioda.tce_inval_reg ||
-		    !(npe->flags & PNV_IODA_PE_DEV)))
-		return;
-
-	mb(); /* Ensure previous TCE table stores are visible */
-	if (rm)
-		__raw_rm_writeq(cpu_to_be64(val),
-		  (__be64 __iomem *) phb->ioda.tce_inval_reg_phys);
-	else
-		__raw_writeq(cpu_to_be64(val),
-			phb->ioda.tce_inval_reg);
-}
-
 void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
 {
 	struct pnv_ioda_pe *gpe;
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 03be25d..a67d51e 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1811,9 +1811,23 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
 	.get = pnv_tce_get,
 };
 
+#define TCE_KILL_INVAL_ALL  PPC_BIT(0)
 #define TCE_KILL_INVAL_PE   PPC_BIT(1)
 #define TCE_KILL_INVAL_TCE  PPC_BIT(2)
 
+void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm)
+{
+	const unsigned long val = TCE_KILL_INVAL_ALL;
+
+	mb(); /* Ensure previous TCE table stores are visible */
+	if (rm)
+		__raw_rm_writeq(cpu_to_be64(val),
+				(__be64 __iomem *)
+				phb->ioda.tce_inval_reg_phys);
+	else
+		__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
+}
+
 static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
@@ -1834,7 +1848,7 @@ static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 			if (!npe || npe->phb->type != PNV_PHB_NPU)
 				continue;
 
-			pnv_npu_tce_invalidate_entire(npe);
+			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
 		}
 }
 
@@ -1883,14 +1897,19 @@ static void pnv_pci_ioda2_tce_invalidate(struct iommu_table *tbl,
 			index, npages);
 
 		if (pe->flags & PNV_IODA_PE_PEER)
-			/* Invalidate PEs using the same TCE table */
+			/*
+			 * The NVLink hardware does not support TCE kill
+			 * per TCE entry so we have to invalidate
+			 * the entire cache for it.
+			 */
 			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
 				npe = pe->peers[i];
-				if (!npe || npe->phb->type != PNV_PHB_NPU)
+				if (!npe || npe->phb->type != PNV_PHB_NPU ||
+						!npe->phb->ioda.tce_inval_reg)
 					continue;
 
-				pnv_npu_tce_invalidate(npe, tbl, index,
-							npages, rm);
+				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
+						rm);
 			}
 	}
 }
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 3f814f3..0b89a4c 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -237,15 +237,10 @@ extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 
 /* Nvlink functions */
-extern void pnv_npu_tce_invalidate_entire(struct pnv_ioda_pe *npe);
-extern void pnv_npu_tce_invalidate(struct pnv_ioda_pe *npe,
-				       struct iommu_table *tbl,
-				       unsigned long index,
-				       unsigned long npages,
-				       bool rm);
 extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
 extern int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enabled);
 extern int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask);
+extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 
 #endif /* __POWERNV_PCI_H */
-- 
2.5.0.rc3


* [PATCH kernel v3 5/9] powerpc/powernv/npu: Use the correct IOMMU page size
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (3 preceding siblings ...)
  2016-04-12  8:37 ` [PATCH kernel v3 4/9] powerpc/powernv/npu: TCE Kill helpers cleanup Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 6/9] powerpc/powernv/npu: Simplify DMA setup Alexey Kardashevskiy
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

This uses the page size from the iommu_table instead of a hard-coded 4K.
This should cause no change in behavior.

While we are here, move bits around to prepare for further rework
which will define and use iommu_table_group_ops.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/platforms/powernv/npu-dma.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 778570c..5bd5fee 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -204,8 +204,7 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
 	struct pnv_phb *phb = npe->phb;
 	struct pci_dev *gpdev;
 	struct pnv_ioda_pe *gpe;
-	void *addr;
-	unsigned int size;
+	struct iommu_table *tbl;
 	int64_t rc;
 
 	/*
@@ -219,11 +218,11 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
 	if (!gpe)
 		return;
 
-	addr = (void *)gpe->table_group.tables[0]->it_base;
-	size = gpe->table_group.tables[0]->it_size << 3;
+	tbl = gpe->table_group.tables[0];
 	rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
-					npe->pe_number, 1, __pa(addr),
-					size, 0x1000);
+					npe->pe_number, 1, __pa(tbl->it_base),
+					tbl->it_size << 3,
+					IOMMU_PAGE_SIZE(tbl));
 	if (rc != OPAL_SUCCESS)
 		pr_warn("%s: Error %lld setting DMA window on PHB#%d-PE#%d\n",
 			__func__, rc, phb->hose->global_number, npe->pe_number);
-- 
2.5.0.rc3


* [PATCH kernel v3 6/9] powerpc/powernv/npu: Simplify DMA setup
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (4 preceding siblings ...)
  2016-04-12  8:37 ` [PATCH kernel v3 5/9] powerpc/powernv/npu: Use the correct IOMMU page size Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 7/9] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

NPU devices are emulated in firmware and mainly used for NVLink
training; there is one NPU device per hardware link. Their DMA/TCE
setup must match that of the GPU, which is connected via both PCIe and
NVLink, so any change to the DMA/TCE setup of the GPU PCIe device needs
to be propagated to the NVLink device: this is what device drivers
expect, and it does not make much sense to do anything else.

This makes the NPU DMA setup explicit.
pnv_npu_dma_set_mask() (referenced from pnv_npu_ioda_controller_ops) is
moved to pci-ioda.c, made static, and now prints a warning, as
dma_set_mask() should never be called on an NPU device: it would not
configure the GPU in any case.

Instead of using PNV_IODA_PE_PEER and peers[] (which the next patch
will remove), we check every PCI device for corresponding NVLink
devices. If there are any, we propagate bypass mode to the NPU devices
just found by calling the setup helper directly (which takes @bypass),
and avoid guessing (i.e. calculating from the DMA mask) whether bypass
is needed on the NPU devices. Since DMA setup happens very rarely, this
will not noticeably slow down booting or VFIO start/stop.

This renames pnv_npu_disable_bypass() to pnv_npu_dma_set_32() to make
it clearer what the function really does, which is programming the
32-bit table address into the TVT ("disabling bypass" would mean
writing zeroes to the TVT).

This removes pnv_npu_dma_set_bypass() from pnv_npu_ioda_fixup() as the
DMA configuration on the NPU does not matter until dma_set_mask() is
called on the GPU, and that call will configure the NPU as well.

This removes the phb->dma_dev_setup initialization for the NPU as
pnv_pci_ioda_dma_dev_setup() is a no-op for it anyway.

This stops using npe->tce_bypass_base as it never changes and values
other than zero are not supported.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alistair Popple <alistair@popple.id.au>
---
Changes:
v2:
* changed first paragraph of the commit log from Alistair comment
* removed npe->tce_bypass_base
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 89 ++++++++++++++-----------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 30 +++++------
 arch/powerpc/platforms/powernv/pci.h      |  3 +-
 3 files changed, 53 insertions(+), 69 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 5bd5fee..bec9267 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -196,10 +196,9 @@ void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
 }
 
 /*
- * For the NPU we want to point the TCE table at the same table as the
- * real PCI device.
+ * Enables 32 bit DMA on NPU.
  */
-static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
+static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 {
 	struct pnv_phb *phb = npe->phb;
 	struct pci_dev *gpdev;
@@ -235,72 +234,62 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
 }
 
 /*
- * Enable/disable bypass mode on the NPU. The NPU only supports one
+ * Enables bypass mode on the NPU. The NPU only supports one
  * window per link, so bypass needs to be explicitly enabled or
  * disabled. Unlike for a PHB3 bypass and non-bypass modes can't be
  * active at the same time.
  */
-int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enable)
+static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 {
 	struct pnv_phb *phb = npe->phb;
 	int64_t rc = 0;
+	phys_addr_t top = memblock_end_of_DRAM();
 
 	if (phb->type != PNV_PHB_NPU || !npe->pdev)
 		return -EINVAL;
 
-	if (enable) {
-		/* Enable the bypass window */
-		phys_addr_t top = memblock_end_of_DRAM();
+	/* Enable the bypass window */
 
-		npe->tce_bypass_base = 0;
-		top = roundup_pow_of_two(top);
-		dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
-			 npe->pe_number);
-		rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
-					npe->pe_number, npe->pe_number,
-					npe->tce_bypass_base, top);
-	} else {
-		/*
-		 * Disable the bypass window by replacing it with the
-		 * TCE32 window.
-		 */
-		pnv_npu_disable_bypass(npe);
-	}
+	top = roundup_pow_of_two(top);
+	dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
+			npe->pe_number);
+	rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
+			npe->pe_number, npe->pe_number,
+			0 /* bypass base */, top);
 
 	return rc;
 }
 
-int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
+void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
 {
-	struct pci_controller *hose = pci_bus_to_host(npdev->bus);
-	struct pnv_phb *phb = hose->private_data;
-	struct pci_dn *pdn = pci_get_pdn(npdev);
-	struct pnv_ioda_pe *npe, *gpe;
-	struct pci_dev *gpdev;
-	uint64_t top;
-	bool bypass = false;
+	int i;
+	struct pnv_phb *phb;
+	struct pci_dn *pdn;
+	struct pnv_ioda_pe *npe;
+	struct pci_dev *npdev;
 
-	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
-		return -ENXIO;
+	for (i = 0; ; ++i) {
+		npdev = pnv_pci_get_npu_dev(gpdev, i);
 
-	/* We only do bypass if it's enabled on the linked device */
-	npe = &phb->ioda.pe_array[pdn->pe_number];
-	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
-	if (!gpe)
-		return -ENODEV;
+		if (!npdev)
+			break;
 
-	if (gpe->tce_bypass_enabled) {
-		top = gpe->tce_bypass_base + memblock_end_of_DRAM() - 1;
-		bypass = (dma_mask >= top);
+		pdn = pci_get_pdn(npdev);
+		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
+			return;
+
+		phb = pci_bus_to_host(npdev->bus)->private_data;
+
+		/* We only do bypass if it's enabled on the linked device */
+		npe = &phb->ioda.pe_array[pdn->pe_number];
+
+		if (bypass) {
+			dev_info(&npdev->dev,
+					"Using 64-bit DMA iommu bypass\n");
+			pnv_npu_dma_set_bypass(npe);
+		} else {
+			dev_info(&npdev->dev, "Using 32-bit DMA via iommu\n");
+			pnv_npu_dma_set_32(npe);
+		}
 	}
-
-	if (bypass)
-		dev_info(&npdev->dev, "Using 64-bit DMA iommu bypass\n");
-	else
-		dev_info(&npdev->dev, "Using 32-bit DMA via iommu\n");
-
-	pnv_npu_dma_set_bypass(npe, bypass);
-	*npdev->dev.dma_mask = dma_mask;
-
-	return 0;
 }
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index a67d51e..272521e 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1640,8 +1640,6 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
 	struct pnv_ioda_pe *pe;
 	uint64_t top;
 	bool bypass = false;
-	struct pci_dev *linked_npu_dev;
-	int i;
 
 	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
 		return -ENODEV;;
@@ -1662,15 +1660,7 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
 	*pdev->dev.dma_mask = dma_mask;
 
 	/* Update peer npu devices */
-	if (pe->flags & PNV_IODA_PE_PEER)
-		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-			if (!pe->peers[i])
-				continue;
-
-			linked_npu_dev = pe->peers[i]->pdev;
-			if (dma_get_mask(&linked_npu_dev->dev) != dma_mask)
-				dma_set_mask(&linked_npu_dev->dev, dma_mask);
-		}
+	pnv_npu_try_dma_set_bypass(pdev, bypass);
 
 	return 0;
 }
@@ -3094,7 +3084,6 @@ static void pnv_npu_ioda_fixup(void)
 			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
 				DMA_BIT_MASK(64);
 			pnv_npu_init_dma_pe(pe);
-			pnv_npu_dma_set_bypass(pe, enable_bypass);
 		}
 	}
 }
@@ -3246,6 +3235,14 @@ static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
        .shutdown = pnv_pci_ioda_shutdown,
 };
 
+static int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
+{
+	dev_err_once(&npdev->dev,
+			"%s operation unsupported for NVLink devices\n",
+			__func__);
+	return -EPERM;
+}
+
 static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
 	.dma_dev_setup = pnv_pci_dma_dev_setup,
 #ifdef CONFIG_PCI_MSI
@@ -3402,9 +3399,6 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
 	/* Setup RID -> PE mapping function */
 	phb->bdfn_to_pe = pnv_ioda_bdfn_to_pe;
 
-	/* Setup TCEs */
-	phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup;
-
 	/* Setup MSI support */
 	pnv_pci_init_ioda_msis(phb);
 
@@ -3417,10 +3411,12 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
 	 */
 	ppc_md.pcibios_fixup = pnv_pci_ioda_fixup;
 
-	if (phb->type == PNV_PHB_NPU)
+	if (phb->type == PNV_PHB_NPU) {
 		hose->controller_ops = pnv_npu_ioda_controller_ops;
-	else
+	} else {
+		phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup;
 		hose->controller_ops = pnv_pci_ioda_controller_ops;
+	}
 
 #ifdef CONFIG_PCI_IOV
 	ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov_resources;
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 0b89a4c..d574a9d 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -239,8 +239,7 @@ extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 /* Nvlink functions */
 extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
-extern int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enabled);
-extern int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask);
+extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 
 #endif /* __POWERNV_PCI_H */
-- 
2.5.0.rc3


* [PATCH kernel v3 7/9] powerpc/powernv/npu: Rework TCE Kill handling
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (5 preceding siblings ...)
  2016-04-12  8:37 ` [PATCH kernel v3 6/9] powerpc/powernv/npu: Simplify DMA setup Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 8/9] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

The pnv_ioda_pe struct keeps an array of peers. At the moment it is
only used to link the GPU and NPU, for two purposes:

1. Accessing the NPU quickly when configuring DMA for the GPU - this
was addressed in the previous patch by removing that use, as DMA setup
is not something the kernel does constantly.

2. Invalidating the TCE cache for the NPU when it is invalidated for
the GPU. The GPU and NPU are in different PEs. There is already a
mechanism to attach multiple iommu_table_groups to the same iommu_table
(used for VFIO), so this patch reuses it here.

This gets rid of the peers[] array and the PNV_IODA_PE_PEER flag as
they are not needed anymore.

While we are here, add TCE cache invalidation after changing TVT.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-By: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 75 +++++++++----------------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 57 +++--------------------
 arch/powerpc/platforms/powernv/pci.h      |  6 ---
 3 files changed, 29 insertions(+), 109 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index bec9267..8e70221 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -136,22 +136,17 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	struct pnv_ioda_pe *pe;
 	struct pci_dn *pdn;
 
-	if (npe->flags & PNV_IODA_PE_PEER) {
-		pe = npe->peers[0];
-		pdev = pe->pdev;
-	} else {
-		pdev = pnv_pci_get_gpu_dev(npe->pdev);
-		if (!pdev)
-			return NULL;
+	pdev = pnv_pci_get_gpu_dev(npe->pdev);
+	if (!pdev)
+		return NULL;
 
-		pdn = pci_get_pdn(pdev);
-		if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
-			return NULL;
+	pdn = pci_get_pdn(pdev);
+	if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
+		return NULL;
 
-		hose = pci_bus_to_host(pdev->bus);
-		phb = hose->private_data;
-		pe = &phb->ioda.pe_array[pdn->pe_number];
-	}
+	hose = pci_bus_to_host(pdev->bus);
+	phb = hose->private_data;
+	pe = &phb->ioda.pe_array[pdn->pe_number];
 
 	if (gpdev)
 		*gpdev = pdev;
@@ -159,42 +154,6 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
 	return pe;
 }
 
-void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
-{
-	struct pnv_ioda_pe *gpe;
-	struct pci_dev *gpdev;
-	int i, avail = -1;
-
-	if (!npe->pdev || !(npe->flags & PNV_IODA_PE_DEV))
-		return;
-
-	gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
-	if (!gpe)
-		return;
-
-	for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-		/* Nothing to do if the PE is already connected. */
-		if (gpe->peers[i] == npe)
-			return;
-
-		if (!gpe->peers[i])
-			avail = i;
-	}
-
-	if (WARN_ON(avail < 0))
-		return;
-
-	gpe->peers[avail] = npe;
-	gpe->flags |= PNV_IODA_PE_PEER;
-
-	/*
-	 * We assume that the NPU devices only have a single peer PE
-	 * (the GPU PCIe device PE).
-	 */
-	npe->peers[0] = gpe;
-	npe->flags |= PNV_IODA_PE_PEER;
-}
-
 /*
  * Enables 32 bit DMA on NPU.
  */
@@ -225,6 +184,13 @@ static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
 	if (rc != OPAL_SUCCESS)
 		pr_warn("%s: Error %lld setting DMA window on PHB#%d-PE#%d\n",
 			__func__, rc, phb->hose->global_number, npe->pe_number);
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(phb, false);
+
+	/* Add the table to the list so its TCE cache will get invalidated */
+	npe->table_group.tables[0] = tbl;
+	pnv_pci_link_table_and_group(phb->hose->node, 0,
+			tbl, &npe->table_group);
 
 	/*
 	 * We don't initialise npu_pe->tce32_table as we always use
@@ -245,10 +211,10 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 	int64_t rc = 0;
 	phys_addr_t top = memblock_end_of_DRAM();
 
-	if (phb->type != PNV_PHB_NPU || !npe->pdev)
-		return -EINVAL;
-
 	/* Enable the bypass window */
+	pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
+			&npe->table_group);
+	npe->table_group.tables[0] = NULL;
 
 	top = roundup_pow_of_two(top);
 	dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
@@ -257,6 +223,9 @@ static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
 			npe->pe_number, npe->pe_number,
 			0 /* bypass base */, top);
 
+	if (rc == OPAL_SUCCESS)
+		pnv_pci_ioda2_tce_invalidate_entire(phb, false);
+
 	return rc;
 }
 
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 272521e..0b80366 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1823,23 +1823,12 @@ static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 	/* 01xb - invalidate TCEs that match the specified PE# */
 	unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
 	struct pnv_phb *phb = pe->phb;
-	struct pnv_ioda_pe *npe;
-	int i;
 
 	if (!phb->ioda.tce_inval_reg)
 		return;
 
 	mb(); /* Ensure above stores are visible */
 	__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
-
-	if (pe->flags & PNV_IODA_PE_PEER)
-		for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-			npe = pe->peers[i];
-			if (!npe || npe->phb->type != PNV_PHB_NPU)
-				continue;
-
-			pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
-		}
 }
 
 static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
@@ -1874,33 +1863,24 @@ static void pnv_pci_ioda2_tce_invalidate(struct iommu_table *tbl,
 	struct iommu_table_group_link *tgl;
 
 	list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
-		struct pnv_ioda_pe *npe;
 		struct pnv_ioda_pe *pe = container_of(tgl->table_group,
 				struct pnv_ioda_pe, table_group);
 		__be64 __iomem *invalidate = rm ?
 			(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
 			pe->phb->ioda.tce_inval_reg;
-		int i;
 
-		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
-			invalidate, tbl->it_page_shift,
-			index, npages);
-
-		if (pe->flags & PNV_IODA_PE_PEER)
+		if (pe->phb->type == PNV_PHB_NPU) {
 			/*
 			 * The NVLink hardware does not support TCE kill
 			 * per TCE entry so we have to invalidate
 			 * the entire cache for it.
 			 */
-			for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
-				npe = pe->peers[i];
-				if (!npe || npe->phb->type != PNV_PHB_NPU ||
-						!npe->phb->ioda.tce_inval_reg)
-					continue;
-
-				pnv_pci_ioda2_tce_invalidate_entire(npe->phb,
-						rm);
-			}
+			pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
+			continue;
+		}
+		pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
+			invalidate, tbl->it_page_shift,
+			index, npages);
 	}
 }
 
@@ -3068,26 +3048,6 @@ static void pnv_pci_ioda_create_dbgfs(void)
 #endif /* CONFIG_DEBUG_FS */
 }
 
-static void pnv_npu_ioda_fixup(void)
-{
-	bool enable_bypass;
-	struct pci_controller *hose, *tmp;
-	struct pnv_phb *phb;
-	struct pnv_ioda_pe *pe;
-
-	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
-		phb = hose->private_data;
-		if (phb->type != PNV_PHB_NPU)
-			continue;
-
-		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
-			enable_bypass = dma_get_mask(&pe->pdev->dev) ==
-				DMA_BIT_MASK(64);
-			pnv_npu_init_dma_pe(pe);
-		}
-	}
-}
-
 static void pnv_pci_ioda_fixup(void)
 {
 	pnv_pci_ioda_setup_PEs();
@@ -3100,9 +3060,6 @@ static void pnv_pci_ioda_fixup(void)
 	eeh_init();
 	eeh_addr_cache_build();
 #endif
-
-	/* Link NPU IODA tables to their PCI devices. */
-	pnv_npu_ioda_fixup();
 }
 
 /*
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index d574a9d..06405fd 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -24,7 +24,6 @@ enum pnv_phb_model {
 #define PNV_IODA_PE_MASTER	(1 << 3)	/* Master PE in compound case	*/
 #define PNV_IODA_PE_SLAVE	(1 << 4)	/* Slave PE in compound case	*/
 #define PNV_IODA_PE_VF		(1 << 5)	/* PE for one VF 		*/
-#define PNV_IODA_PE_PEER	(1 << 6)	/* PE has peers			*/
 
 /* Data associated with a PE, including IOMMU tracking etc.. */
 struct pnv_phb;
@@ -32,9 +31,6 @@ struct pnv_ioda_pe {
 	unsigned long		flags;
 	struct pnv_phb		*phb;
 
-#define PNV_IODA_MAX_PEER_PES	8
-	struct pnv_ioda_pe	*peers[PNV_IODA_MAX_PEER_PES];
-
 	/* A PE can be associated with a single device or an
 	 * entire bus (& children). In the former case, pdev
 	 * is populated, in the later case, pbus is.
@@ -237,8 +233,6 @@ extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 
 /* Nvlink functions */
-extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
-extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
 
-- 
2.5.0.rc3

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH kernel v3 8/9] powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (6 preceding siblings ...)
  2016-04-12  8:37 ` [PATCH kernel v3 7/9] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  8:37 ` [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
  8 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

This exports the debugging helper pe_level_printk() and the corresponding
macros so they can be used in npu-dma.c.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 arch/powerpc/platforms/powernv/pci-ioda.c | 9 +--------
 arch/powerpc/platforms/powernv/pci.h      | 9 +++++++++
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 0b80366..922c74c 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -56,7 +56,7 @@
 
 static void pnv_pci_ioda2_table_free_pages(struct iommu_table *tbl);
 
-static void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
+void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 			    const char *fmt, ...)
 {
 	struct va_format vaf;
@@ -87,13 +87,6 @@ static void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 	va_end(args);
 }
 
-#define pe_err(pe, fmt, ...)					\
-	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
-#define pe_warn(pe, fmt, ...)					\
-	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
-#define pe_info(pe, fmt, ...)					\
-	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
-
 static bool pnv_iommu_bypass_disabled __read_mostly;
 
 static int __init iommu_setup(char *str)
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 06405fd..e117bd8 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -232,6 +232,15 @@ extern void pnv_pci_dma_bus_setup(struct pci_bus *bus);
 extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
 
+extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
+			    const char *fmt, ...);
+#define pe_err(pe, fmt, ...)					\
+	pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
+#define pe_warn(pe, fmt, ...)					\
+	pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
+#define pe_info(pe, fmt, ...)					\
+	pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
+
 /* Nvlink functions */
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
-- 
2.5.0.rc3

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
                   ` (7 preceding siblings ...)
  2016-04-12  8:37 ` [PATCH kernel v3 8/9] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
@ 2016-04-12  8:37 ` Alexey Kardashevskiy
  2016-04-12  9:49   ` kbuild test robot
  2016-04-15  4:40   ` David Gibson
  8 siblings, 2 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-12  8:37 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Alexey Kardashevskiy, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, David Gibson, Gavin Shan,
	Russell Currey

IBM POWER8 NVlink systems come with Tesla K40-ish GPUs, each of which
also has a couple of high-speed links (NVLink). The interface to these
links is exposed as an emulated PCI bridge which is included in the same
IOMMU group as the corresponding GPU.

In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE.

In order to make these links work when the GPU is passed through to a
guest, these bridges need to be passed through as well; otherwise
performance will degrade.

This implements and exports an API to manage NPU state with regard to VFIO;
it replicates iommu_table_group_ops.

This defines a new pnv_pci_ioda2_npu_ops which is assigned to
the IODA2 bridge if there are NPUs for a GPU on the bridge.
The new callbacks call the default IODA2 callbacks plus the new NPU API.
This adds a gpe_table_group_to_npe() helper to find the NPU PE for an IODA2
table_group; it is not expected to fail as the helper is only called
from the pnv_pci_ioda2_npu_ops callbacks.

This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
the GPU's IOMMU group if any are found. It looks for the "ibm,gpu"
property in the device tree, which is a phandle of the corresponding GPU.

This adds an additional loop over PEs in pnv_ioda_setup_dma() because the
main loop skips NPU PEs as they do not have 32-bit DMA segments.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v3:
* moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
* removed hack to make iommu_add_device() work, iommu_group_add_device()
is used instead
* cleanup in gpe_table_group_to_npe_cb()

v2:
* reimplemented to support NPU + GPU in the same group
* merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
"powerpc/powernv/npu: Enable passing through via VFIO" into this patch
---
 arch/powerpc/platforms/powernv/npu-dma.c  | 126 ++++++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci-ioda.c | 105 +++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci.h      |   6 ++
 3 files changed, 237 insertions(+)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 8e70221..7cb9f6a 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -12,6 +12,7 @@
 #include <linux/export.h>
 #include <linux/pci.h>
 #include <linux/memblock.h>
+#include <linux/iommu.h>
 
 #include <asm/iommu.h>
 #include <asm/pnv-pci.h>
@@ -262,3 +263,128 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
 		}
 	}
 }
+
+long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+		struct iommu_table *tbl)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t rc;
+	const unsigned long size = tbl->it_indirect_levels ?
+		tbl->it_level_size : tbl->it_size;
+	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
+	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
+
+	pe_info(npe, "Setting up window#%d %llx..%llx pg=%lx\n", num,
+			start_addr, start_addr + win_size - 1,
+			IOMMU_PAGE_SIZE(tbl));
+
+	/* Ignore @num as there is just one window per NPU */
+	rc = opal_pci_map_pe_dma_window(phb->opal_id,
+			npe->pe_number,
+			npe->pe_number,
+			tbl->it_indirect_levels + 1,
+			__pa(tbl->it_base),
+			size << 3,
+			IOMMU_PAGE_SIZE(tbl));
+	if (rc) {
+		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
+		return rc;
+	}
+
+	pnv_pci_link_table_and_group(phb->hose->node, num,
+			tbl, &npe->table_group);
+	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+
+	return rc;
+}
+
+long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
+{
+	struct pnv_phb *phb = npe->phb;
+	long ret;
+
+	pe_info(npe, "Removing DMA window #%d\n", num);
+
+	/* Ignore @num as there is just one window per NPU */
+	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+			npe->pe_number,
+			0/* levels */, 0/* table address */,
+			0/* table size */, 0/* page size */);
+	if (ret)
+		pe_warn(npe, "Unmapping failed, ret = %ld\n", ret);
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+
+	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
+			&npe->table_group);
+
+	return ret;
+}
+
+/* Switch ownership from platform code to external user (e.g. VFIO) */
+void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t ret;
+
+	if (npe->table_group.tables[0]) {
+		/* Disable 32bit window */
+		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
+				&npe->table_group);
+		npe->table_group.tables[0] = NULL;
+		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+				npe->pe_number,
+				0/* levels */, 0/* table address */,
+				0/* table size */, 0/* page size */);
+	} else {
+		/* Disable bypass */
+		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
+				npe->pe_number, npe->pe_number,
+				0 /* bypass base */, 0);
+	}
+
+	if (ret != OPAL_SUCCESS)
+		pe_err(npe, "Failed to remove DMA window");
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+}
+
+/* Switch ownership from external user (e.g. VFIO) back to core */
+void pnv_npu_release_ownership(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	int64_t ret;
+
+	/* Disable a window (not bypass) as external users do not use bypass */
+	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+			npe->pe_number,
+			0/* levels */, 0/* table address */,
+			0/* table size */, 0/* page size */);
+	if (ret != OPAL_SUCCESS)
+		pe_err(npe, "Failed to remove DMA window");
+	else
+		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+}
+
+struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
+{
+	struct pnv_phb *phb = npe->phb;
+	struct pci_bus *pbus = phb->hose->bus;
+	struct pci_dev *npdev, *gpdev = NULL, *gptmp;
+	struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
+
+	if (!gpe || !gpdev)
+		return NULL;
+
+	list_for_each_entry(npdev, &pbus->devices, bus_list) {
+		gptmp = pnv_pci_get_gpu_dev(npdev);
+
+		if (gptmp != gpdev)
+			continue;
+
+		pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
+		iommu_group_add_device(gpe->table_group.group, &npdev->dev);
+	}
+
+	return gpe;
+}
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 922c74c..95f9e19 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -2273,6 +2273,93 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
 	.take_ownership = pnv_ioda2_take_ownership,
 	.release_ownership = pnv_ioda2_release_ownership,
 };
+
+static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
+{
+	struct pci_controller *hose;
+	struct pnv_phb *phb;
+	struct pnv_ioda_pe **ptmppe = opaque;
+	struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+	struct pci_dn *pdn = pci_get_pdn(pdev);
+
+	if (!pdn || pdn->pe_number == IODA_INVALID_PE)
+		return 0;
+
+	hose = pci_bus_to_host(pdev->bus);
+	phb = hose->private_data;
+	if (phb->type != PNV_PHB_NPU)
+		return 0;
+
+	*ptmppe = &phb->ioda.pe_array[pdn->pe_number];
+
+	return 1;
+}
+
+/*
+ * This returns PE of associated NPU.
+ * This assumes that NPU is in the same IOMMU group with GPU and there is
+ * no other PEs.
+ */
+static struct pnv_ioda_pe *gpe_table_group_to_npe(
+		struct iommu_table_group *table_group)
+{
+	struct pnv_ioda_pe *npe = NULL;
+	int ret = iommu_group_for_each_dev(table_group->group, &npe,
+			gpe_table_group_to_npe_cb);
+
+	BUG_ON(!ret || !npe);
+
+	return npe;
+}
+
+static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
+		int num, struct iommu_table *tbl)
+{
+	long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
+
+	if (ret)
+		return ret;
+
+	ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, tbl);
+	if (ret)
+		pnv_pci_ioda2_unset_window(table_group, num);
+
+	return ret;
+}
+
+static long pnv_pci_ioda2_npu_unset_window(
+		struct iommu_table_group *table_group,
+		int num)
+{
+	long ret = pnv_pci_ioda2_unset_window(table_group, num);
+
+	if (ret)
+		return ret;
+
+	return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
+}
+
+static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
+{
+	pnv_ioda2_take_ownership(table_group);
+	pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
+}
+
+static void pnv_ioda2_npu_release_ownership(
+		struct iommu_table_group *table_group)
+{
+	pnv_npu_release_ownership(gpe_table_group_to_npe(table_group));
+	pnv_ioda2_release_ownership(table_group);
+}
+
+static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
+	.get_table_size = pnv_pci_ioda2_get_table_size,
+	.create_table = pnv_pci_ioda2_create_table,
+	.set_window = pnv_pci_ioda2_npu_set_window,
+	.unset_window = pnv_pci_ioda2_npu_unset_window,
+	.take_ownership = pnv_ioda2_npu_take_ownership,
+	.release_ownership = pnv_ioda2_npu_release_ownership,
+};
 #endif
 
 static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
@@ -3012,6 +3099,7 @@ static void pnv_pci_ioda_setup_DMA(void)
 {
 	struct pci_controller *hose, *tmp;
 	struct pnv_phb *phb;
+	struct pnv_ioda_pe *pe, *gpe;
 
 	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
 		pnv_ioda_setup_dma(hose->private_data);
@@ -3020,6 +3108,23 @@ static void pnv_pci_ioda_setup_DMA(void)
 		phb = hose->private_data;
 		phb->initialized = 1;
 	}
+
+	/*
+	 * Now we have all PHBs discovered, time to add NPU devices to
+	 * the corresponding IOMMU groups.
+	 */
+	list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+		phb = hose->private_data;
+
+		if (phb->type != PNV_PHB_NPU)
+			continue;
+
+		list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
+			gpe = pnv_pci_npu_setup_iommu(pe);
+			if (gpe)
+				gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
+		}
+	}
 }
 
 static void pnv_pci_ioda_create_dbgfs(void)
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index e117bd8..ebc6ee4 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -244,5 +244,11 @@ extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
 /* Nvlink functions */
 extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
 extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
+extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
+extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+		struct iommu_table *tbl);
+extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
+extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
+extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
 
 #endif /* __POWERNV_PCI_H */
-- 
2.5.0.rc3

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-12  8:37 ` [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
@ 2016-04-12  9:49   ` kbuild test robot
  2016-04-15  4:40   ` David Gibson
  1 sibling, 0 replies; 14+ messages in thread
From: kbuild test robot @ 2016-04-12  9:49 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: kbuild-all, linuxppc-dev, Alexey Kardashevskiy, Alistair Popple,
	Gavin Shan, Alex Williamson, David Gibson, Daniel Axtens

[-- Attachment #1: Type: text/plain, Size: 1730 bytes --]

Hi Alexey,

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.6-rc3 next-20160412]
[if your patch is applied to the wrong git tree, please drop us a note to help improving the system]

url:    https://github.com/0day-ci/linux/commits/Alexey-Kardashevskiy/powerpc-powernv-npu-Enable-PCI-pass-through-for-NVLink/20160412-165337
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   arch/powerpc/platforms/powernv/pci-ioda.c: In function 'pnv_pci_ioda_setup_DMA':
>> arch/powerpc/platforms/powernv/pci-ioda.c:3125:29: error: 'pnv_pci_ioda2_npu_ops' undeclared (first use in this function)
        gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
                                ^
   arch/powerpc/platforms/powernv/pci-ioda.c:3125:29: note: each undeclared identifier is reported only once for each function it appears in

vim +/pnv_pci_ioda2_npu_ops +3125 arch/powerpc/platforms/powernv/pci-ioda.c

  3119			if (phb->type != PNV_PHB_NPU)
  3120				continue;
  3121	
  3122			list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
  3123				gpe = pnv_pci_npu_setup_iommu(pe);
  3124				if (gpe)
> 3125					gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
  3126			}
  3127		}
  3128	}

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 22171 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-12  8:37 ` [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
  2016-04-12  9:49   ` kbuild test robot
@ 2016-04-15  4:40   ` David Gibson
  2016-04-18  1:52     ` Alistair Popple
  1 sibling, 1 reply; 14+ messages in thread
From: David Gibson @ 2016-04-15  4:40 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: linuxppc-dev, Alex Williamson, Alistair Popple,
	Benjamin Herrenschmidt, Daniel Axtens, Gavin Shan, Russell Currey

[-- Attachment #1: Type: text/plain, Size: 6142 bytes --]

On Tue, Apr 12, 2016 at 06:37:50PM +1000, Alexey Kardashevskiy wrote:
> IBM POWER8 NVlink systems come with Tesla K40-ish GPUs each of which
> also has a couple of fast speed links (NVLink). The interface to links
> is exposed as an emulated PCI bridge which is included into the same
> IOMMU group as the corresponding GPU.
> 
> In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE.
> 
> In order to make these links work when GPU is passed to the guest,
> these bridges need to be passed as well; otherwise performance will
> degrade.
> 
> This implements and exports API to manage NPU state in regard to VFIO;
> it replicates iommu_table_group_ops.
> 
> This defines a new pnv_pci_ioda2_npu_ops which is assigned to
> the IODA2 bridge if there are NPUs for a GPU on the bridge.
> The new callbacks call the default IODA2 callbacks plus new NPU API.
> This adds a gpe_table_group_to_npe() helper to find NPU PE for the IODA2
> table_group, it is not expected to fail as the helper is only called
> from the pnv_pci_ioda2_npu_ops.
> 
> This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
> the GPU group if any found. The helper uses helpers to look for
> the "ibm,gpu" property in the device tree which is a phandle of
> the corresponding GPU.
> 
> This adds an additional loop over PEs in pnv_ioda_setup_dma() as the main
> loop skips NPU PEs as they do not have 32bit DMA segments.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
> Changes:
> v3:
> * moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
> * removed hack to make iommu_add_device() work, iommu_group_add_device()
> is used instead
> * cleanup in gpe_table_group_to_npe_cb()
> 
> v2:
> * reimplemented to support NPU + GPU in the same group
> * merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
> "powerpc/powernv/npu: Enable passing through via VFIO" into this patch
> ---
>  arch/powerpc/platforms/powernv/npu-dma.c  | 126 ++++++++++++++++++++++++++++++
>  arch/powerpc/platforms/powernv/pci-ioda.c | 105 +++++++++++++++++++++++++
>  arch/powerpc/platforms/powernv/pci.h      |   6 ++
>  3 files changed, 237 insertions(+)
> 
> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> index 8e70221..7cb9f6a 100644
> --- a/arch/powerpc/platforms/powernv/npu-dma.c
> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> @@ -12,6 +12,7 @@
>  #include <linux/export.h>
>  #include <linux/pci.h>
>  #include <linux/memblock.h>
> +#include <linux/iommu.h>
>  
>  #include <asm/iommu.h>
>  #include <asm/pnv-pci.h>
> @@ -262,3 +263,128 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
>  		}
>  	}
>  }
> +
> +long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
> +		struct iommu_table *tbl)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	int64_t rc;
> +	const unsigned long size = tbl->it_indirect_levels ?
> +		tbl->it_level_size : tbl->it_size;
> +	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
> +	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
> +
> +	pe_info(npe, "Setting up window#%d %llx..%llx pg=%lx\n", num,
> +			start_addr, start_addr + win_size - 1,
> +			IOMMU_PAGE_SIZE(tbl));
> +
> +	/* Ignore @num as there is just one window per NPU */
> +	rc = opal_pci_map_pe_dma_window(phb->opal_id,
> +			npe->pe_number,
> +			npe->pe_number,
> +			tbl->it_indirect_levels + 1,
> +			__pa(tbl->it_base),
> +			size << 3,
> +			IOMMU_PAGE_SIZE(tbl));
> +	if (rc) {
> +		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
> +		return rc;
> +	}
> +
> +	pnv_pci_link_table_and_group(phb->hose->node, num,
> +			tbl, &npe->table_group);
> +	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> +
> +	return rc;
> +}
> +
> +long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	long ret;
> +
> +	pe_info(npe, "Removing DMA window #%d\n", num);
> +
> +	/* Ignore @num as there is just one window per NPU */
> +	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> +			npe->pe_number,
> +			0/* levels */, 0/* table address */,
> +			0/* table size */, 0/* page size */);
> +	if (ret)
> +		pe_warn(npe, "Unmapping failed, ret = %ld\n", ret);
> +	else
> +		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> +
> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
> +			&npe->table_group);
> +
> +	return ret;
> +}
> +
> +/* Switch ownership from platform code to external user (e.g. VFIO) */
> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
> +{
> +	struct pnv_phb *phb = npe->phb;
> +	int64_t ret;
> +
> +	if (npe->table_group.tables[0]) {
> +		/* Disable 32bit window */
> +		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> +				&npe->table_group);
> +		npe->table_group.tables[0] = NULL;
> +		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> +				npe->pe_number,
> +				0/* levels */, 0/* table address */,
> +				0/* table size */, 0/* page size */);
> +	} else {
> +		/* Disable bypass */
> +		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
> +				npe->pe_number, npe->pe_number,
> +				0 /* bypass base */, 0);
> +	}

It's not immediately obvious to me why this is an if/else.  I'm
assuming that the way the kernel uses the NPE IOMMU it always either
has a 32-bit DMA window, or it's in bypass mode.  Is that inherent to
the way the hardware works, or just the way Linux uses it?

I'm just worrying if this could open an exploitable hole if the host
kernel ever changes so that it could have bypass windows and TCE
windows simultaneously active.  Is there any way to unconditionally
disable bypass *and* disable any existing DMA window?

Apart from that nit,

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-15  4:40   ` David Gibson
@ 2016-04-18  1:52     ` Alistair Popple
  2016-04-19  1:31       ` Alexey Kardashevskiy
  0 siblings, 1 reply; 14+ messages in thread
From: Alistair Popple @ 2016-04-18  1:52 UTC (permalink / raw)
  To: David Gibson
  Cc: Alexey Kardashevskiy, linuxppc-dev, Alex Williamson,
	Benjamin Herrenschmidt, Daniel Axtens, Gavin Shan, Russell Currey

Hi David,

On Fri, 15 Apr 2016 14:40:20 David Gibson wrote:
> On Tue, Apr 12, 2016 at 06:37:50PM +1000, Alexey Kardashevskiy wrote:
> > IBM POWER8 NVlink systems come with Tesla K40-ish GPUs each of which
> > also has a couple of fast speed links (NVLink). The interface to links
> > is exposed as an emulated PCI bridge which is included into the same
> > IOMMU group as the corresponding GPU.
> > 
> > In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE.
> > 
> > In order to make these links work when GPU is passed to the guest,
> > these bridges need to be passed as well; otherwise performance will
> > degrade.
> > 
> > This implements and exports API to manage NPU state in regard to VFIO;
> > it replicates iommu_table_group_ops.
> > 
> > This defines a new pnv_pci_ioda2_npu_ops which is assigned to
> > the IODA2 bridge if there are NPUs for a GPU on the bridge.
> > The new callbacks call the default IODA2 callbacks plus new NPU API.
> > This adds a gpe_table_group_to_npe() helper to find NPU PE for the IODA2
> > table_group, it is not expected to fail as the helper is only called
> > from the pnv_pci_ioda2_npu_ops.
> > 
> > This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
> > the GPU group if any found. The helper uses helpers to look for
> > the "ibm,gpu" property in the device tree which is a phandle of
> > the corresponding GPU.
> > 
> > This adds an additional loop over PEs in pnv_ioda_setup_dma() as the main
> > loop skips NPU PEs as they do not have 32bit DMA segments.
> > 
> > Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> > ---
> > Changes:
> > v3:
> > * moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
> > * removed hack to make iommu_add_device() work, iommu_group_add_device()
> > is used instead
> > * cleanup in gpe_table_group_to_npe_cb()
> > 
> > v2:
> > * reimplemented to support NPU + GPU in the same group
> > * merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
> > "powerpc/powernv/npu: Enable passing through via VFIO" into this patch
> > ---
> >  arch/powerpc/platforms/powernv/npu-dma.c  | 126 ++++++++++++++++++++++++++++++
> >  arch/powerpc/platforms/powernv/pci-ioda.c | 105 +++++++++++++++++++++++++
> >  arch/powerpc/platforms/powernv/pci.h      |   6 ++
> >  3 files changed, 237 insertions(+)
> > 
> > diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
> > index 8e70221..7cb9f6a 100644
> > --- a/arch/powerpc/platforms/powernv/npu-dma.c
> > +++ b/arch/powerpc/platforms/powernv/npu-dma.c
> > @@ -12,6 +12,7 @@
> >  #include <linux/export.h>
> >  #include <linux/pci.h>
> >  #include <linux/memblock.h>
> > +#include <linux/iommu.h>
> >  
> >  #include <asm/iommu.h>
> >  #include <asm/pnv-pci.h>
> > @@ -262,3 +263,128 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
> >  		}
> >  	}
> >  }
> > +
> > +long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
> > +		struct iommu_table *tbl)
> > +{
> > +	struct pnv_phb *phb = npe->phb;
> > +	int64_t rc;
> > +	const unsigned long size = tbl->it_indirect_levels ?
> > +		tbl->it_level_size : tbl->it_size;
> > +	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
> > +	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
> > +
> > +	pe_info(npe, "Setting up window#%d %llx..%llx pg=%lx\n", num,
> > +			start_addr, start_addr + win_size - 1,
> > +			IOMMU_PAGE_SIZE(tbl));
> > +
> > +	/* Ignore @num as there is just one window per NPU */
> > +	rc = opal_pci_map_pe_dma_window(phb->opal_id,
> > +			npe->pe_number,
> > +			npe->pe_number,
> > +			tbl->it_indirect_levels + 1,
> > +			__pa(tbl->it_base),
> > +			size << 3,
> > +			IOMMU_PAGE_SIZE(tbl));
> > +	if (rc) {
> > +		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
> > +		return rc;
> > +	}
> > +
> > +	pnv_pci_link_table_and_group(phb->hose->node, num,
> > +			tbl, &npe->table_group);
> > +	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> > +
> > +	return rc;
> > +}
> > +
> > +long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
> > +{
> > +	struct pnv_phb *phb = npe->phb;
> > +	long ret;
> > +
> > +	pe_info(npe, "Removing DMA window #%d\n", num);
> > +
> > +	/* Ignore @num as there is just one window per NPU */
> > +	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> > +			npe->pe_number,
> > +			0/* levels */, 0/* table address */,
> > +			0/* table size */, 0/* page size */);
> > +	if (ret)
> > +		pe_warn(npe, "Unmapping failed, ret = %ld\n", ret);
> > +	else
> > +		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
> > +
> > +	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
> > +			&npe->table_group);
> > +
> > +	return ret;
> > +}
> > +
> > +/* Switch ownership from platform code to external user (e.g. VFIO) */
> > +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
> > +{
> > +	struct pnv_phb *phb = npe->phb;
> > +	int64_t ret;
> > +
> > +	if (npe->table_group.tables[0]) {
> > +		/* Disable 32bit window */
> > +		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
> > +				&npe->table_group);
> > +		npe->table_group.tables[0] = NULL;
> > +		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
> > +				npe->pe_number,
> > +				0/* levels */, 0/* table address */,
> > +				0/* table size */, 0/* page size */);
> > +	} else {
> > +		/* Disable bypass */
> > +		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
> > +				npe->pe_number, npe->pe_number,
> > +				0 /* bypass base */, 0);
> > +	}
> 
> It's not immediately obvious to me why this is an if/else.  I'm
> assuming that the way the kernel uses the NPE IOMMU it always either
> has a 32-bit DMA window, or it's in bypass mode.  Is that inherent to
> the way the hardware works, or just the way Linux uses it?

Unlike IODA2, the NPU has only a single TVE/window, meaning the NPU IOMMU is
either disabled, in bypass mode, or has a 32-bit DMA window. Alexey can
comment on the if/else, but I agree that it isn't strictly necessary -
calling either map_pe_dma_window() or map_pe_dma_window_real() with the
arguments in both the if and else cases will zero the TVE, effectively
disabling DMA.

> I'm just worrying if this could open an exploitable hole if the host
> kernel ever changes so that it could have bypass windows and TCE
> windows simultaneously active.  Is there any way to unconditionally
> disable bypass *and* disable any existing DMA window.

I'm not a VFIO expert, but from the perspective of the NPU HW this situation
couldn't exist for the reasons described above. Calling
opal_pci_map_pe_dma_window/_real() with either a zero table size or a zero
PCI memory size will unconditionally disable all DMA windows for that PE# on
the NPU.

- Alistair
 
> Apart from that nit,
> 
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> 
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through
  2016-04-18  1:52     ` Alistair Popple
@ 2016-04-19  1:31       ` Alexey Kardashevskiy
  0 siblings, 0 replies; 14+ messages in thread
From: Alexey Kardashevskiy @ 2016-04-19  1:31 UTC (permalink / raw)
  To: Alistair Popple, David Gibson
  Cc: linuxppc-dev, Alex Williamson, Benjamin Herrenschmidt,
	Daniel Axtens, Gavin Shan, Russell Currey

On 04/18/2016 11:52 AM, Alistair Popple wrote:
> Hi David,
>
> On Fri, 15 Apr 2016 14:40:20 David Gibson wrote:
>> On Tue, Apr 12, 2016 at 06:37:50PM +1000, Alexey Kardashevskiy wrote:
>>> IBM POWER8 NVlink systems come with Tesla K40-ish GPUs each of which
>>> also has a couple of fast speed links (NVLink). The interface to links
>>> is exposed as an emulated PCI bridge which is included into the same
>>> IOMMU group as the corresponding GPU.
>>>
>>> In the kernel, NPUs get a separate PHB of the PNV_PHB_NPU type and a PE.
>>>
>>> In order to make these links work when GPU is passed to the guest,
>>> these bridges need to be passed as well; otherwise performance will
>>> degrade.
>>>
>>> This implements and exports API to manage NPU state in regard to VFIO;
>>> it replicates iommu_table_group_ops.
>>>
>>> This defines a new pnv_pci_ioda2_npu_ops which is assigned to
>>> the IODA2 bridge if there are NPUs for a GPU on the bridge.
>>> The new callbacks call the default IODA2 callbacks plus new NPU API.
>>> This adds a gpe_table_group_to_npe() helper to find NPU PE for the IODA2
>>> table_group, it is not expected to fail as the helper is only called
>>> from the pnv_pci_ioda2_npu_ops.
>>>
>>> This adds a pnv_pci_npu_setup_iommu() helper which adds NPUs to
>>> the GPU group if any found. The helper uses helpers to look for
>>> the "ibm,gpu" property in the device tree which is a phandle of
>>> the corresponding GPU.
>>>
>>> This adds an additional loop over PEs in pnv_ioda_setup_dma() as the main
>>> loop skips NPU PEs as they do not have 32bit DMA segments.
>>>
>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>> ---
>>> Changes:
>>> v3:
>>> * moved NPU-to-GPU IOMMU grouping later after all PHBs are discovered
>>> * removed hack to make iommu_add_device() work, iommu_group_add_device()
>>> is used instead
>>> * cleanup in gpe_table_group_to_npe_cb()
>>>
>>> v2:
>>> * reimplemented to support NPU + GPU in the same group
>>> * merged "powerpc/powernv/npu: Add NPU devices to IOMMU group" and
>>> "powerpc/powernv/npu: Enable passing through via VFIO" into this patch
>>> ---
>>>   arch/powerpc/platforms/powernv/npu-dma.c  | 126 ++++++++++++++++++++++++++++++
>>>   arch/powerpc/platforms/powernv/pci-ioda.c | 105 +++++++++++++++++++++++++
>>>   arch/powerpc/platforms/powernv/pci.h      |   6 ++
>>>   3 files changed, 237 insertions(+)
>>>
>>> diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
>>> index 8e70221..7cb9f6a 100644
>>> --- a/arch/powerpc/platforms/powernv/npu-dma.c
>>> +++ b/arch/powerpc/platforms/powernv/npu-dma.c
>>> @@ -12,6 +12,7 @@
>>>   #include <linux/export.h>
>>>   #include <linux/pci.h>
>>>   #include <linux/memblock.h>
>>> +#include <linux/iommu.h>
>>>
>>>   #include <asm/iommu.h>
>>>   #include <asm/pnv-pci.h>
>>> @@ -262,3 +263,128 @@ void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
>>>   		}
>>>   	}
>>>   }
>>> +
>>> +long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
>>> +		struct iommu_table *tbl)
>>> +{
>>> +	struct pnv_phb *phb = npe->phb;
>>> +	int64_t rc;
>>> +	const unsigned long size = tbl->it_indirect_levels ?
>>> +		tbl->it_level_size : tbl->it_size;
>>> +	const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
>>> +	const __u64 win_size = tbl->it_size << tbl->it_page_shift;
>>> +
>>> +	pe_info(npe, "Setting up window#%d %llx..%llx pg=%lx\n", num,
>>> +			start_addr, start_addr + win_size - 1,
>>> +			IOMMU_PAGE_SIZE(tbl));
>>> +
>>> +	/* Ignore @num as there is just one window per NPU */
>>> +	rc = opal_pci_map_pe_dma_window(phb->opal_id,
>>> +			npe->pe_number,
>>> +			npe->pe_number,
>>> +			tbl->it_indirect_levels + 1,
>>> +			__pa(tbl->it_base),
>>> +			size << 3,
>>> +			IOMMU_PAGE_SIZE(tbl));
>>> +	if (rc) {
>>> +		pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
>>> +		return rc;
>>> +	}
>>> +
>>> +	pnv_pci_link_table_and_group(phb->hose->node, num,
>>> +			tbl, &npe->table_group);
>>> +	pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
>>> +
>>> +	return rc;
>>> +}
>>> +
>>> +long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
>>> +{
>>> +	struct pnv_phb *phb = npe->phb;
>>> +	long ret;
>>> +
>>> +	pe_info(npe, "Removing DMA window #%d\n", num);
>>> +
>>> +	/* Ignore @num as there is just one window per NPU */
>>> +	ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
>>> +			npe->pe_number,
>>> +			0/* levels */, 0/* table address */,
>>> +			0/* table size */, 0/* page size */);
>>> +	if (ret)
>>> +		pe_warn(npe, "Unmapping failed, ret = %ld\n", ret);
>>> +	else
>>> +		pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
>>> +
>>> +	pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
>>> +			&npe->table_group);
>>> +
>>> +	return ret;
>>> +}
>>> +
>>> +/* Switch ownership from platform code to external user (e.g. VFIO) */
>>> +void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
>>> +{
>>> +	struct pnv_phb *phb = npe->phb;
>>> +	int64_t ret;
>>> +
>>> +	if (npe->table_group.tables[0]) {
>>> +		/* Disable 32bit window */
>>> +		pnv_pci_unlink_table_and_group(npe->table_group.tables[0],
>>> +				&npe->table_group);
>>> +		npe->table_group.tables[0] = NULL;
>>> +		ret = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
>>> +				npe->pe_number,
>>> +				0/* levels */, 0/* table address */,
>>> +				0/* table size */, 0/* page size */);
>>> +	} else {
>>> +		/* Disable bypass */
>>> +		ret = opal_pci_map_pe_dma_window_real(phb->opal_id,
>>> +				npe->pe_number, npe->pe_number,
>>> +				0 /* bypass base */, 0);
>>> +	}
>>
>> It's not immediately obvious to me why this is an if/else.  I'm
>> assuming that the way the kernel uses the NPE IOMMU it always either
>> has a 32-bit DMA window, or it's in bypass mode.  Is that inherent to
>> the way the hardware works, or just the way Linux uses it?
>
> Unlike IODA2, the NPU has only a single TVE/window, meaning the NPU IOMMU
> is either disabled, in bypass mode, or has a 32-bit DMA window.


... or has a 64-bit DMA window.


> Alexey can comment on
> the if/else, but I agree that it isn't strictly necessary - calling either
> map_pe_dma_window() or map_pe_dma_window_real() with the arguments in both
> the if and else cases will zero the TVE, effectively disabling DMA.

Right, OPAL does essentially the same thing when I disable DMA (window or 
bypass).


>> I'm just worrying if this could open an exploitable hole if the host
>> kernel ever changes so that it could have bypass windows and TCE
>> windows simultaneously active.  Is there any way to unconditionally
>> disable bypass *and* disable any existing DMA window.
>
> I'm not a VFIO expert, but from the perspective of the NPU HW this situation
> couldn't exist for the reasons described above. Calling
> opal_pci_map_pe_dma_window/_real() with either a zero table size or a zero
> PCI memory size will unconditionally disable all DMA windows for that PE# on
> the NPU.
>
> - Alistair
>
>> Apart from that nit,
>>
>> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>


Thanks!

However, there are still issues with the patchset which I am planning to fix,
and at least one bug with DMA continuing after the GPU was reset; once I
figure out what is going on, I'll respin.



-- 
Alexey

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2016-04-19  1:31 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-12  8:37 [PATCH kernel v3 0/9] powerpc/powernv/npu: Enable PCI pass through for NVLink Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 1/9] vfio/spapr: Relax the IOMMU compatibility check Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 2/9] powerpc/powernv: Rename pnv_pci_ioda2_tce_invalidate_entire Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 3/9] powerpc/powernv: Define TCE Kill flags Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 4/9] powerpc/powernv/npu: TCE Kill helpers cleanup Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 5/9] powerpc/powernv/npu: Use the correct IOMMU page size Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 6/9] powerpc/powernv/npu: Simplify DMA setup Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 7/9] powerpc/powernv/npu: Rework TCE Kill handling Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 8/9] powerpc/powernv/ioda2: Export debug helper pe_level_printk() Alexey Kardashevskiy
2016-04-12  8:37 ` [PATCH kernel v3 9/9] powerpc/powernv/npu: Enable NVLink pass through Alexey Kardashevskiy
2016-04-12  9:49   ` kbuild test robot
2016-04-15  4:40   ` David Gibson
2016-04-18  1:52     ` Alistair Popple
2016-04-19  1:31       ` Alexey Kardashevskiy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).