From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: linuxppc-dev@lists.ozlabs.org
Cc: "Jose Ricardo Ziviani" <joserz@linux.ibm.com>,
	"Alexey Kardashevskiy" <aik@ozlabs.ru>,
	"Alistair Popple" <alistair@popple.id.au>,
	"Daniel Henrique Barboza" <danielhb413@gmail.com>,
	"Alex Williamson" <alex.williamson@redhat.com>,
	kvm-ppc@vger.kernel.org, "Sam Bobroff" <sbobroff@linux.ibm.com>,
	"Piotr Jaroszynski" <pjaroszynski@nvidia.com>,
	"Oliver O'Halloran" <oohall@gmail.com>,
	"Andrew Donnellan" <andrew.donnellan@au1.ibm.com>,
	"Leonardo Augusto Guimarães Garcia" <lagarcia@br.ibm.com>,
	"Reza Arbab" <arbab@linux.ibm.com>,
	"David Gibson" <david@gibson.dropbear.id.au>
Subject: [PATCH kernel v4 06/19] powerpc/pseries/iommu: Use memory@ nodes in max RAM address calculation
Date: Fri, 23 Nov 2018 16:52:51 +1100
Message-ID: <20181123055304.25116-7-aik@ozlabs.ru>
In-Reply-To: <20181123055304.25116-1-aik@ozlabs.ru>

We might have memory@ nodes with the "linux,usable-memory" property set
to zero (for example, to replicate powernv's behaviour for GPU coherent
memory). Such memory needs extra initialization before use, but once
onlined it is usable like normal RAM, so the pseries platform will try
mapping it for DMA. The DMA window therefore needs to cover those memory
regions as well; if it cannot, memory onlining fails.

This walks through the memory@ nodes to find the highest RAM address so
that a huge DMA window can cover that address too, in case this memory
gets onlined later.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* use of_read_number() directly instead of a cut-n-pasted read_n_cells()
  (see the sketch below)
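
For reference, of_read_number() just folds big-endian cells into a u64,
which is what read_n_cells() open-coded; its definition in
include/linux/of.h is roughly:

	static inline u64 of_read_number(const __be32 *cell, int size)
	{
		u64 r = 0;
		for (; size--; cell++)
			r = (r << 32) | be32_to_cpu(*cell);
		return r;
	}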
---
 arch/powerpc/platforms/pseries/iommu.c | 34 +++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 06f0296..7da74b5 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -964,6 +964,38 @@ struct failed_ddw_pdn {
 
 static LIST_HEAD(failed_ddw_pdn_list);
 
+static phys_addr_t ddw_memory_hotplug_max(void)
+{
+	phys_addr_t max_addr = memory_hotplug_max();
+	struct device_node *memory;
+
+	for_each_node_by_type(memory, "memory") {
+		unsigned long start, size;
+		int ranges, n_mem_addr_cells, n_mem_size_cells, len;
+		const __be32 *memcell_buf;
+
+		memcell_buf = of_get_property(memory, "reg", &len);
+		if (!memcell_buf || len <= 0)
+			continue;
+
+		n_mem_addr_cells = of_n_addr_cells(memory);
+		n_mem_size_cells = of_n_size_cells(memory);
+
+	/* number of (addr, size) ranges in "reg"; only the first is read */
+		ranges = (len >> 2) / (n_mem_addr_cells + n_mem_size_cells);
+
+		/* these are order-sensitive, and modify the buffer pointer */
+		start = of_read_number(memcell_buf, n_mem_addr_cells);
+		memcell_buf += n_mem_addr_cells;
+		size = of_read_number(memcell_buf, n_mem_size_cells);
+		memcell_buf += n_mem_size_cells;
+
+		max_addr = max_t(phys_addr_t, max_addr, start + size);
+	}
+
+	return max_addr;
+}
+
 /*
  * If the PE supports dynamic dma windows, and there is space for a table
  * that can map all pages in a linear offset, then setup such a table,
@@ -1053,7 +1085,7 @@ static u64 enable_ddw(struct pci_dev *dev, struct device_node *pdn)
 	}
 	/* verify the window * number of ptes will map the partition */
 	/* check largest block * page size > max memory hotplug addr */
-	max_addr = memory_hotplug_max();
+	max_addr = ddw_memory_hotplug_max();
 	if (query.largest_available_block < (max_addr >> page_shift)) {
 		dev_dbg(&dev->dev, "can't map partition max 0x%llx with %u "
 			  "%llu-sized pages\n", max_addr,  query.largest_available_block,
-- 
2.17.1

