linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] cma: factor out allocation logic from __cma_declare_contiguous_nid
@ 2025-07-03 18:47 Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Mike Rapoport @ 2025-07-03 18:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
	Pratyush Yadav, linux-kernel, linux-mm

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

Hi,

We've discussed earlier [1] that HIGHMEM-related logic is spread all
over __cma_declare_contiguous_nid().

These patches factor that logic out into helper functions.
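
The end state, condensed from the patches below (error-path details
trimmed), is that __cma_declare_contiguous_nid() keeps the argument
sanitization and delegates the actual reservation:

	/* Reserve memory */
	if (fixed) {
		ret = cma_fixed_reserve(base, size);
		if (ret)
			return ret;
	} else {
		base = cma_alloc_mem(base, size, alignment, limit, nid);
		if (!base)
			return -ENOMEM;

		/* the reserved range is not mapped, hide it from kmemleak */
		kmemleak_ignore_phys(base);
	}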

v2 changes:
* use already declared 'ret' rather than 'err' (Oscar)
* rather than factoring out only the HIGHMEM allocation, move all of
  the allocation logic to a helper (Oscar and David)
* add Acks, thanks!

v1: https://lore.kernel.org/all/20250702173605.2198924-1-rppt@kernel.org

[1] https://lore.kernel.org/all/aCw9mpmhx9SrL8Oy@localhost.localdomain

Mike Rapoport (Microsoft) (3):
  cma: move __cma_declare_contiguous_nid() before its usage
  cma: split reservation of fixed area into a helper function
  cma: move memory allocation to a helper function

 mm/cma.c | 312 +++++++++++++++++++++++++++++--------------------------
 1 file changed, 162 insertions(+), 150 deletions(-)


base-commit: 86731a2a651e58953fc949573895f2fa6d456841
-- 
2.47.2




* [PATCH v2 1/3] cma: move __cma_declare_contiguous_nid() before its usage
  2025-07-03 18:47 [PATCH v2 0/3] cma: factor out allocation logic from __cma_declare_contiguous_nid Mike Rapoport
@ 2025-07-03 18:47 ` Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 2/3] cma: split reservation of fixed area into a helper function Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 3/3] cma: move memory allocation to " Mike Rapoport
  2 siblings, 0 replies; 6+ messages in thread
From: Mike Rapoport @ 2025-07-03 18:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
	Pratyush Yadav, linux-kernel, linux-mm

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

and kill the static forward declaration
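
The pattern, as a minimal sketch (helper() and caller() are
illustrative names, not the mm/cma.c code):

	/* Before: the definition follows its first caller, so a
	 * static forward declaration is required. */
	static int helper(void);

	static int caller(void)
	{
		return helper();
	}

	static int helper(void)
	{
		return 0;
	}

	/* After: define helper() before caller() and the forward
	 * declaration can be dropped -- which is what this patch
	 * does for __cma_declare_contiguous_nid(). */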

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/cma.c | 294 +++++++++++++++++++++++++++----------------------------
 1 file changed, 144 insertions(+), 150 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 397567883a10..9bf95f8f0f33 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -35,12 +35,6 @@
 struct cma cma_areas[MAX_CMA_AREAS];
 unsigned int cma_area_count;
 
-static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
-			phys_addr_t size, phys_addr_t limit,
-			phys_addr_t alignment, unsigned int order_per_bit,
-			bool fixed, const char *name, struct cma **res_cma,
-			int nid);
-
 phys_addr_t cma_get_base(const struct cma *cma)
 {
 	WARN_ON_ONCE(cma->nranges != 1);
@@ -358,6 +352,150 @@ static void __init list_insert_sorted(
 	}
 }
 
+static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
+			phys_addr_t size, phys_addr_t limit,
+			phys_addr_t alignment, unsigned int order_per_bit,
+			bool fixed, const char *name, struct cma **res_cma,
+			int nid)
+{
+	phys_addr_t memblock_end = memblock_end_of_DRAM();
+	phys_addr_t highmem_start, base = *basep;
+	int ret;
+
+	/*
+	 * We can't use __pa(high_memory) directly, since high_memory
+	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+	 * complain. Find the boundary by adding one to the last valid
+	 * address.
+	 */
+	if (IS_ENABLED(CONFIG_HIGHMEM))
+		highmem_start = __pa(high_memory - 1) + 1;
+	else
+		highmem_start = memblock_end_of_DRAM();
+	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
+		__func__, &size, &base, &limit, &alignment);
+
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	if (alignment && !is_power_of_2(alignment))
+		return -EINVAL;
+
+	if (!IS_ENABLED(CONFIG_NUMA))
+		nid = NUMA_NO_NODE;
+
+	/* Sanitise input arguments. */
+	alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
+	if (fixed && base & (alignment - 1)) {
+		pr_err("Region at %pa must be aligned to %pa bytes\n",
+			&base, &alignment);
+		return -EINVAL;
+	}
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+
+	if (!base)
+		fixed = false;
+
+	/* size should be aligned with order_per_bit */
+	if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
+		return -EINVAL;
+
+	/*
+	 * If allocating at a fixed base the request region must not cross the
+	 * low/high memory boundary.
+	 */
+	if (fixed && base < highmem_start && base + size > highmem_start) {
+		pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+			&base, &highmem_start);
+		return -EINVAL;
+	}
+
+	/*
+	 * If the limit is unspecified or above the memblock end, its effective
+	 * value will be the memblock end. Set it explicitly to simplify further
+	 * checks.
+	 */
+	if (limit == 0 || limit > memblock_end)
+		limit = memblock_end;
+
+	if (base + size > limit) {
+		pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
+			&size, &base, &limit);
+		return -EINVAL;
+	}
+
+	/* Reserve memory */
+	if (fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			return -EBUSY;
+		}
+	} else {
+		phys_addr_t addr = 0;
+
+		/*
+		 * If there is enough memory, try a bottom-up allocation first.
+		 * It will place the new cma area close to the start of the node
+		 * and guarantee that the compaction is moving pages out of the
+		 * cma area and not into it.
+		 * Avoid using first 4GB to not interfere with constrained zones
+		 * like DMA/DMA32.
+		 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
+			memblock_set_bottom_up(true);
+			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
+							limit, nid, true);
+			memblock_set_bottom_up(false);
+		}
+#endif
+
+		/*
+		 * All pages in the reserved area must come from the same zone.
+		 * If the requested region crosses the low/high memory boundary,
+		 * try allocating from high memory first and fall back to low
+		 * memory in case of failure.
+		 */
+		if (!addr && base < highmem_start && limit > highmem_start) {
+			addr = memblock_alloc_range_nid(size, alignment,
+					highmem_start, limit, nid, true);
+			limit = highmem_start;
+		}
+
+		if (!addr) {
+			addr = memblock_alloc_range_nid(size, alignment, base,
+					limit, nid, true);
+			if (!addr)
+				return -ENOMEM;
+		}
+
+		/*
+		 * kmemleak scans/reads tracked objects for pointers to other
+		 * objects but this address isn't mapped and accessible
+		 */
+		kmemleak_ignore_phys(addr);
+		base = addr;
+	}
+
+	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
+	if (ret) {
+		memblock_phys_free(base, size);
+		return ret;
+	}
+
+	(*res_cma)->nid = nid;
+	*basep = base;
+
+	return 0;
+}
+
 /*
  * Create CMA areas with a total size of @total_size. A normal allocation
  * for one area is tried first. If that fails, the biggest memblock
@@ -593,150 +731,6 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 	return ret;
 }
 
-static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
-			phys_addr_t size, phys_addr_t limit,
-			phys_addr_t alignment, unsigned int order_per_bit,
-			bool fixed, const char *name, struct cma **res_cma,
-			int nid)
-{
-	phys_addr_t memblock_end = memblock_end_of_DRAM();
-	phys_addr_t highmem_start, base = *basep;
-	int ret;
-
-	/*
-	 * We can't use __pa(high_memory) directly, since high_memory
-	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
-	 * complain. Find the boundary by adding one to the last valid
-	 * address.
-	 */
-	if (IS_ENABLED(CONFIG_HIGHMEM))
-		highmem_start = __pa(high_memory - 1) + 1;
-	else
-		highmem_start = memblock_end_of_DRAM();
-	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
-		__func__, &size, &base, &limit, &alignment);
-
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	if (alignment && !is_power_of_2(alignment))
-		return -EINVAL;
-
-	if (!IS_ENABLED(CONFIG_NUMA))
-		nid = NUMA_NO_NODE;
-
-	/* Sanitise input arguments. */
-	alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
-	if (fixed && base & (alignment - 1)) {
-		pr_err("Region at %pa must be aligned to %pa bytes\n",
-			&base, &alignment);
-		return -EINVAL;
-	}
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	if (!base)
-		fixed = false;
-
-	/* size should be aligned with order_per_bit */
-	if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
-		return -EINVAL;
-
-	/*
-	 * If allocating at a fixed base the request region must not cross the
-	 * low/high memory boundary.
-	 */
-	if (fixed && base < highmem_start && base + size > highmem_start) {
-		pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
-			&base, &highmem_start);
-		return -EINVAL;
-	}
-
-	/*
-	 * If the limit is unspecified or above the memblock end, its effective
-	 * value will be the memblock end. Set it explicitly to simplify further
-	 * checks.
-	 */
-	if (limit == 0 || limit > memblock_end)
-		limit = memblock_end;
-
-	if (base + size > limit) {
-		pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
-			&size, &base, &limit);
-		return -EINVAL;
-	}
-
-	/* Reserve memory */
-	if (fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			return -EBUSY;
-		}
-	} else {
-		phys_addr_t addr = 0;
-
-		/*
-		 * If there is enough memory, try a bottom-up allocation first.
-		 * It will place the new cma area close to the start of the node
-		 * and guarantee that the compaction is moving pages out of the
-		 * cma area and not into it.
-		 * Avoid using first 4GB to not interfere with constrained zones
-		 * like DMA/DMA32.
-		 */
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
-		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
-			memblock_set_bottom_up(true);
-			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
-							limit, nid, true);
-			memblock_set_bottom_up(false);
-		}
-#endif
-
-		/*
-		 * All pages in the reserved area must come from the same zone.
-		 * If the requested region crosses the low/high memory boundary,
-		 * try allocating from high memory first and fall back to low
-		 * memory in case of failure.
-		 */
-		if (!addr && base < highmem_start && limit > highmem_start) {
-			addr = memblock_alloc_range_nid(size, alignment,
-					highmem_start, limit, nid, true);
-			limit = highmem_start;
-		}
-
-		if (!addr) {
-			addr = memblock_alloc_range_nid(size, alignment, base,
-					limit, nid, true);
-			if (!addr)
-				return -ENOMEM;
-		}
-
-		/*
-		 * kmemleak scans/reads tracked objects for pointers to other
-		 * objects but this address isn't mapped and accessible
-		 */
-		kmemleak_ignore_phys(addr);
-		base = addr;
-	}
-
-	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
-	if (ret) {
-		memblock_phys_free(base, size);
-		return ret;
-	}
-
-	(*res_cma)->nid = nid;
-	*basep = base;
-
-	return 0;
-}
-
 static void cma_debug_show_areas(struct cma *cma)
 {
 	unsigned long next_zero_bit, next_set_bit, nr_zero;
-- 
2.47.2




* [PATCH v2 2/3] cma: split reservation of fixed area into a helper function
  2025-07-03 18:47 [PATCH v2 0/3] cma: factor out allocation logic from __cma_declare_contiguous_nid Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
@ 2025-07-03 18:47 ` Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 3/3] cma: move memory allocation to " Mike Rapoport
  2 siblings, 0 replies; 6+ messages in thread
From: Mike Rapoport @ 2025-07-03 18:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
	Pratyush Yadav, linux-kernel, linux-mm

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

Move the check that verifies that the reservation of a fixed area does
not cross the HIGHMEM boundary, along with the actual memblock_reserve()
call, into a helper function.

This makes the code more readable and decouples the logic related to
CONFIG_HIGHMEM from the core functionality of
__cma_declare_contiguous_nid().
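
For illustration, on a hypothetical 32-bit system where high memory
starts at 0x38000000 (the addresses and sizes here are made up), the
helper would behave as follows:

	/* Crosses the low/high memory boundary: rejected with -EINVAL. */
	ret = cma_fixed_reserve(0x37e00000, SZ_8M);

	/* Entirely in low memory: reserved via memblock_reserve(). */
	ret = cma_fixed_reserve(0x30000000, SZ_8M);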

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/cma.c | 40 +++++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 13 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 9bf95f8f0f33..40986722f2e2 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -352,6 +352,30 @@ static void __init list_insert_sorted(
 	}
 }
 
+static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
+{
+	if (IS_ENABLED(CONFIG_HIGHMEM)) {
+		phys_addr_t highmem_start = __pa(high_memory - 1) + 1;
+
+		/*
+		 * If allocating at a fixed base the request region must not
+		 * cross the low/high memory boundary.
+		 */
+		if (base < highmem_start && base + size > highmem_start) {
+			pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+			       &base, &highmem_start);
+			return -EINVAL;
+		}
+	}
+
+	if (memblock_is_region_reserved(base, size) ||
+	    memblock_reserve(base, size) < 0) {
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
 static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
 			phys_addr_t size, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
@@ -407,15 +431,6 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
 	if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
 		return -EINVAL;
 
-	/*
-	 * If allocating at a fixed base the request region must not cross the
-	 * low/high memory boundary.
-	 */
-	if (fixed && base < highmem_start && base + size > highmem_start) {
-		pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
-			&base, &highmem_start);
-		return -EINVAL;
-	}
 
 	/*
 	 * If the limit is unspecified or above the memblock end, its effective
@@ -433,10 +448,9 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
 
 	/* Reserve memory */
 	if (fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			return -EBUSY;
-		}
+		ret = cma_fixed_reserve(base, size);
+		if (ret)
+			return ret;
 	} else {
 		phys_addr_t addr = 0;
 
-- 
2.47.2




* [PATCH v2 3/3] cma: move memory allocation to a helper function
  2025-07-03 18:47 [PATCH v2 0/3] cma: factor out allocation logic from __cma_declare_contiguous_nid Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
  2025-07-03 18:47 ` [PATCH v2 2/3] cma: split reservation of fixed area into a helper function Mike Rapoport
@ 2025-07-03 18:47 ` Mike Rapoport
  2025-07-03 18:52   ` David Hildenbrand
  2025-07-03 18:54   ` Oscar Salvador
  2 siblings, 2 replies; 6+ messages in thread
From: Mike Rapoport @ 2025-07-03 18:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
	Pratyush Yadav, linux-kernel, linux-mm

From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>

__cma_declare_contiguous_nid() tries to allocate memory in several ways:
* on systems with a 64-bit physical address space and enough memory, it
  first attempts to allocate memory just above 4GiB
* if that fails, on systems with HIGHMEM the next attempt is from high
  memory
* and at last, if none of the previous attempts succeeded, or none was
  even tried because of an incompatible configuration, the memory is
  allocated anywhere within the specified limits.

Move all the allocation logic to a helper function to make these steps more
obvious.
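
Condensed, the fallback ladder looks like this (a sketch of the
cma_alloc_mem() helper added below; the real code uses #ifdef for step 1,
toggles the memblock bottom-up mode, and narrows 'limit' after the high
memory attempt):

	phys_addr_t addr = 0;

	/* 1: bottom-up above 4GiB, keeping clear of DMA/DMA32 zones */
	if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT) && limit >= SZ_4G + size)
		addr = memblock_alloc_range_nid(size, align, SZ_4G,
						limit, nid, true);

	/* 2: high memory, so low memory is not consumed needlessly */
	if (!addr && base < highmem_start && limit > highmem_start)
		addr = memblock_alloc_range_nid(size, align, highmem_start,
						limit, nid, true);

	/* 3: anywhere within [base, limit) */
	if (!addr)
		addr = memblock_alloc_range_nid(size, align, base,
						limit, nid, true);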

Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 mm/cma.c | 104 +++++++++++++++++++++++++++++--------------------------
 1 file changed, 54 insertions(+), 50 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 40986722f2e2..38876ccc07cf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -376,6 +376,55 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
 	return 0;
 }
 
+static phys_addr_t __init cma_alloc_mem(phys_addr_t base, phys_addr_t size,
+			phys_addr_t align, phys_addr_t limit, int nid)
+{
+	phys_addr_t addr = 0;
+
+	/*
+	 * If there is enough memory, try a bottom-up allocation first.
+	 * It will place the new cma area close to the start of the node
+	 * and guarantee that the compaction is moving pages out of the
+	 * cma area and not into it.
+	 * Avoid using first 4GB to not interfere with constrained zones
+	 * like DMA/DMA32.
+	 */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+	if (!memblock_bottom_up() && limit >= SZ_4G + size) {
+		memblock_set_bottom_up(true);
+		addr = memblock_alloc_range_nid(size, align, SZ_4G, limit,
+						nid, true);
+		memblock_set_bottom_up(false);
+	}
+#endif
+
+	/*
+	 * On systems with HIGHMEM try allocating from there before consuming
+	 * memory in lower zones.
+	 */
+	if (!addr && IS_ENABLED(CONFIG_HIGHMEM)) {
+		phys_addr_t highmem = __pa(high_memory - 1) + 1;
+
+		/*
+		 * All pages in the reserved area must come from the same zone.
+		 * If the requested region crosses the low/high memory boundary,
+		 * try allocating from high memory first and fall back to low
+		 * memory in case of failure.
+		 */
+		if (base < highmem && limit > highmem) {
+			addr = memblock_alloc_range_nid(size, align, highmem,
+							limit, nid, true);
+			limit = highmem;
+		}
+	}
+
+	if (!addr)
+		addr = memblock_alloc_range_nid(size, align, base, limit, nid,
+						true);
+
+	return addr;
+}
+
 static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
 			phys_addr_t size, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
@@ -383,19 +432,9 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
 			int nid)
 {
 	phys_addr_t memblock_end = memblock_end_of_DRAM();
-	phys_addr_t highmem_start, base = *basep;
+	phys_addr_t base = *basep;
 	int ret;
 
-	/*
-	 * We can't use __pa(high_memory) directly, since high_memory
-	 * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
-	 * complain. Find the boundary by adding one to the last valid
-	 * address.
-	 */
-	if (IS_ENABLED(CONFIG_HIGHMEM))
-		highmem_start = __pa(high_memory - 1) + 1;
-	else
-		highmem_start = memblock_end_of_DRAM();
 	pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
 		__func__, &size, &base, &limit, &alignment);
 
@@ -452,50 +491,15 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
 		if (ret)
 			return ret;
 	} else {
-		phys_addr_t addr = 0;
-
-		/*
-		 * If there is enough memory, try a bottom-up allocation first.
-		 * It will place the new cma area close to the start of the node
-		 * and guarantee that the compaction is moving pages out of the
-		 * cma area and not into it.
-		 * Avoid using first 4GB to not interfere with constrained zones
-		 * like DMA/DMA32.
-		 */
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
-		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
-			memblock_set_bottom_up(true);
-			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
-							limit, nid, true);
-			memblock_set_bottom_up(false);
-		}
-#endif
-
-		/*
-		 * All pages in the reserved area must come from the same zone.
-		 * If the requested region crosses the low/high memory boundary,
-		 * try allocating from high memory first and fall back to low
-		 * memory in case of failure.
-		 */
-		if (!addr && base < highmem_start && limit > highmem_start) {
-			addr = memblock_alloc_range_nid(size, alignment,
-					highmem_start, limit, nid, true);
-			limit = highmem_start;
-		}
-
-		if (!addr) {
-			addr = memblock_alloc_range_nid(size, alignment, base,
-					limit, nid, true);
-			if (!addr)
-				return -ENOMEM;
-		}
+		base = cma_alloc_mem(base, size, alignment, limit, nid);
+		if (!base)
+			return -ENOMEM;
 
 		/*
 		 * kmemleak scans/reads tracked objects for pointers to other
 		 * objects but this address isn't mapped and accessible
 		 */
-		kmemleak_ignore_phys(addr);
-		base = addr;
+		kmemleak_ignore_phys(base);
 	}
 
 	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
-- 
2.47.2




* Re: [PATCH v2 3/3] cma: move memory allocation to a helper function
  2025-07-03 18:47 ` [PATCH v2 3/3] cma: move memory allocation to " Mike Rapoport
@ 2025-07-03 18:52   ` David Hildenbrand
  2025-07-03 18:54   ` Oscar Salvador
  1 sibling, 0 replies; 6+ messages in thread
From: David Hildenbrand @ 2025-07-03 18:52 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Alexandre Ghiti, Oscar Salvador, Pratyush Yadav, linux-kernel,
	linux-mm

On 03.07.25 20:47, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
> 
> __cma_declare_contiguous_nid() tries to allocate memory in several ways:
> * on systems with a 64-bit physical address space and enough memory, it
>   first attempts to allocate memory just above 4GiB
> * if that fails, on systems with HIGHMEM the next attempt is from high
>   memory
> * and at last, if none of the previous attempts succeeded, or none was
>   even tried because of an incompatible configuration, the memory is
>   allocated anywhere within the specified limits.
> 
> Move all the allocation logic to a helper function to make these steps more
> obvious.
> 
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---

LGTM, thanks!

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 3/3] cma: move memory allocation to a helper function
  2025-07-03 18:47 ` [PATCH v2 3/3] cma: move memory allocation to " Mike Rapoport
  2025-07-03 18:52   ` David Hildenbrand
@ 2025-07-03 18:54   ` Oscar Salvador
  1 sibling, 0 replies; 6+ messages in thread
From: Oscar Salvador @ 2025-07-03 18:54 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexandre Ghiti, David Hildenbrand, Pratyush Yadav,
	linux-kernel, linux-mm

On Thu, Jul 03, 2025 at 09:47:11PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
> 
> __cma_declare_contiguous_nid() tries to allocate memory in several ways:
> * on systems with a 64-bit physical address space and enough memory, it
>   first attempts to allocate memory just above 4GiB
> * if that fails, on systems with HIGHMEM the next attempt is from high
>   memory
> * and at last, if none of the previous attempts succeeded, or none was
>   even tried because of an incompatible configuration, the memory is
>   allocated anywhere within the specified limits.
> 
> Move all the allocation logic to a helper function to make these steps more
> obvious.
> 
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>

Acked-by: Oscar Salvador <osalvador@suse.de>

Thanks Mike ;-)!

 

-- 
Oscar Salvador
SUSE Labs



