* [PATCH 0/3] cma: factor out HIGHMEM logic from __cma_declare_contiguous_nid
@ 2025-07-02 17:36 Mike Rapoport
2025-07-02 17:36 ` [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Mike Rapoport @ 2025-07-02 17:36 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
Pratyush Yadav, linux-kernel, linux-mm
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Hi,
We've discussed earlier [1] that HIGHMEM-related logic is spread all over
__cma_declare_contiguous_nid().
These patches decouple it into helper functions.
[1] https://lore.kernel.org/all/aCw9mpmhx9SrL8Oy@localhost.localdomain
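For orientation, after all three patches the reservation block of
__cma_declare_contiguous_nid() reads roughly like this (reassembled from
the diffs below, with the bottom-up attempt above SZ_4G elided):

	/* Reserve memory */
	if (fixed) {
		int err = cma_fixed_reserve(base, size);

		if (err)
			return err;
	} else {
		phys_addr_t addr = 0;

		/* ... bottom-up attempt above SZ_4G elided ... */

		/* On systems with HIGHMEM try allocating from there first */
		if (!addr)
			addr = cma_alloc_highmem(base, size, alignment,
						 &limit, nid);
		if (!addr) {
			addr = memblock_alloc_range_nid(size, alignment, base,
							limit, nid, true);
			if (!addr)
				return -ENOMEM;
		}

		kmemleak_ignore_phys(addr);
		base = addr;
	}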
Mike Rapoport (Microsoft) (3):
cma: move __cma_declare_contiguous_nid() before its usage
cma: split reservation of fixed area into a helper function
cma: move allocation from HIGHMEM to a helper function
mm/cma.c | 315 +++++++++++++++++++++++++++++--------------------------
1 file changed, 165 insertions(+), 150 deletions(-)
base-commit: 86731a2a651e58953fc949573895f2fa6d456841
--
2.47.2
* [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage
2025-07-02 17:36 [PATCH 0/3] cma: factor out HIGHMEM logic from __cma_declare_contiguous_nid Mike Rapoport
@ 2025-07-02 17:36 ` Mike Rapoport
2025-07-03 9:21 ` Oscar Salvador
2025-07-03 11:11 ` David Hildenbrand
2025-07-02 17:36 ` [PATCH 2/3] cma: split reservation of fixed area into a helper function Mike Rapoport
2025-07-02 17:36 ` [PATCH 3/3] cma: move allocation from HIGHMEM to " Mike Rapoport
2 siblings, 2 replies; 11+ messages in thread
From: Mike Rapoport @ 2025-07-02 17:36 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
Pratyush Yadav, linux-kernel, linux-mm
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
and kill static declaration
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
mm/cma.c | 294 +++++++++++++++++++++++++++----------------------------
1 file changed, 144 insertions(+), 150 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index 397567883a10..9bf95f8f0f33 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -35,12 +35,6 @@
struct cma cma_areas[MAX_CMA_AREAS];
unsigned int cma_area_count;
-static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
- phys_addr_t size, phys_addr_t limit,
- phys_addr_t alignment, unsigned int order_per_bit,
- bool fixed, const char *name, struct cma **res_cma,
- int nid);
-
phys_addr_t cma_get_base(const struct cma *cma)
{
WARN_ON_ONCE(cma->nranges != 1);
@@ -358,6 +352,150 @@ static void __init list_insert_sorted(
}
}
+static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
+ phys_addr_t size, phys_addr_t limit,
+ phys_addr_t alignment, unsigned int order_per_bit,
+ bool fixed, const char *name, struct cma **res_cma,
+ int nid)
+{
+ phys_addr_t memblock_end = memblock_end_of_DRAM();
+ phys_addr_t highmem_start, base = *basep;
+ int ret;
+
+ /*
+ * We can't use __pa(high_memory) directly, since high_memory
+ * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
+ * complain. Find the boundary by adding one to the last valid
+ * address.
+ */
+ if (IS_ENABLED(CONFIG_HIGHMEM))
+ highmem_start = __pa(high_memory - 1) + 1;
+ else
+ highmem_start = memblock_end_of_DRAM();
+ pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
+ __func__, &size, &base, &limit, &alignment);
+
+ if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+ pr_err("Not enough slots for CMA reserved regions!\n");
+ return -ENOSPC;
+ }
+
+ if (!size)
+ return -EINVAL;
+
+ if (alignment && !is_power_of_2(alignment))
+ return -EINVAL;
+
+ if (!IS_ENABLED(CONFIG_NUMA))
+ nid = NUMA_NO_NODE;
+
+ /* Sanitise input arguments. */
+ alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
+ if (fixed && base & (alignment - 1)) {
+ pr_err("Region at %pa must be aligned to %pa bytes\n",
+ &base, &alignment);
+ return -EINVAL;
+ }
+ base = ALIGN(base, alignment);
+ size = ALIGN(size, alignment);
+ limit &= ~(alignment - 1);
+
+ if (!base)
+ fixed = false;
+
+ /* size should be aligned with order_per_bit */
+ if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
+ return -EINVAL;
+
+ /*
+ * If allocating at a fixed base the request region must not cross the
+ * low/high memory boundary.
+ */
+ if (fixed && base < highmem_start && base + size > highmem_start) {
+ pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+ &base, &highmem_start);
+ return -EINVAL;
+ }
+
+ /*
+ * If the limit is unspecified or above the memblock end, its effective
+ * value will be the memblock end. Set it explicitly to simplify further
+ * checks.
+ */
+ if (limit == 0 || limit > memblock_end)
+ limit = memblock_end;
+
+ if (base + size > limit) {
+ pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
+ &size, &base, &limit);
+ return -EINVAL;
+ }
+
+ /* Reserve memory */
+ if (fixed) {
+ if (memblock_is_region_reserved(base, size) ||
+ memblock_reserve(base, size) < 0) {
+ return -EBUSY;
+ }
+ } else {
+ phys_addr_t addr = 0;
+
+ /*
+ * If there is enough memory, try a bottom-up allocation first.
+ * It will place the new cma area close to the start of the node
+ * and guarantee that the compaction is moving pages out of the
+ * cma area and not into it.
+ * Avoid using first 4GB to not interfere with constrained zones
+ * like DMA/DMA32.
+ */
+#ifdef CONFIG_PHYS_ADDR_T_64BIT
+ if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
+ memblock_set_bottom_up(true);
+ addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
+ limit, nid, true);
+ memblock_set_bottom_up(false);
+ }
+#endif
+
+ /*
+ * All pages in the reserved area must come from the same zone.
+ * If the requested region crosses the low/high memory boundary,
+ * try allocating from high memory first and fall back to low
+ * memory in case of failure.
+ */
+ if (!addr && base < highmem_start && limit > highmem_start) {
+ addr = memblock_alloc_range_nid(size, alignment,
+ highmem_start, limit, nid, true);
+ limit = highmem_start;
+ }
+
+ if (!addr) {
+ addr = memblock_alloc_range_nid(size, alignment, base,
+ limit, nid, true);
+ if (!addr)
+ return -ENOMEM;
+ }
+
+ /*
+ * kmemleak scans/reads tracked objects for pointers to other
+ * objects but this address isn't mapped and accessible
+ */
+ kmemleak_ignore_phys(addr);
+ base = addr;
+ }
+
+ ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
+ if (ret) {
+ memblock_phys_free(base, size);
+ return ret;
+ }
+
+ (*res_cma)->nid = nid;
+ *basep = base;
+
+ return 0;
+}
+
/*
* Create CMA areas with a total size of @total_size. A normal allocation
* for one area is tried first. If that fails, the biggest memblock
@@ -593,150 +731,6 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
return ret;
}
-static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
- phys_addr_t size, phys_addr_t limit,
- phys_addr_t alignment, unsigned int order_per_bit,
- bool fixed, const char *name, struct cma **res_cma,
- int nid)
-{
- phys_addr_t memblock_end = memblock_end_of_DRAM();
- phys_addr_t highmem_start, base = *basep;
- int ret;
-
- /*
- * We can't use __pa(high_memory) directly, since high_memory
- * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
- * complain. Find the boundary by adding one to the last valid
- * address.
- */
- if (IS_ENABLED(CONFIG_HIGHMEM))
- highmem_start = __pa(high_memory - 1) + 1;
- else
- highmem_start = memblock_end_of_DRAM();
- pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
- __func__, &size, &base, &limit, &alignment);
-
- if (cma_area_count == ARRAY_SIZE(cma_areas)) {
- pr_err("Not enough slots for CMA reserved regions!\n");
- return -ENOSPC;
- }
-
- if (!size)
- return -EINVAL;
-
- if (alignment && !is_power_of_2(alignment))
- return -EINVAL;
-
- if (!IS_ENABLED(CONFIG_NUMA))
- nid = NUMA_NO_NODE;
-
- /* Sanitise input arguments. */
- alignment = max_t(phys_addr_t, alignment, CMA_MIN_ALIGNMENT_BYTES);
- if (fixed && base & (alignment - 1)) {
- pr_err("Region at %pa must be aligned to %pa bytes\n",
- &base, &alignment);
- return -EINVAL;
- }
- base = ALIGN(base, alignment);
- size = ALIGN(size, alignment);
- limit &= ~(alignment - 1);
-
- if (!base)
- fixed = false;
-
- /* size should be aligned with order_per_bit */
- if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
- return -EINVAL;
-
- /*
- * If allocating at a fixed base the request region must not cross the
- * low/high memory boundary.
- */
- if (fixed && base < highmem_start && base + size > highmem_start) {
- pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
- &base, &highmem_start);
- return -EINVAL;
- }
-
- /*
- * If the limit is unspecified or above the memblock end, its effective
- * value will be the memblock end. Set it explicitly to simplify further
- * checks.
- */
- if (limit == 0 || limit > memblock_end)
- limit = memblock_end;
-
- if (base + size > limit) {
- pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
- &size, &base, &limit);
- return -EINVAL;
- }
-
- /* Reserve memory */
- if (fixed) {
- if (memblock_is_region_reserved(base, size) ||
- memblock_reserve(base, size) < 0) {
- return -EBUSY;
- }
- } else {
- phys_addr_t addr = 0;
-
- /*
- * If there is enough memory, try a bottom-up allocation first.
- * It will place the new cma area close to the start of the node
- * and guarantee that the compaction is moving pages out of the
- * cma area and not into it.
- * Avoid using first 4GB to not interfere with constrained zones
- * like DMA/DMA32.
- */
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
- if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
- memblock_set_bottom_up(true);
- addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
- limit, nid, true);
- memblock_set_bottom_up(false);
- }
-#endif
-
- /*
- * All pages in the reserved area must come from the same zone.
- * If the requested region crosses the low/high memory boundary,
- * try allocating from high memory first and fall back to low
- * memory in case of failure.
- */
- if (!addr && base < highmem_start && limit > highmem_start) {
- addr = memblock_alloc_range_nid(size, alignment,
- highmem_start, limit, nid, true);
- limit = highmem_start;
- }
-
- if (!addr) {
- addr = memblock_alloc_range_nid(size, alignment, base,
- limit, nid, true);
- if (!addr)
- return -ENOMEM;
- }
-
- /*
- * kmemleak scans/reads tracked objects for pointers to other
- * objects but this address isn't mapped and accessible
- */
- kmemleak_ignore_phys(addr);
- base = addr;
- }
-
- ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
- if (ret) {
- memblock_phys_free(base, size);
- return ret;
- }
-
- (*res_cma)->nid = nid;
- *basep = base;
-
- return 0;
-}
-
static void cma_debug_show_areas(struct cma *cma)
{
unsigned long next_zero_bit, next_set_bit, nr_zero;
--
2.47.2
* [PATCH 2/3] cma: split reservation of fixed area into a helper function
2025-07-02 17:36 [PATCH 0/3] cma: factor out HIGHMEM logic from __cma_declare_contiguous_nid Mike Rapoport
2025-07-02 17:36 ` [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
@ 2025-07-02 17:36 ` Mike Rapoport
2025-07-03 9:34 ` Oscar Salvador
2025-07-03 11:12 ` David Hildenbrand
2025-07-02 17:36 ` [PATCH 3/3] cma: move allocation from HIGHMEM to " Mike Rapoport
2 siblings, 2 replies; 11+ messages in thread
From: Mike Rapoport @ 2025-07-02 17:36 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
Pratyush Yadav, linux-kernel, linux-mm
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move the check that verifies that reservation of a fixed area does not
cross the HIGHMEM boundary, along with the actual memblock_reserve() call,
into a helper function.
This makes code more readable and decouples logic related to
CONFIG_HIGHMEM from the core functionality of
__cma_declare_contiguous_nid().
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
mm/cma.c | 41 ++++++++++++++++++++++++++++-------------
1 file changed, 28 insertions(+), 13 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index 9bf95f8f0f33..1df8ff312d99 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -352,6 +352,30 @@ static void __init list_insert_sorted(
}
}
+static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
+{
+ if (IS_ENABLED(CONFIG_HIGHMEM)) {
+ phys_addr_t highmem_start = __pa(high_memory - 1) + 1;
+
+ /*
+ * If allocating at a fixed base the request region must not
+ * cross the low/high memory boundary.
+ */
+ if (base < highmem_start && base + size > highmem_start) {
+ pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+ &base, &highmem_start);
+ return -EINVAL;
+ }
+ }
+
+ if (memblock_is_region_reserved(base, size) ||
+ memblock_reserve(base, size) < 0) {
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
phys_addr_t size, phys_addr_t limit,
phys_addr_t alignment, unsigned int order_per_bit,
@@ -407,15 +431,6 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
return -EINVAL;
- /*
- * If allocating at a fixed base the request region must not cross the
- * low/high memory boundary.
- */
- if (fixed && base < highmem_start && base + size > highmem_start) {
- pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
- &base, &highmem_start);
- return -EINVAL;
- }
/*
* If the limit is unspecified or above the memblock end, its effective
@@ -433,10 +448,10 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
/* Reserve memory */
if (fixed) {
- if (memblock_is_region_reserved(base, size) ||
- memblock_reserve(base, size) < 0) {
- return -EBUSY;
- }
+ int err = cma_fixed_reserve(base, size);
+
+ if (err)
+ return err;
} else {
phys_addr_t addr = 0;
--
2.47.2
* [PATCH 3/3] cma: move allocation from HIGHMEM to a helper function
2025-07-02 17:36 [PATCH 0/3] cma: factor out HIGHMEM logic from __cma_declare_contiguous_nid Mike Rapoport
2025-07-02 17:36 ` [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
2025-07-02 17:36 ` [PATCH 2/3] cma: split reservation of fixed area into a helper function Mike Rapoport
@ 2025-07-02 17:36 ` Mike Rapoport
2025-07-03 9:53 ` Oscar Salvador
2 siblings, 1 reply; 11+ messages in thread
From: Mike Rapoport @ 2025-07-02 17:36 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexandre Ghiti, David Hildenbrand, Mike Rapoport, Oscar Salvador,
Pratyush Yadav, linux-kernel, linux-mm
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
When CONFIG_HIGHMEM is enabled, __cma_declare_contiguous_nid() first
tries to allocate the area from HIGHMEM and if that fails it falls back
to allocation from low memory.
Split allocation from HIGHMEM into a helper function to further decouple
logic related to CONFIG_HIGHMEM.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
mm/cma.c | 52 +++++++++++++++++++++++++++++-----------------------
1 file changed, 29 insertions(+), 23 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index 1df8ff312d99..0a24c46f3296 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -376,6 +376,30 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
return 0;
}
+static phys_addr_t __init cma_alloc_highmem(phys_addr_t base, phys_addr_t size,
+ phys_addr_t align, phys_addr_t *limit, int nid)
+{
+ phys_addr_t addr = 0;
+
+ if (IS_ENABLED(CONFIG_HIGHMEM)) {
+ phys_addr_t highmem = __pa(high_memory - 1) + 1;
+
+ /*
+ * All pages in the reserved area must come from the same zone.
+ * If the requested region crosses the low/high memory boundary,
+ * try allocating from high memory first and fall back to low
+ * memory in case of failure.
+ */
+ if (base < highmem && *limit > highmem) {
+ addr = memblock_alloc_range_nid(size, align, highmem,
+ *limit, nid, true);
+ *limit = highmem;
+ }
+ }
+
+ return addr;
+}
+
static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
phys_addr_t size, phys_addr_t limit,
phys_addr_t alignment, unsigned int order_per_bit,
@@ -383,19 +407,9 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
int nid)
{
phys_addr_t memblock_end = memblock_end_of_DRAM();
- phys_addr_t highmem_start, base = *basep;
+ phys_addr_t base = *basep;
int ret;
- /*
- * We can't use __pa(high_memory) directly, since high_memory
- * isn't a valid direct map VA, and DEBUG_VIRTUAL will (validly)
- * complain. Find the boundary by adding one to the last valid
- * address.
- */
- if (IS_ENABLED(CONFIG_HIGHMEM))
- highmem_start = __pa(high_memory - 1) + 1;
- else
- highmem_start = memblock_end_of_DRAM();
pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
__func__, &size, &base, &limit, &alignment);
@@ -472,18 +486,10 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
}
#endif
- /*
- * All pages in the reserved area must come from the same zone.
- * If the requested region crosses the low/high memory boundary,
- * try allocating from high memory first and fall back to low
- * memory in case of failure.
- */
- if (!addr && base < highmem_start && limit > highmem_start) {
- addr = memblock_alloc_range_nid(size, alignment,
- highmem_start, limit, nid, true);
- limit = highmem_start;
- }
-
+ /* On systems with HIGHMEM try allocating from there first */
+ if (!addr)
+ addr = cma_alloc_highmem(base, size, alignment, &limit,
+ nid);
if (!addr) {
addr = memblock_alloc_range_nid(size, alignment, base,
limit, nid, true);
--
2.47.2
* Re: [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage
2025-07-02 17:36 ` [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
@ 2025-07-03 9:21 ` Oscar Salvador
2025-07-03 11:11 ` David Hildenbrand
1 sibling, 0 replies; 11+ messages in thread
From: Oscar Salvador @ 2025-07-03 9:21 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexandre Ghiti, David Hildenbrand, Pratyush Yadav,
linux-kernel, linux-mm
On Wed, Jul 02, 2025 at 08:36:03PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> and kill static declaration
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Thanks Mike ;-)!
--
Oscar Salvador
SUSE Labs
* Re: [PATCH 2/3] cma: split reservation of fixed area into a helper function
2025-07-02 17:36 ` [PATCH 2/3] cma: split reservation of fixed area into a helper function Mike Rapoport
@ 2025-07-03 9:34 ` Oscar Salvador
2025-07-03 11:12 ` David Hildenbrand
1 sibling, 0 replies; 11+ messages in thread
From: Oscar Salvador @ 2025-07-03 9:34 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexandre Ghiti, David Hildenbrand, Pratyush Yadav,
linux-kernel, linux-mm
On Wed, Jul 02, 2025 at 08:36:04PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> Move the check that verifies that reservation of a fixed area does not
> cross the HIGHMEM boundary, along with the actual memblock_reserve() call,
> into a helper function.
>
> This makes code more readable and decouples logic related to
> CONFIG_HIGHMEM from the core functionality of
> __cma_declare_contiguous_nid().
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Looks good to me, nit below:
Acked-by: Oscar Salvador <osalvador@suse.de>
> @@ -433,10 +448,10 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
>
> /* Reserve memory */
> if (fixed) {
> - if (memblock_is_region_reserved(base, size) ||
> - memblock_reserve(base, size) < 0) {
> - return -EBUSY;
> - }
> + int err = cma_fixed_reserve(base, size);
There's no need for 'err', you can use the already declared 'ret'.
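IOW, something like (completely untested):

	/* Reserve memory */
	if (fixed) {
		ret = cma_fixed_reserve(base, size);
		if (ret)
			return ret;
	} else {
		...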
--
Oscar Salvador
SUSE Labs
* Re: [PATCH 3/3] cma: move allocation from HIGHMEM to a helper function
2025-07-02 17:36 ` [PATCH 3/3] cma: move allocation from HIGHMEM to " Mike Rapoport
@ 2025-07-03 9:53 ` Oscar Salvador
2025-07-03 11:14 ` David Hildenbrand
2025-07-03 17:27 ` Mike Rapoport
0 siblings, 2 replies; 11+ messages in thread
From: Oscar Salvador @ 2025-07-03 9:53 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexandre Ghiti, David Hildenbrand, Pratyush Yadav,
linux-kernel, linux-mm
On Wed, Jul 02, 2025 at 08:36:05PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> When CONFIG_HIGHMEM is enabled, __cma_declare_contiguous_nid() first
> tries to allocate the area from HIGHMEM and if that fails it falls back
> to allocation from low memory.
>
> Split allocation from HIGHMEM into a helper function to further decouple
> logic related to CONFIG_HIGHMEM.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> mm/cma.c | 52 +++++++++++++++++++++++++++++-----------------------
> 1 file changed, 29 insertions(+), 23 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 1df8ff312d99..0a24c46f3296 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -376,6 +376,30 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
> return 0;
> }
>
> +static phys_addr_t __init cma_alloc_highmem(phys_addr_t base, phys_addr_t size,
> + phys_addr_t align, phys_addr_t *limit, int nid)
> +{
> + phys_addr_t addr = 0;
> +
> + if (IS_ENABLED(CONFIG_HIGHMEM)) {
> + phys_addr_t highmem = __pa(high_memory - 1) + 1;
> +
> + /*
> + * All pages in the reserved area must come from the same zone.
> + * If the requested region crosses the low/high memory boundary,
> + * try allocating from high memory first and fall back to low
> + * memory in case of failure.
> + */
> + if (base < highmem && *limit > highmem) {
> + addr = memblock_alloc_range_nid(size, align, highmem,
> + *limit, nid, true);
> + *limit = highmem;
> + }
> + }
Not a big deal, but maybe better to do it in one function? Maybe even move
the CONFIG_PHYS_ADDR_T_64BIT block in there as well? So memblock_alloc_range_nid()
calls would be contained in one place and the X86_64/HIGHMEM comments as
well.
Just a thought.
diff --git a/mm/cma.c b/mm/cma.c
index dd7643fc01db..532b56e6971a 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -377,11 +377,12 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
return 0;
}
-static phys_addr_t __init cma_alloc_highmem(phys_addr_t base, phys_addr_t size,
- phys_addr_t align, phys_addr_t *limit, int nid)
+static phys_addr_t __init cma_alloc_mem(phys_addr_t base, phys_addr_t size,
+ phys_addr_t align, phys_addr_t limit, int nid)
{
phys_addr_t addr = 0;
+ /* On systems with HIGHMEM try allocating from there first */
if (IS_ENABLED(CONFIG_HIGHMEM)) {
phys_addr_t highmem = __pa(high_memory - 1) + 1;
@@ -393,11 +394,15 @@ static phys_addr_t __init cma_alloc_highmem(phys_addr_t base, phys_addr_t size,
*/
- if (base < highmem && *limit > highmem) {
+ if (base < highmem && limit > highmem) {
addr = memblock_alloc_range_nid(size, align, highmem,
- *limit, nid, true);
+ limit, nid, true);
- *limit = highmem;
+ limit = highmem;
}
}
+ if (!addr)
+ addr = memblock_alloc_range_nid(size, align, base,
+ limit, nid, true);
+
return addr;
}
@@ -487,16 +492,8 @@ static int __init __cma_declare_contiguous_nid(phys_addr_t *basep,
}
#endif
- /* On systems with HIGHMEM try allocating from there first */
if (!addr)
- addr = cma_alloc_highmem(base, size, alignment, &limit,
- nid);
- if (!addr) {
- addr = memblock_alloc_range_nid(size, alignment, base,
- limit, nid, true);
- if (!addr)
- return -ENOMEM;
- }
+ addr = cma_alloc_mem(base, size, alignment, limit, nid);
/*
* kmemleak scans/reads tracked objects for pointers to other
--
Oscar Salvador
SUSE Labs
* Re: [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage
2025-07-02 17:36 ` [PATCH 1/3] cma: move __cma_declare_contiguous_nid() before its usage Mike Rapoport
2025-07-03 9:21 ` Oscar Salvador
@ 2025-07-03 11:11 ` David Hildenbrand
1 sibling, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2025-07-03 11:11 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: Alexandre Ghiti, Oscar Salvador, Pratyush Yadav, linux-kernel,
linux-mm
On 02.07.25 19:36, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> and kill static declaration
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 2/3] cma: split reservation of fixed area into a helper function
2025-07-02 17:36 ` [PATCH 2/3] cma: split reservation of fixed area into a helper function Mike Rapoport
2025-07-03 9:34 ` Oscar Salvador
@ 2025-07-03 11:12 ` David Hildenbrand
1 sibling, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2025-07-03 11:12 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
Cc: Alexandre Ghiti, Oscar Salvador, Pratyush Yadav, linux-kernel,
linux-mm
On 02.07.25 19:36, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> Move the check that verifies that reservation of a fixed area does not
> cross the HIGHMEM boundary, along with the actual memblock_reserve() call,
> into a helper function.
>
> This makes code more readable and decouples logic related to
> CONFIG_HIGHMEM from the core functionality of
> __cma_declare_contiguous_nid().
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 3/3] cma: move allocation from HIGHMEM to a helper function
2025-07-03 9:53 ` Oscar Salvador
@ 2025-07-03 11:14 ` David Hildenbrand
2025-07-03 17:27 ` Mike Rapoport
1 sibling, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2025-07-03 11:14 UTC (permalink / raw)
To: Oscar Salvador, Mike Rapoport
Cc: Andrew Morton, Alexandre Ghiti, Pratyush Yadav, linux-kernel,
linux-mm
On 03.07.25 11:53, Oscar Salvador wrote:
> On Wed, Jul 02, 2025 at 08:36:05PM +0300, Mike Rapoport wrote:
>> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>>
>> When CONFIG_HIGHMEM is enabled, __cma_declare_contiguous_nid() first
>> tries to allocate the area from HIGHMEM and if that fails it falls back
>> to allocation from low memory.
>>
>> Split allocation from HIGHMEM into a helper function to further decouple
>> logic related to CONFIG_HIGHMEM.
>>
>> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>> ---
>> mm/cma.c | 52 +++++++++++++++++++++++++++++-----------------------
>> 1 file changed, 29 insertions(+), 23 deletions(-)
>>
>> diff --git a/mm/cma.c b/mm/cma.c
>> index 1df8ff312d99..0a24c46f3296 100644
>> --- a/mm/cma.c
>> +++ b/mm/cma.c
>> @@ -376,6 +376,30 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
>> return 0;
>> }
>>
>> +static phys_addr_t __init cma_alloc_highmem(phys_addr_t base, phys_addr_t size,
>> + phys_addr_t align, phys_addr_t *limit, int nid)
>> +{
>> + phys_addr_t addr = 0;
>> +
>> + if (IS_ENABLED(CONFIG_HIGHMEM)) {
>> + phys_addr_t highmem = __pa(high_memory - 1) + 1;
>> +
>> + /*
>> + * All pages in the reserved area must come from the same zone.
>> + * If the requested region crosses the low/high memory boundary,
>> + * try allocating from high memory first and fall back to low
>> + * memory in case of failure.
>> + */
>> + if (base < highmem && *limit > highmem) {
>> + addr = memblock_alloc_range_nid(size, align, highmem,
>> + *limit, nid, true);
>> + *limit = highmem;
>> + }
>> + }
>
> Not a big deal, but maybe better to do it in one function?
Yes, same thought here.
--
Cheers,
David / dhildenb
* Re: [PATCH 3/3] cma: move allocation from HIGHMEM to a helper function
2025-07-03 9:53 ` Oscar Salvador
2025-07-03 11:14 ` David Hildenbrand
@ 2025-07-03 17:27 ` Mike Rapoport
1 sibling, 0 replies; 11+ messages in thread
From: Mike Rapoport @ 2025-07-03 17:27 UTC (permalink / raw)
To: Oscar Salvador
Cc: Andrew Morton, Alexandre Ghiti, David Hildenbrand, Pratyush Yadav,
linux-kernel, linux-mm
On Thu, Jul 03, 2025 at 11:53:06AM +0200, Oscar Salvador wrote:
> On Wed, Jul 02, 2025 at 08:36:05PM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
> >
> > When CONFIG_HIGHMEM is enabled, __cma_declare_contiguous_nid() first
> > tries to allocate the area from HIGHMEM and if that fails it falls back
> > to allocation from low memory.
> >
> > Split allocation from HIGHMEM into a helper function to further decouple
> > logic related to CONFIG_HIGHMEM.
> >
> > Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> > ---
> > mm/cma.c | 52 +++++++++++++++++++++++++++++-----------------------
> > 1 file changed, 29 insertions(+), 23 deletions(-)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 1df8ff312d99..0a24c46f3296 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -376,6 +376,30 @@ static int __init cma_fixed_reserve(phys_addr_t base, phys_addr_t size)
> > return 0;
> > }
> >
> > +static phys_addr_t __init cma_alloc_highmem(phys_addr_t base, phys_addr_t size,
> > + phys_addr_t align, phys_addr_t *limit, int nid)
> > +{
> > + phys_addr_t addr = 0;
> > +
> > + if (IS_ENABLED(CONFIG_HIGHMEM)) {
> > + phys_addr_t highmem = __pa(high_memory - 1) + 1;
> > +
> > + /*
> > + * All pages in the reserved area must come from the same zone.
> > + * If the requested region crosses the low/high memory boundary,
> > + * try allocating from high memory first and fall back to low
> > + * memory in case of failure.
> > + */
> > + if (base < highmem && *limit > highmem) {
> > + addr = memblock_alloc_range_nid(size, align, highmem,
> > + *limit, nid, true);
> > + *limit = highmem;
> > + }
> > + }
>
> Not a big deal, but maybe better to do it in one function? Maybe even move
> the CONFIG_PHYS_ADDR_T_64BIT block in there as well? So memblock_alloc_range_nid()
> calls would be contained in one place and the X86_64/HIGHMEM comments as
> well.
> Just a thought.
Yeah, this will be neater, thanks!
Will send v2 shortly.
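Something along these lines, also folding the PHYS_ADDR_T_64BIT block in
as you suggest (an untested sketch, the actual v2 may end up different):

static phys_addr_t __init cma_alloc_mem(phys_addr_t base, phys_addr_t size,
			phys_addr_t align, phys_addr_t limit, int nid)
{
	phys_addr_t addr = 0;

	/*
	 * If there is enough memory, try a bottom-up allocation first.
	 * Avoid the first 4GB to not interfere with constrained zones
	 * like DMA/DMA32.
	 */
#ifdef CONFIG_PHYS_ADDR_T_64BIT
	if (!memblock_bottom_up() && memblock_end_of_DRAM() >= SZ_4G + size) {
		memblock_set_bottom_up(true);
		addr = memblock_alloc_range_nid(size, align, SZ_4G,
						limit, nid, true);
		memblock_set_bottom_up(false);
	}
#endif

	/* On systems with HIGHMEM try allocating from there first */
	if (!addr && IS_ENABLED(CONFIG_HIGHMEM)) {
		phys_addr_t highmem = __pa(high_memory - 1) + 1;

		/*
		 * All pages in the reserved area must come from the same
		 * zone, so clamp the fallback below to low memory once the
		 * HIGHMEM attempt was made.
		 */
		if (base < highmem && limit > highmem) {
			addr = memblock_alloc_range_nid(size, align, highmem,
							limit, nid, true);
			limit = highmem;
		}
	}

	if (!addr)
		addr = memblock_alloc_range_nid(size, align, base,
						limit, nid, true);

	return addr;
}

with the caller keeping the allocation failure check:

		addr = cma_alloc_mem(base, size, alignment, limit, nid);
		if (!addr)
			return -ENOMEM;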
> --
> Oscar Salvador
> SUSE Labs
--
Sincerely yours,
Mike.