linux-mm.kvack.org archive mirror
* [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range()
@ 2024-04-14  0:45 Wei Yang
  2024-04-14  0:45 ` [PATCH 2/6] memblock tests: add the 129th memory block at all possible position Wei Yang
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-14  0:45 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang, Jinyu Tang, Peng Zhang

Currently memblock_add_range() performs the insertion with a two-round
iteration.

The first round calculates the number of new regions required, and the
second round does the actual insertion. In between, if the current max
can't accommodate the new regions, the array is expanded.

The worst case is:

1. cnt == max
2. new range overlaps all existing regions

This means the final cnt is at most (2 * max + 1). Since we always
double the array size, at most two doublings are needed even in this
worst case, which is fairly rare. For all other cases, a single
doubling is enough.

Let's double the array immediately when there is no room for a new
region. This simplifies the code a little.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
CC: Jinyu Tang <tjytimi@163.com>
CC: Peng Zhang <zhangpeng.00@bytedance.com>
---
 mm/memblock.c | 74 +++++++++++++++------------------------------------
 1 file changed, 22 insertions(+), 52 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 98d25689cf10..b46109300927 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -585,10 +585,9 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 				phys_addr_t base, phys_addr_t size,
 				int nid, enum memblock_flags flags)
 {
-	bool insert = false;
 	phys_addr_t obase = base;
 	phys_addr_t end = base + memblock_cap_size(base, &size);
-	int idx, nr_new, start_rgn = -1, end_rgn;
+	int idx, start_rgn = -1, end_rgn;
 	struct memblock_region *rgn;
 
 	if (!size)
@@ -606,25 +605,6 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 		return 0;
 	}
 
-	/*
-	 * The worst case is when new range overlaps all existing regions,
-	 * then we'll need type->cnt + 1 empty regions in @type. So if
-	 * type->cnt * 2 + 1 is less than or equal to type->max, we know
-	 * that there is enough empty regions in @type, and we can insert
-	 * regions directly.
-	 */
-	if (type->cnt * 2 + 1 <= type->max)
-		insert = true;
-
-repeat:
-	/*
-	 * The following is executed twice.  Once with %false @insert and
-	 * then with %true.  The first counts the number of regions needed
-	 * to accommodate the new area.  The second actually inserts them.
-	 */
-	base = obase;
-	nr_new = 0;
-
 	for_each_memblock_type(idx, type, rgn) {
 		phys_addr_t rbase = rgn->base;
 		phys_addr_t rend = rbase + rgn->size;
@@ -642,15 +622,17 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 			WARN_ON(nid != memblock_get_region_node(rgn));
 #endif
 			WARN_ON(flags != rgn->flags);
-			nr_new++;
-			if (insert) {
-				if (start_rgn == -1)
-					start_rgn = idx;
-				end_rgn = idx + 1;
-				memblock_insert_region(type, idx++, base,
-						       rbase - base, nid,
-						       flags);
+			if (type->cnt >= type->max) {
+				if (memblock_double_array(type, obase, size) < 0)
+					return -ENOMEM;
 			}
+
+			if (start_rgn == -1)
+				start_rgn = idx;
+			end_rgn = idx + 1;
+			memblock_insert_region(type, idx++, base,
+					       rbase - base, nid,
+					       flags);
 		}
 		/* area below @rend is dealt with, forget about it */
 		base = min(rend, end);
@@ -658,33 +640,21 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 
 	/* insert the remaining portion */
 	if (base < end) {
-		nr_new++;
-		if (insert) {
-			if (start_rgn == -1)
-				start_rgn = idx;
-			end_rgn = idx + 1;
-			memblock_insert_region(type, idx, base, end - base,
-					       nid, flags);
+		if (type->cnt >= type->max) {
+			if (memblock_double_array(type, obase, size) < 0)
+				return -ENOMEM;
 		}
-	}
 
-	if (!nr_new)
-		return 0;
+		if (start_rgn == -1)
+			start_rgn = idx;
+		end_rgn = idx + 1;
+		memblock_insert_region(type, idx, base, end - base,
+				       nid, flags);
+	}
 
-	/*
-	 * If this was the first round, resize array and repeat for actual
-	 * insertions; otherwise, merge and return.
-	 */
-	if (!insert) {
-		while (type->cnt + nr_new > type->max)
-			if (memblock_double_array(type, obase, size) < 0)
-				return -ENOMEM;
-		insert = true;
-		goto repeat;
-	} else {
+	if (start_rgn != -1)
 		memblock_merge_regions(type, start_rgn, end_rgn);
-		return 0;
-	}
+	return 0;
 }
 
 /**
-- 
2.34.1



^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
@ 2024-04-14  0:45 ` Wei Yang
  2024-04-15 15:19   ` Mike Rapoport
  2024-04-14  0:45 ` [PATCH 3/6] mm/memblock: fix comment for memblock_isolate_range() Wei Yang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Wei Yang @ 2024-04-14  0:45 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

After the previous change, the array may be doubled based on the
position of the new range.

Let's make sure adding the 129th memory block doubles the array
correctly at every possible position.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 132 +++++++++++++----------
 1 file changed, 73 insertions(+), 59 deletions(-)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index f317fe691fc4..f1569ebb9872 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -431,84 +431,98 @@ static int memblock_add_near_max_check(void)
  */
 static int memblock_add_many_check(void)
 {
-	int i;
+	int i, skip;
 	void *orig_region;
 	struct region r = {
 		.base = SZ_16K,
 		.size = SZ_16K,
 	};
 	phys_addr_t new_memory_regions_size;
-	phys_addr_t base, size = SZ_64;
+	phys_addr_t base, base_start, size = SZ_64;
 	phys_addr_t gap_size = SZ_64;
 
 	PREFIX_PUSH();
 
-	reset_memblock_regions();
-	memblock_allow_resize();
-
-	dummy_physical_memory_init();
-	/*
-	 * We allocated enough memory by using dummy_physical_memory_init(), and
-	 * split it into small block. First we split a large enough memory block
-	 * as the memory region which will be choosed by memblock_double_array().
-	 */
-	base = PAGE_ALIGN(dummy_physical_memory_base());
-	new_memory_regions_size = PAGE_ALIGN(INIT_MEMBLOCK_REGIONS * 2 *
-					     sizeof(struct memblock_region));
-	memblock_add(base, new_memory_regions_size);
-
-	/* This is the base of small memory block. */
-	base += new_memory_regions_size + gap_size;
-
-	orig_region = memblock.memory.regions;
+	/* Add the 129th memory block for all possible positions*/
+	for (skip = 0; skip < INIT_MEMBLOCK_REGIONS; skip++) {
+		reset_memblock_regions();
+		memblock_allow_resize();
 
-	for (i = 0; i < INIT_MEMBLOCK_REGIONS; i++) {
+		dummy_physical_memory_init();
 		/*
-		 * Add these small block to fulfill the memblock. We keep a
-		 * gap between the nearby memory to avoid being merged.
+		 * We allocated enough memory by using dummy_physical_memory_init(), and
+		 * split it into small block. First we split a large enough memory block
+		 * as the memory region which will be choosed by memblock_double_array().
 		 */
-		memblock_add(base, size);
-		base += size + gap_size;
-
-		ASSERT_EQ(memblock.memory.cnt, i + 2);
+		base = PAGE_ALIGN(dummy_physical_memory_base());
+		new_memory_regions_size = PAGE_ALIGN(INIT_MEMBLOCK_REGIONS * 2 *
+						     sizeof(struct memblock_region));
+		memblock_add(base, new_memory_regions_size);
+
+		/* This is the base of small memory block. */
+		base_start = base += new_memory_regions_size + gap_size;
+		orig_region = memblock.memory.regions;
+
+		for (i = 0; i < INIT_MEMBLOCK_REGIONS; i++, base += size + gap_size) {
+			if (i == skip)
+				continue;
+			/*
+			 * Add these small block to fulfill the memblock. We keep a
+			 * gap between the nearby memory to avoid being merged.
+			 */
+			memblock_add(base, size);
+
+			if (i < skip) {
+				ASSERT_EQ(memblock.memory.cnt, i + 2);
+				ASSERT_EQ(memblock.memory.total_size,
+					  new_memory_regions_size + (i + 1) * size);
+			} else {
+				ASSERT_EQ(memblock.memory.cnt, i + 1);
+				ASSERT_EQ(memblock.memory.total_size,
+					  new_memory_regions_size + i * size);
+			}
+		}
+
+		/* Add the skipped one to trigger memblock_double_array() */
+		memblock_add(base_start + skip * (size + gap_size), size);
+		ASSERT_EQ(memblock.memory.cnt, i + 1);
 		ASSERT_EQ(memblock.memory.total_size, new_memory_regions_size +
-						      (i + 1) * size);
-	}
+							      i * size);
+		/*
+		 * At there, memblock_double_array() has been succeed, check if it
+		 * update the memory.max.
+		 */
+		ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
 
-	/*
-	 * At there, memblock_double_array() has been succeed, check if it
-	 * update the memory.max.
-	 */
-	ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
+		/* memblock_double_array() will reserve the memory it used. Check it. */
+		ASSERT_EQ(memblock.reserved.cnt, 1);
+		ASSERT_EQ(memblock.reserved.total_size, new_memory_regions_size);
 
-	/* memblock_double_array() will reserve the memory it used. Check it. */
-	ASSERT_EQ(memblock.reserved.cnt, 1);
-	ASSERT_EQ(memblock.reserved.total_size, new_memory_regions_size);
-
-	/*
-	 * Now memblock_double_array() works fine. Let's check after the
-	 * double_array(), the memblock_add() still works as normal.
-	 */
-	memblock_add(r.base, r.size);
-	ASSERT_EQ(memblock.memory.regions[0].base, r.base);
-	ASSERT_EQ(memblock.memory.regions[0].size, r.size);
+		/*
+		 * Now memblock_double_array() works fine. Let's check after the
+		 * double_array(), the memblock_add() still works as normal.
+		 */
+		memblock_add(r.base, r.size);
+		ASSERT_EQ(memblock.memory.regions[0].base, r.base);
+		ASSERT_EQ(memblock.memory.regions[0].size, r.size);
 
-	ASSERT_EQ(memblock.memory.cnt, INIT_MEMBLOCK_REGIONS + 2);
-	ASSERT_EQ(memblock.memory.total_size, INIT_MEMBLOCK_REGIONS * size +
-					      new_memory_regions_size +
-					      r.size);
-	ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
+		ASSERT_EQ(memblock.memory.cnt, INIT_MEMBLOCK_REGIONS + 2);
+		ASSERT_EQ(memblock.memory.total_size, INIT_MEMBLOCK_REGIONS * size +
+						      new_memory_regions_size +
+						      r.size);
+		ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
 
-	dummy_physical_memory_cleanup();
+		dummy_physical_memory_cleanup();
 
-	/*
-	 * The current memory.regions is occupying a range of memory that
-	 * allocated from dummy_physical_memory_init(). After free the memory,
-	 * we must not use it. So restore the origin memory region to make sure
-	 * the tests can run as normal and not affected by the double array.
-	 */
-	memblock.memory.regions = orig_region;
-	memblock.memory.cnt = INIT_MEMBLOCK_REGIONS;
+		/*
+		 * The current memory.regions is occupying a range of memory that
+		 * allocated from dummy_physical_memory_init(). After free the memory,
+		 * we must not use it. So restore the origin memory region to make sure
+		 * the tests can run as normal and not affected by the double array.
+		 */
+		memblock.memory.regions = orig_region;
+		memblock.memory.cnt = INIT_MEMBLOCK_REGIONS;
+	}
 
 	test_pass_pop();
 
-- 
2.34.1




* [PATCH 3/6] mm/memblock: fix comment for memblock_isolate_range()
  2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
  2024-04-14  0:45 ` [PATCH 2/6] memblock tests: add the 129th memory block at all possible position Wei Yang
@ 2024-04-14  0:45 ` Wei Yang
  2024-04-14  0:45 ` [PATCH 4/6] mm/memblock: remove consecutive regions at once Wei Yang
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-14  0:45 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

The isolated range is [*@start_rgn, *@end_rgn - 1], while the comment says
"the end region inside the range" is *@end_rgn.

Let's correct it.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index b46109300927..2d7a2431803f 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -742,12 +742,12 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
  * @base: base of range to isolate
  * @size: size of range to isolate
  * @start_rgn: out parameter for the start of isolated region
- * @end_rgn: out parameter for the end of isolated region
+ * @end_rgn: out parameter for the (end + 1) of isolated region
  *
  * Walk @type and ensure that regions don't cross the boundaries defined by
  * [@base, @base + @size).  Crossing regions are split at the boundaries,
  * which may create at most two more regions.  The index of the first
- * region inside the range is returned in *@start_rgn and end in *@end_rgn.
+ * region inside the range is returned in *@start_rgn and (end + 1) in *@end_rgn.
  *
  * Return:
  * 0 on success, -errno on failure.
-- 
2.34.1




* [PATCH 4/6] mm/memblock: remove consecutive regions at once
  2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
  2024-04-14  0:45 ` [PATCH 2/6] memblock tests: add the 129th memory block at all possible position Wei Yang
  2024-04-14  0:45 ` [PATCH 3/6] mm/memblock: fix comment for memblock_isolate_range() Wei Yang
@ 2024-04-14  0:45 ` Wei Yang
  2024-04-14  0:45 ` [PATCH 5/6] memblock tests: add memblock_overlaps_region_checks Wei Yang
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-14  0:45 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

In memblock_remove_range(), consecutive regions are removed after
isolation.

Instead of removing each region one by one from end to start, remove
the whole consecutive run at once.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 2d7a2431803f..3fecee4bd41a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -343,12 +343,15 @@ static phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
 	return ret;
 }
 
-static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
+static void __init_memblock memblock_remove_regions(struct memblock_type *type,
+						    unsigned long r,
+						    unsigned int num)
 {
-	type->total_size -= type->regions[r].size;
-	memmove(&type->regions[r], &type->regions[r + 1],
-		(type->cnt - (r + 1)) * sizeof(type->regions[r]));
-	type->cnt--;
+	for (int i = 0; i < num; i++)
+		type->total_size -= type->regions[r + i].size;
+	memmove(&type->regions[r], &type->regions[r + num],
+		(type->cnt - (r + num)) * sizeof(type->regions[r]));
+	type->cnt -= num;
 
 	/* Special case for empty arrays */
 	if (type->cnt == 0) {
@@ -360,6 +363,11 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
 	}
 }
 
+static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
+{
+	memblock_remove_regions(type, r, 1);
+}
+
 #ifndef CONFIG_ARCH_KEEP_MEMBLOCK
 /**
  * memblock_discard - discard memory and reserved arrays if they were allocated
@@ -816,14 +824,14 @@ static int __init_memblock memblock_remove_range(struct memblock_type *type,
 					  phys_addr_t base, phys_addr_t size)
 {
 	int start_rgn, end_rgn;
-	int i, ret;
+	int ret;
 
 	ret = memblock_isolate_range(type, base, size, &start_rgn, &end_rgn);
 	if (ret)
 		return ret;
 
-	for (i = end_rgn - 1; i >= start_rgn; i--)
-		memblock_remove_region(type, i);
+	if (end_rgn - start_rgn)
+		memblock_remove_regions(type, start_rgn, end_rgn - start_rgn);
 	return 0;
 }
 
-- 
2.34.1




* [PATCH 5/6] memblock tests: add memblock_overlaps_region_checks
  2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
                   ` (2 preceding siblings ...)
  2024-04-14  0:45 ` [PATCH 4/6] mm/memblock: remove consecutive regions at once Wei Yang
@ 2024-04-14  0:45 ` Wei Yang
  2024-04-14  0:45 ` [PATCH 6/6] mm/memblock: return true directly on finding overlap region Wei Yang
  2024-04-15 15:17 ` [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Mike Rapoport
  5 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-14  0:45 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

Add a test case for memblock_overlaps_region().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 tools/testing/memblock/tests/basic_api.c | 48 ++++++++++++++++++++++++
 tools/testing/memblock/tests/common.h    |  3 ++
 2 files changed, 51 insertions(+)

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index f1569ebb9872..5adb0c1eaab9 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -2143,6 +2143,53 @@ static int memblock_trim_memory_checks(void)
 	return 0;
 }
 
+static int memblock_overlaps_region_check(void)
+{
+	struct region r = {
+		.base = SZ_1G,
+		.size = SZ_4M
+	};
+
+	PREFIX_PUSH();
+
+	reset_memblock_regions();
+	memblock_add(r.base, r.size);
+
+	/* Far Away */
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1M, SZ_1M));
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_2G, SZ_1M));
+
+	/* Neighbor */
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_1M, SZ_1M));
+	ASSERT_FALSE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_4M, SZ_1M));
+
+	/* Partial Overlap */
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_1M, SZ_2M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_2M, SZ_2M));
+
+	/* Totally Overlap */
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G, SZ_4M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G - SZ_2M, SZ_8M));
+	ASSERT_TRUE(memblock_overlaps_region(&memblock.memory, SZ_1G + SZ_1M, SZ_1M));
+
+	test_pass_pop();
+
+	return 0;
+}
+
+static int memblock_overlaps_region_checks(void)
+{
+	prefix_reset();
+	prefix_push("memblock_overlaps_region");
+	test_print("Running memblock_overlaps_region tests...\n");
+
+	memblock_overlaps_region_check();
+
+	prefix_pop();
+
+	return 0;
+}
+
 int memblock_basic_checks(void)
 {
 	memblock_initialization_check();
@@ -2152,6 +2199,7 @@ int memblock_basic_checks(void)
 	memblock_free_checks();
 	memblock_bottom_up_checks();
 	memblock_trim_memory_checks();
+	memblock_overlaps_region_checks();
 
 	return 0;
 }
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index b5ec59aa62d7..d8f4c9e64055 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -39,6 +39,9 @@ enum test_flags {
 	assert((_expected) == (_seen)); \
 } while (0)
 
+#define ASSERT_TRUE(_seen) ASSERT_EQ(true, _seen)
+#define ASSERT_FALSE(_seen) ASSERT_EQ(false, _seen)
+
 /**
  * ASSERT_NE():
  * Check the condition
-- 
2.34.1




* [PATCH 6/6] mm/memblock: return true directly on finding overlap region
  2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
                   ` (3 preceding siblings ...)
  2024-04-14  0:45 ` [PATCH 5/6] memblock tests: add memblock_overlaps_region_checks Wei Yang
@ 2024-04-14  0:45 ` Wei Yang
  2024-04-15 15:17 ` [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Mike Rapoport
  5 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-14  0:45 UTC (permalink / raw)
  To: rppt, akpm; +Cc: linux-mm, Wei Yang

There is no need to break out of the loop and then check i against
type->cnt again; return true directly when an overlapping region is
found.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/memblock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 3fecee4bd41a..77719126208a 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -194,8 +194,8 @@ bool __init_memblock memblock_overlaps_region(struct memblock_type *type,
 	for (i = 0; i < type->cnt; i++)
 		if (memblock_addrs_overlap(base, size, type->regions[i].base,
 					   type->regions[i].size))
-			break;
-	return i < type->cnt;
+			return true;
+	return false;
 }
 
 /**
-- 
2.34.1




* Re: [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range()
  2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
                   ` (4 preceding siblings ...)
  2024-04-14  0:45 ` [PATCH 6/6] mm/memblock: return true directly on finding overlap region Wei Yang
@ 2024-04-15 15:17 ` Mike Rapoport
  2024-04-22  2:55   ` Wei Yang
  5 siblings, 1 reply; 16+ messages in thread
From: Mike Rapoport @ 2024-04-15 15:17 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm, Jinyu Tang, Peng Zhang

On Sun, Apr 14, 2024 at 12:45:26AM +0000, Wei Yang wrote:
> Current memblock_add_range() does the insertion with two round
> iteration.
> 
> First round does the calculation of new region required, and second
> round does the actual insertion. Between them, if current max can't meet
> new region requirement, it is expanded.
> 
> The worst case is:
> 
> 1. cnt == max
> 2. new range overlaps all existing region
> 
> This means the final cnt should be (2 * max + 1). Since we always double
> the array size, this means we only need to double the array twice at the
> worst case, which is fairly rare. For other cases, only once array
> double is enough.
> 
> Let's double the array immediately when there is no room for new region.
> This simplify the code a little.

Very similar patch was posted a while ago:
https://lore.kernel.org/all/20221025070943.363578-1-yajun.deng@linux.dev

and it caused boot regression:
https://lore.kernel.org/linux-mm/Y2oLYB7Tu7J91tVm@linux.ibm.com

> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> CC: Jinyu Tang <tjytimi@163.com>
> CC: Peng Zhang <zhangpeng.00@bytedance.com>
> ---
>  mm/memblock.c | 74 +++++++++++++++------------------------------------
>  1 file changed, 22 insertions(+), 52 deletions(-)
> 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 98d25689cf10..b46109300927 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -585,10 +585,9 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>  				phys_addr_t base, phys_addr_t size,
>  				int nid, enum memblock_flags flags)
>  {
> -	bool insert = false;
>  	phys_addr_t obase = base;
>  	phys_addr_t end = base + memblock_cap_size(base, &size);
> -	int idx, nr_new, start_rgn = -1, end_rgn;
> +	int idx, start_rgn = -1, end_rgn;
>  	struct memblock_region *rgn;
>  
>  	if (!size)
> @@ -606,25 +605,6 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>  		return 0;
>  	}
>  
> -	/*
> -	 * The worst case is when new range overlaps all existing regions,
> -	 * then we'll need type->cnt + 1 empty regions in @type. So if
> -	 * type->cnt * 2 + 1 is less than or equal to type->max, we know
> -	 * that there is enough empty regions in @type, and we can insert
> -	 * regions directly.
> -	 */
> -	if (type->cnt * 2 + 1 <= type->max)
> -		insert = true;
> -
> -repeat:
> -	/*
> -	 * The following is executed twice.  Once with %false @insert and
> -	 * then with %true.  The first counts the number of regions needed
> -	 * to accommodate the new area.  The second actually inserts them.
> -	 */
> -	base = obase;
> -	nr_new = 0;
> -
>  	for_each_memblock_type(idx, type, rgn) {
>  		phys_addr_t rbase = rgn->base;
>  		phys_addr_t rend = rbase + rgn->size;
> @@ -642,15 +622,17 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>  			WARN_ON(nid != memblock_get_region_node(rgn));
>  #endif
>  			WARN_ON(flags != rgn->flags);
> -			nr_new++;
> -			if (insert) {
> -				if (start_rgn == -1)
> -					start_rgn = idx;
> -				end_rgn = idx + 1;
> -				memblock_insert_region(type, idx++, base,
> -						       rbase - base, nid,
> -						       flags);
> +			if (type->cnt >= type->max) {
> +				if (memblock_double_array(type, obase, size) < 0)
> +					return -ENOMEM;
>  			}
> +
> +			if (start_rgn == -1)
> +				start_rgn = idx;
> +			end_rgn = idx + 1;
> +			memblock_insert_region(type, idx++, base,
> +					       rbase - base, nid,
> +					       flags);
>  		}
>  		/* area below @rend is dealt with, forget about it */
>  		base = min(rend, end);
> @@ -658,33 +640,21 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
>  
>  	/* insert the remaining portion */
>  	if (base < end) {
> -		nr_new++;
> -		if (insert) {
> -			if (start_rgn == -1)
> -				start_rgn = idx;
> -			end_rgn = idx + 1;
> -			memblock_insert_region(type, idx, base, end - base,
> -					       nid, flags);
> +		if (type->cnt >= type->max) {
> +			if (memblock_double_array(type, obase, size) < 0)
> +				return -ENOMEM;
>  		}
> -	}
>  
> -	if (!nr_new)
> -		return 0;
> +		if (start_rgn == -1)
> +			start_rgn = idx;
> +		end_rgn = idx + 1;
> +		memblock_insert_region(type, idx, base, end - base,
> +				       nid, flags);
> +	}
>  
> -	/*
> -	 * If this was the first round, resize array and repeat for actual
> -	 * insertions; otherwise, merge and return.
> -	 */
> -	if (!insert) {
> -		while (type->cnt + nr_new > type->max)
> -			if (memblock_double_array(type, obase, size) < 0)
> -				return -ENOMEM;
> -		insert = true;
> -		goto repeat;
> -	} else {
> +	if (start_rgn != -1)
>  		memblock_merge_regions(type, start_rgn, end_rgn);
> -		return 0;
> -	}
> +	return 0;
>  }
>  
>  /**
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-14  0:45 ` [PATCH 2/6] memblock tests: add the 129th memory block at all possible position Wei Yang
@ 2024-04-15 15:19   ` Mike Rapoport
  2024-04-16 12:55     ` Wei Yang
  0 siblings, 1 reply; 16+ messages in thread
From: Mike Rapoport @ 2024-04-15 15:19 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
> After previous change, we may double the array based on the position of
> the new range.
> 
> Let's make sure the 129th memory block would double the size correctly
> at all possible position.

Rather than rewrite an existing test, just add a new one.
Besides, it would be more interesting to test additions to
memblock.reserved and a mix of memblock_add() and memblock_reserve() that
will require resizing the memblock arrays.
 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  tools/testing/memblock/tests/basic_api.c | 132 +++++++++++++----------
>  1 file changed, 73 insertions(+), 59 deletions(-)
> 
> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> index f317fe691fc4..f1569ebb9872 100644
> --- a/tools/testing/memblock/tests/basic_api.c
> +++ b/tools/testing/memblock/tests/basic_api.c
> @@ -431,84 +431,98 @@ static int memblock_add_near_max_check(void)
>   */
>  static int memblock_add_many_check(void)
>  {
> -	int i;
> +	int i, skip;
>  	void *orig_region;
>  	struct region r = {
>  		.base = SZ_16K,
>  		.size = SZ_16K,
>  	};
>  	phys_addr_t new_memory_regions_size;
> -	phys_addr_t base, size = SZ_64;
> +	phys_addr_t base, base_start, size = SZ_64;
>  	phys_addr_t gap_size = SZ_64;
>  
>  	PREFIX_PUSH();
>  
> -	reset_memblock_regions();
> -	memblock_allow_resize();
> -
> -	dummy_physical_memory_init();
> -	/*
> -	 * We allocated enough memory by using dummy_physical_memory_init(), and
> -	 * split it into small block. First we split a large enough memory block
> -	 * as the memory region which will be choosed by memblock_double_array().
> -	 */
> -	base = PAGE_ALIGN(dummy_physical_memory_base());
> -	new_memory_regions_size = PAGE_ALIGN(INIT_MEMBLOCK_REGIONS * 2 *
> -					     sizeof(struct memblock_region));
> -	memblock_add(base, new_memory_regions_size);
> -
> -	/* This is the base of small memory block. */
> -	base += new_memory_regions_size + gap_size;
> -
> -	orig_region = memblock.memory.regions;
> +	/* Add the 129th memory block for all possible positions*/
> +	for (skip = 0; skip < INIT_MEMBLOCK_REGIONS; skip++) {
> +		reset_memblock_regions();
> +		memblock_allow_resize();
>  
> -	for (i = 0; i < INIT_MEMBLOCK_REGIONS; i++) {
> +		dummy_physical_memory_init();
>  		/*
> -		 * Add these small block to fulfill the memblock. We keep a
> -		 * gap between the nearby memory to avoid being merged.
> +		 * We allocated enough memory by using dummy_physical_memory_init(), and
> +		 * split it into small block. First we split a large enough memory block
> +		 * as the memory region which will be choosed by memblock_double_array().
>  		 */
> -		memblock_add(base, size);
> -		base += size + gap_size;
> -
> -		ASSERT_EQ(memblock.memory.cnt, i + 2);
> +		base = PAGE_ALIGN(dummy_physical_memory_base());
> +		new_memory_regions_size = PAGE_ALIGN(INIT_MEMBLOCK_REGIONS * 2 *
> +						     sizeof(struct memblock_region));
> +		memblock_add(base, new_memory_regions_size);
> +
> +		/* This is the base of small memory block. */
> +		base_start = base += new_memory_regions_size + gap_size;
> +		orig_region = memblock.memory.regions;
> +
> +		for (i = 0; i < INIT_MEMBLOCK_REGIONS; i++, base += size + gap_size) {
> +			if (i == skip)
> +				continue;
> +			/*
> +			 * Add these small block to fulfill the memblock. We keep a
> +			 * gap between the nearby memory to avoid being merged.
> +			 */
> +			memblock_add(base, size);
> +
> +			if (i < skip) {
> +				ASSERT_EQ(memblock.memory.cnt, i + 2);
> +				ASSERT_EQ(memblock.memory.total_size,
> +					  new_memory_regions_size + (i + 1) * size);
> +			} else {
> +				ASSERT_EQ(memblock.memory.cnt, i + 1);
> +				ASSERT_EQ(memblock.memory.total_size,
> +					  new_memory_regions_size + i * size);
> +			}
> +		}
> +
> +		/* Add the skipped one to trigger memblock_double_array() */
> +		memblock_add(base_start + skip * (size + gap_size), size);
> +		ASSERT_EQ(memblock.memory.cnt, i + 1);
>  		ASSERT_EQ(memblock.memory.total_size, new_memory_regions_size +
> -						      (i + 1) * size);
> -	}
> +							      i * size);
> +		/*
> +		 * At there, memblock_double_array() has been succeed, check if it
> +		 * update the memory.max.
> +		 */
> +		ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
>  
> -	/*
> -	 * At there, memblock_double_array() has been succeed, check if it
> -	 * update the memory.max.
> -	 */
> -	ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
> +		/* memblock_double_array() will reserve the memory it used. Check it. */
> +		ASSERT_EQ(memblock.reserved.cnt, 1);
> +		ASSERT_EQ(memblock.reserved.total_size, new_memory_regions_size);
>  
> -	/* memblock_double_array() will reserve the memory it used. Check it. */
> -	ASSERT_EQ(memblock.reserved.cnt, 1);
> -	ASSERT_EQ(memblock.reserved.total_size, new_memory_regions_size);
> -
> -	/*
> -	 * Now memblock_double_array() works fine. Let's check after the
> -	 * double_array(), the memblock_add() still works as normal.
> -	 */
> -	memblock_add(r.base, r.size);
> -	ASSERT_EQ(memblock.memory.regions[0].base, r.base);
> -	ASSERT_EQ(memblock.memory.regions[0].size, r.size);
> +		/*
> +		 * Now memblock_double_array() works fine. Check that, after
> +		 * doubling, memblock_add() still works as normal.
> +		 */
> +		memblock_add(r.base, r.size);
> +		ASSERT_EQ(memblock.memory.regions[0].base, r.base);
> +		ASSERT_EQ(memblock.memory.regions[0].size, r.size);
>  
> -	ASSERT_EQ(memblock.memory.cnt, INIT_MEMBLOCK_REGIONS + 2);
> -	ASSERT_EQ(memblock.memory.total_size, INIT_MEMBLOCK_REGIONS * size +
> -					      new_memory_regions_size +
> -					      r.size);
> -	ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
> +		ASSERT_EQ(memblock.memory.cnt, INIT_MEMBLOCK_REGIONS + 2);
> +		ASSERT_EQ(memblock.memory.total_size, INIT_MEMBLOCK_REGIONS * size +
> +						      new_memory_regions_size +
> +						      r.size);
> +		ASSERT_EQ(memblock.memory.max, INIT_MEMBLOCK_REGIONS * 2);
>  
> -	dummy_physical_memory_cleanup();
> +		dummy_physical_memory_cleanup();
>  
> -	/*
> -	 * The current memory.regions is occupying a range of memory that
> -	 * allocated from dummy_physical_memory_init(). After free the memory,
> -	 * we must not use it. So restore the origin memory region to make sure
> -	 * the tests can run as normal and not affected by the double array.
> -	 */
> -	memblock.memory.regions = orig_region;
> -	memblock.memory.cnt = INIT_MEMBLOCK_REGIONS;
> +		/*
> +		 * The current memory.regions occupies a range of memory allocated
> +		 * by dummy_physical_memory_init(). After freeing that memory we must
> +		 * not use it, so restore the original regions array to make sure the
> +		 * tests run as normal and are not affected by the doubled array.
> +		 */
> +		memblock.memory.regions = orig_region;
> +		memblock.memory.cnt = INIT_MEMBLOCK_REGIONS;
> +	}
>  
>  	test_pass_pop();
>  
> -- 
> 2.34.1
> 

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-15 15:19   ` Mike Rapoport
@ 2024-04-16 12:55     ` Wei Yang
  2024-04-17  5:51       ` Mike Rapoport
  0 siblings, 1 reply; 16+ messages in thread
From: Wei Yang @ 2024-04-16 12:55 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Mon, Apr 15, 2024 at 06:19:42PM +0300, Mike Rapoport wrote:
>On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
>> After previous change, we may double the array based on the position of
>> the new range.
>> 
>> Let's make sure the 129th memory block would double the size correctly
>> at all possible position.
>
>Rather than rewrite an existing test, just add a new one.

Ok, will add a new one for this.

>Besides, it would be more interesting to test additions to
>memblock.reserved and a mix of memblock_add() and memblock_reserve() that
>will require resizing the memblock arrays.

I don't get this very clearly. Would you mind giving more hints?

> 
>> -- 
>> 2.34.1
>> 
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-16 12:55     ` Wei Yang
@ 2024-04-17  5:51       ` Mike Rapoport
  2024-04-18  9:02         ` Wei Yang
  2024-04-19  3:15         ` Wei Yang
  0 siblings, 2 replies; 16+ messages in thread
From: Mike Rapoport @ 2024-04-17  5:51 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Tue, Apr 16, 2024 at 12:55:31PM +0000, Wei Yang wrote:
> On Mon, Apr 15, 2024 at 06:19:42PM +0300, Mike Rapoport wrote:
> >On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
> >> After previous change, we may double the array based on the position of
> >> the new range.
> >> 
> >> Let's make sure the 129th memory block would double the size correctly
> >> at all possible position.
> >
> >Rather than rewrite an existing test, just add a new one.
> 
> Ok, will add a new one for this.
> 
> >Besides, it would be more interesting to test additions to
> >memblock.reserved and a mix of memblock_add() and memblock_reserve() that
> >will require resizing the memblock arrays.
> 
> I don't get this very clearly. Would you mind give more hint?

There is memblock_reserve_many_check() that verifies that memblock.reserved
is properly resized. I think it's better to add a test that adds the 129th block
at multiple locations to memblock.reserved.
 
> -- 
> Wei Yang
> Help you, Help me

-- 
Sincerely yours,
Mike.



* Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-17  5:51       ` Mike Rapoport
@ 2024-04-18  9:02         ` Wei Yang
  2024-04-19  3:15         ` Wei Yang
  1 sibling, 0 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-18  9:02 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Wed, Apr 17, 2024 at 08:51:14AM +0300, Mike Rapoport wrote:
>On Tue, Apr 16, 2024 at 12:55:31PM +0000, Wei Yang wrote:
>> On Mon, Apr 15, 2024 at 06:19:42PM +0300, Mike Rapoport wrote:
>> >On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
>> >> After previous change, we may double the array based on the position of
>> >> the new range.
>> >> 
>> >> Let's make sure the 129th memory block would double the size correctly
>> >> at all possible position.
>> >
>> >Rather than rewrite an existing test, just add a new one.
>> 
>> Ok, will add a new one for this.
>> 
>> >Besides, it would be more interesting to test additions to
>> >memblock.reserved and a mix of memblock_add() and memblock_reserve() that
>> >will require resizing the memblock arrays.
>> 
>> I don't get this very clearly. Would you mind give more hint?
>
>There is memblock_reserve_many_check() that verifies that memblock.reserved
>is properly resized. I think it's better to add test that adds 129th block
>at multiple locations to memblock.reserved.
> 

I wrote a draft as below; is this what you expect?

diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index f1569ebb9872..d2b8114921f9 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -912,84 +912,94 @@ static int memblock_reserve_near_max_check(void)
  * memblock.memory.max, find a new valid memory as
  * reserved.regions.
  */
+/* Keep a gap so these memory regions will not be merged. */
+#define MEMORY_BASE(idx) (SZ_128K + (MEM_SIZE * 2) * (idx))
 static int memblock_reserve_many_check(void)
 {
-	int i;
+	int i, skip;
 	void *orig_region;
 	struct region r = {
 		.base = SZ_16K,
 		.size = SZ_16K,
 	};
-	phys_addr_t memory_base = SZ_128K;
 	phys_addr_t new_reserved_regions_size;
 
 	PREFIX_PUSH();
 
-	reset_memblock_regions();
-	memblock_allow_resize();
+	/* Reserve the 129th memory block at all possible positions */
+	for (skip = 0; skip < INIT_MEMBLOCK_REGIONS + 1; skip++)
+	{
+		reset_memblock_regions();
+		memblock_allow_resize();
 
-	/* Add a valid memory region used by double_array(). */
-	dummy_physical_memory_init();
-	memblock_add(dummy_physical_memory_base(), MEM_SIZE);
+		/* Add a valid memory region used by double_array(). */
+		dummy_physical_memory_init();
+		memblock_add(dummy_physical_memory_base(), MEM_SIZE);
 
-	for (i = 0; i < INIT_MEMBLOCK_REGIONS; i++) {
-		/* Reserve some fakes memory region to fulfill the memblock. */
-		memblock_reserve(memory_base, MEM_SIZE);
+		for (i = 0; i < INIT_MEMBLOCK_REGIONS + 1; i++) {
+			if (i == skip)
+				continue;
 
-		ASSERT_EQ(memblock.reserved.cnt, i + 1);
-		ASSERT_EQ(memblock.reserved.total_size, (i + 1) * MEM_SIZE);
+			/* Reserve some fake memory regions to fill up the memblock. */
+			memblock_reserve(MEMORY_BASE(i), MEM_SIZE);
 
-		/* Keep the gap so these memory region will not be merged. */
-		memory_base += MEM_SIZE * 2;
-	}
+			if (i < skip) {
+				ASSERT_EQ(memblock.reserved.cnt, i + 1);
+				ASSERT_EQ(memblock.reserved.total_size, (i + 1) * MEM_SIZE);
+			} else {
+				ASSERT_EQ(memblock.reserved.cnt, i);
+				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
+			}
+		}
 
-	orig_region = memblock.reserved.regions;
-
-	/* This reserve the 129 memory_region, and makes it double array. */
-	memblock_reserve(memory_base, MEM_SIZE);
-
-	/*
-	 * This is the memory region size used by the doubled reserved.regions,
-	 * and it has been reserved due to it has been used. The size is used to
-	 * calculate the total_size that the memblock.reserved have now.
-	 */
-	new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
-					sizeof(struct memblock_region));
-	/*
-	 * The double_array() will find a free memory region as the new
-	 * reserved.regions, and the used memory region will be reserved, so
-	 * there will be one more region exist in the reserved memblock. And the
-	 * one more reserved region's size is new_reserved_regions_size.
-	 */
-	ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
-	ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
-						new_reserved_regions_size);
-	ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
-
-	/*
-	 * Now memblock_double_array() works fine. Let's check after the
-	 * double_array(), the memblock_reserve() still works as normal.
-	 */
-	memblock_reserve(r.base, r.size);
-	ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
-	ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
-
-	ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
-	ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
-						new_reserved_regions_size +
-						r.size);
-	ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
-
-	dummy_physical_memory_cleanup();
-
-	/*
-	 * The current reserved.regions is occupying a range of memory that
-	 * allocated from dummy_physical_memory_init(). After free the memory,
-	 * we must not use it. So restore the origin memory region to make sure
-	 * the tests can run as normal and not affected by the double array.
-	 */
-	memblock.reserved.regions = orig_region;
-	memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+		orig_region = memblock.reserved.regions;
+
+		/* This reserves the 129th memory region and doubles the array. */
+		memblock_reserve(MEMORY_BASE(skip), MEM_SIZE);
+
+		/*
+		 * This is the memory region size used by the doubled reserved.regions,
+		 * and it has been reserved because it is in use. The size is used to
+		 * calculate the total_size that memblock.reserved has now.
+		 */
+		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
+						sizeof(struct memblock_region));
+		/*
+		 * double_array() will find a free memory region to hold the new
+		 * reserved.regions, and the region it uses will be reserved, so
+		 * one more region will exist in memblock.reserved. That extra
+		 * region's size is new_reserved_regions_size.
+		 */
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		/*
+		 * Now memblock_double_array() works fine. Check that, after
+		 * doubling, memblock_reserve() still works as normal.
+		 */
+		memblock_reserve(r.base, r.size);
+		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
+		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
+
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size +
+							r.size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		dummy_physical_memory_cleanup();
+
+		/*
+		 * The current reserved.regions occupies a range of memory allocated
+		 * by dummy_physical_memory_init(). After freeing that memory we must
+		 * not use it, so restore the original regions array to make sure the
+		 * tests run as normal and are not affected by the doubled array.
+		 */
+		memblock.reserved.regions = orig_region;
+		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+	}
 
 	test_pass_pop();
 
-- 
2.34.1


>> -- 
>> Wei Yang
>> Help you, Help me
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-17  5:51       ` Mike Rapoport
  2024-04-18  9:02         ` Wei Yang
@ 2024-04-19  3:15         ` Wei Yang
  2024-04-24 13:13           ` Mike Rapoport
  1 sibling, 1 reply; 16+ messages in thread
From: Wei Yang @ 2024-04-19  3:15 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm

On Wed, Apr 17, 2024 at 08:51:14AM +0300, Mike Rapoport wrote:
>On Tue, Apr 16, 2024 at 12:55:31PM +0000, Wei Yang wrote:
>> On Mon, Apr 15, 2024 at 06:19:42PM +0300, Mike Rapoport wrote:
>> >On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
>> >> After previous change, we may double the array based on the position of
>> >> the new range.
>> >> 
>> >> Let's make sure the 129th memory block would double the size correctly
>> >> at all possible position.
>> >
>> >Rather than rewrite an existing test, just add a new one.
>> 
>> Ok, will add a new one for this.
>> 
>> >Besides, it would be more interesting to test additions to
>> >memblock.reserved and a mix of memblock_add() and memblock_reserve() that
>> >will require resizing the memblock arrays.
>> 
>> I don't get this very clearly. Would you mind give more hint?
>
>There is memblock_reserve_many_check() that verifies that memblock.reserved
>is properly resized. I think it's better to add test that adds 129th block
>at multiple locations to memblock.reserved.
> 

I came up with another version, which can catch the bug fixed by commit
48c3b583bbdd ("mm/memblock: fix overlapping allocation when doubling reserved
array").

With the fix commented out, the test fails with a cnt mismatch after doubling the array.

I am not sure whether you prefer to have both, or to replace the current
memblock_reserve_many_check() with this version.


diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
index d2b8114921f9..fb76471108b2 100644
--- a/tools/testing/memblock/tests/basic_api.c
+++ b/tools/testing/memblock/tests/basic_api.c
@@ -1006,6 +1006,119 @@ static int memblock_reserve_many_check(void)
 	return 0;
 }
 
+/* Keep a gap so these memory regions will not be merged. */
+#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
+static int memblock_reserve_many_conflict_check(void)
+{
+	int i, skip;
+	void *orig_region;
+	struct region r = {
+		.base = SZ_16K,
+		.size = SZ_16K,
+	};
+	phys_addr_t new_reserved_regions_size;
+
+	/*
+	 *  0        1          129
+	 *  +---+    +---+      +---+
+	 *  |32K|    |32K|  ..  |32K|
+	 *  +---+    +---+      +---+
+	 *
+	 * Pre-allocate the ranges for 129 memory blocks plus one range for the
+	 * doubled memblock.reserved.regions at idx 0.
+	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
+	 * when doubling reserved array")
+	 */
+	phys_addr_t memory_base = (phys_addr_t)malloc(130 * (2 * SZ_32K));
+	phys_addr_t offset = PAGE_ALIGN(memory_base);
+
+	PREFIX_PUSH();
+
+	/* Reserve the 129th memory block at all possible positions */
+	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++)
+	{
+		reset_memblock_regions();
+		memblock_allow_resize();
+
+		reset_memblock_attributes();
+		/* Add a valid memory region used by double_array(). */
+		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
+		/*
+		 * Add a memory region which will be reserved as the 129th memory
+		 * region. This is not expected to be used by double_array().
+		 */
+		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
+
+		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
+			if (i == skip)
+				continue;
+
+			/* Reserve some fake memory regions to fill up the memblock. */
+			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
+
+			if (i < skip) {
+				ASSERT_EQ(memblock.reserved.cnt, i);
+				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
+			} else {
+				ASSERT_EQ(memblock.reserved.cnt, i - 1);
+				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
+			}
+		}
+
+		orig_region = memblock.reserved.regions;
+
+		/* This reserves the 129th memory region and doubles the array. */
+		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
+
+		/*
+		 * This is the memory region size used by the doubled reserved.regions,
+		 * and it has been reserved because it is in use. The size is used to
+		 * calculate the total_size that memblock.reserved has now.
+		 */
+		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
+						sizeof(struct memblock_region));
+		/*
+		 * double_array() will find a free memory region to hold the new
+		 * reserved.regions, and the region it uses will be reserved, so
+		 * one more region will exist in memblock.reserved. That extra
+		 * region's size is new_reserved_regions_size.
+		 */
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		/*
+		 * Now memblock_double_array() works fine. Check that, after
+		 * doubling, memblock_reserve() still works as normal.
+		 */
+		memblock_reserve(r.base, r.size);
+		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
+		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
+
+		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
+		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
+							new_reserved_regions_size +
+							r.size);
+		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
+
+		/*
+		 * The current reserved.regions occupies a range of memory allocated
+		 * above with malloc(). After freeing that memory we must not use it,
+		 * so restore the original regions array to make sure the tests run
+		 * as normal and are not affected by the doubled array.
+		 */
+		memblock.reserved.regions = orig_region;
+		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
+	}
+
+	free((void *)memory_base);
+
+	test_pass_pop();
+
+	return 0;
+}
+
 static int memblock_reserve_checks(void)
 {
 	prefix_reset();
@@ -1021,6 +1134,7 @@ static int memblock_reserve_checks(void)
 	memblock_reserve_between_check();
 	memblock_reserve_near_max_check();
 	memblock_reserve_many_check();
+	memblock_reserve_many_conflict_check();
 
 	prefix_pop();
 
-- 
2.34.1

>> -- 
>> Wei Yang
>> Help you, Help me
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me



* Re: [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range()
  2024-04-15 15:17 ` [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Mike Rapoport
@ 2024-04-22  2:55   ` Wei Yang
  2024-04-24 13:15     ` Mike Rapoport
  0 siblings, 1 reply; 16+ messages in thread
From: Wei Yang @ 2024-04-22  2:55 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm, Jinyu Tang, Peng Zhang

On Mon, Apr 15, 2024 at 06:17:56PM +0300, Mike Rapoport wrote:
>On Sun, Apr 14, 2024 at 12:45:26AM +0000, Wei Yang wrote:
>> Current memblock_add_range() does the insertion with two round
>> iteration.
>> 
>> First round does the calculation of new region required, and second
>> round does the actual insertion. Between them, if current max can't meet
>> new region requirement, it is expanded.
>> 
>> The worst case is:
>> 
>> 1. cnt == max
>> 2. new range overlaps all existing region
>> 
>> This means the final cnt should be (2 * max + 1). Since we always double
>> the array size, this means we only need to double the array twice at the
>> worst case, which is fairly rare. For other cases, only once array
>> double is enough.
>> 
>> Let's double the array immediately when there is no room for new region.
>> This simplify the code a little.
>
>Very similar patch was posted a while ago:
>https://lore.kernel.org/all/20221025070943.363578-1-yajun.deng@linux.dev
>
>and it caused boot regression:
>https://lore.kernel.org/linux-mm/Y2oLYB7Tu7J91tVm@linux.ibm.com
>

Got the reason for the error.

When we add a range to memblock.reserved and need to double the array, the
following calls happen:

memblock_add_range()
    idx <- where we want to insert new range
    memblock_double_array()
        find available range for new regions
	memblock_reserve() available range
    memblock_insert_region() at idx

The error happens when the available range found is below the range we want
to add; the idx may change after memblock_reserve().

The following change looks like it could fix it.

diff --git a/mm/memblock.c b/mm/memblock.c
index 98d25689cf10..52bc9a4b20da 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -395,6 +395,7 @@ void __init memblock_discard(void)
 /**
  * memblock_double_array - double the size of the memblock regions array
  * @type: memblock type of the regions array being doubled
+ * @idx: current region index if we are iterating
  * @new_area_start: starting address of memory range to avoid overlap with
  * @new_area_size: size of memory range to avoid overlap with
  *
@@ -408,6 +409,7 @@ void __init memblock_discard(void)
  * 0 on success, -1 on failure.
  */
 static int __init_memblock memblock_double_array(struct memblock_type *type,
+						int *idx,
 						phys_addr_t new_area_start,
 						phys_addr_t new_area_size)
 {
@@ -490,8 +492,24 @@ static int __init_memblock memblock_double_array(struct memblock_type *type,
 	 * Reserve the new array if that comes from the memblock.  Otherwise, we
 	 * needn't do it
 	 */
-	if (!use_slab)
+	if (!use_slab) {
+		/*
+		 * When doubling the array for reserved.regions, we may need to
+		 * adjust the index if new_array is found below
+		 * new_area_start. This is because memblock_reserve() below
+		 * will insert the range ahead of us.
+		 * The insertion may result in three cases:
+		 * 1. not adjacent to any region: index increases by one
+		 * 2. adjacent to one region: index unchanged
+		 * 3. adjacent to two regions: index decreases by one
+		 */
+		int ocnt = -1;
+		if (idx && type == &memblock.reserved && addr <= new_area_start)
+			ocnt = type->cnt;
 		BUG_ON(memblock_reserve(addr, new_alloc_size));
+		if (ocnt >= 0)
+			*idx += type->cnt - ocnt;
+	}
 
 	/* Update slab flag */
 	*in_slab = use_slab;


-- 
Wei Yang
Help you, Help me



* Re: [PATCH 2/6] memblock tests: add the 129th memory block at all possible position
  2024-04-19  3:15         ` Wei Yang
@ 2024-04-24 13:13           ` Mike Rapoport
  0 siblings, 0 replies; 16+ messages in thread
From: Mike Rapoport @ 2024-04-24 13:13 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm

On Fri, Apr 19, 2024 at 03:15:20AM +0000, Wei Yang wrote:
> On Wed, Apr 17, 2024 at 08:51:14AM +0300, Mike Rapoport wrote:
> >On Tue, Apr 16, 2024 at 12:55:31PM +0000, Wei Yang wrote:
> >> On Mon, Apr 15, 2024 at 06:19:42PM +0300, Mike Rapoport wrote:
> >> >On Sun, Apr 14, 2024 at 12:45:27AM +0000, Wei Yang wrote:
> >> >> After previous change, we may double the array based on the position of
> >> >> the new range.
> >> >> 
> >> >> Let's make sure the 129th memory block would double the size correctly
> >> >> at all possible position.
> >> >
> >> >Rather than rewrite an existing test, just add a new one.
> >> 
> >> Ok, will add a new one for this.
> >> 
> >> >Besides, it would be more interesting to test additions to
> >> >memblock.reserved and a mix of memblock_add() and memblock_reserve() that
> >> >will require resizing the memblock arrays.
> >> 
> >> I don't get this very clearly. Would you mind give more hint?
> >
> >There is memblock_reserve_many_check() that verifies that memblock.reserved
> >is properly resized. I think it's better to add test that adds 129th block
> >at multiple locations to memblock.reserved.
> > 
> 
> I come up with another version, which could address the bug fixed by commit
> 48c3b583bbdd ("mm/memblock: fix overlapping allocation when doubling reserved
> array").
> 
> Comment out the fix, the test failed since cnt mismatch after double array.
> 
> Not sure you prefer to have both or just leave this version by replacing
> current memblock_reserve_many_check().

Let's have both of them.
 
> diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c
> index d2b8114921f9..fb76471108b2 100644
> --- a/tools/testing/memblock/tests/basic_api.c
> +++ b/tools/testing/memblock/tests/basic_api.c
> @@ -1006,6 +1006,119 @@ static int memblock_reserve_many_check(void)
>  	return 0;
>  }
>  
> +/* Keep the gap so these memory region will not be merged. */
> +#define MEMORY_BASE_OFFSET(idx, offset) ((offset) + (MEM_SIZE * 2) * (idx))
> +static int memblock_reserve_many_conflict_check(void)
> +{
> +	int i, skip;
> +	void *orig_region;
> +	struct region r = {
> +		.base = SZ_16K,
> +		.size = SZ_16K,
> +	};
> +	phys_addr_t new_reserved_regions_size;
> +
> +	/*
> +	 *  0        1          129
> +	 *  +---+    +---+      +---+
> +	 *  |32K|    |32K|  ..  |32K|
> +	 *  +---+    +---+      +---+
> +	 *
> +	 * Pre-allocate the range for 129 memory block + one range for double
> +	 * memblock.reserved.regions at idx 0.
> +	 * See commit 48c3b583bbdd ("mm/memblock: fix overlapping allocation
> +	 * when doubling reserved array")
> +	 */
> +	phys_addr_t memory_base = (phys_addr_t)malloc(130 * (2 * SZ_32K));

Just increase MEM_SIZE to, say, SZ_1M and use dummy_physical_memory_init()
etc.

> +	phys_addr_t offset = PAGE_ALIGN(memory_base);
> +
> +	PREFIX_PUSH();
> +
> +	/* Reserve the 129th memory block for all possible positions*/
> +	for (skip = 1; skip <= INIT_MEMBLOCK_REGIONS + 1; skip++)
> +	{
> +		reset_memblock_regions();
> +		memblock_allow_resize();
> +
> +		reset_memblock_attributes();
> +		/* Add a valid memory region used by double_array(). */
> +		memblock_add(MEMORY_BASE_OFFSET(0, offset), MEM_SIZE);
> +		/*
> +		 * Add a memory region which will be reserved as 129th memory
> +		 * region. This is not expected to be used by double_array().
> +		 */
> +		memblock_add(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +		for (i = 1; i <= INIT_MEMBLOCK_REGIONS + 1; i++) {
> +			if (i == skip)
> +				continue;
> +
> +			/* Reserve some fakes memory region to fulfill the memblock. */
> +			memblock_reserve(MEMORY_BASE_OFFSET(i, offset), MEM_SIZE);
> +
> +			if (i < skip) {
> +				ASSERT_EQ(memblock.reserved.cnt, i);
> +				ASSERT_EQ(memblock.reserved.total_size, i * MEM_SIZE);
> +			} else {
> +				ASSERT_EQ(memblock.reserved.cnt, i - 1);
> +				ASSERT_EQ(memblock.reserved.total_size, (i - 1) * MEM_SIZE);
> +			}
> +		}
> +
> +		orig_region = memblock.reserved.regions;
> +
> +		/* This reserve the 129 memory_region, and makes it double array. */
> +		memblock_reserve(MEMORY_BASE_OFFSET(skip, offset), MEM_SIZE);
> +
> +		/*
> +		 * This is the memory region size used by the doubled reserved.regions,
> +		 * and it has been reserved due to it has been used. The size is used to
> +		 * calculate the total_size that the memblock.reserved have now.
> +		 */
> +		new_reserved_regions_size = PAGE_ALIGN((INIT_MEMBLOCK_REGIONS * 2) *
> +						sizeof(struct memblock_region));
> +		/*
> +		 * The double_array() will find a free memory region as the new
> +		 * reserved.regions, and the used memory region will be reserved, so
> +		 * there will be one more region exist in the reserved memblock. And the
> +		 * one more reserved region's size is new_reserved_regions_size.
> +		 */
> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 2);
> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +							new_reserved_regions_size);
> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +		/*
> +		 * Now memblock_double_array() works fine. Let's check after the
> +		 * double_array(), the memblock_reserve() still works as normal.
> +		 */
> +		memblock_reserve(r.base, r.size);
> +		ASSERT_EQ(memblock.reserved.regions[0].base, r.base);
> +		ASSERT_EQ(memblock.reserved.regions[0].size, r.size);
> +
> +		ASSERT_EQ(memblock.reserved.cnt, INIT_MEMBLOCK_REGIONS + 3);
> +		ASSERT_EQ(memblock.reserved.total_size, (INIT_MEMBLOCK_REGIONS + 1) * MEM_SIZE +
> +							new_reserved_regions_size +
> +							r.size);
> +		ASSERT_EQ(memblock.reserved.max, INIT_MEMBLOCK_REGIONS * 2);
> +
> +		/*
> +		 * The current reserved.regions is occupying a range of memory that
> +		 * allocated from dummy_physical_memory_init(). After free the memory,
> +		 * we must not use it. So restore the origin memory region to make sure
> +		 * the tests can run as normal and not affected by the double array.
> +		 */
> +		memblock.reserved.regions = orig_region;
> +		memblock.reserved.cnt = INIT_MEMBLOCK_RESERVED_REGIONS;
> +	}
> +
> +	free((void *)memory_base);
> +
> +	test_pass_pop();
> +
> +	return 0;
> +}
> +
>  static int memblock_reserve_checks(void)
>  {
>  	prefix_reset();
> @@ -1021,6 +1134,7 @@ static int memblock_reserve_checks(void)
>  	memblock_reserve_between_check();
>  	memblock_reserve_near_max_check();
>  	memblock_reserve_many_check();
> +	memblock_reserve_many_conflict_check();
>  
>  	prefix_pop();
>  
> -- 
> 2.34.1

-- 
Sincerely yours,
Mike.



* Re: [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range()
  2024-04-22  2:55   ` Wei Yang
@ 2024-04-24 13:15     ` Mike Rapoport
  2024-04-25  1:38       ` Wei Yang
  0 siblings, 1 reply; 16+ messages in thread
From: Mike Rapoport @ 2024-04-24 13:15 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, linux-mm, Jinyu Tang, Peng Zhang

On Mon, Apr 22, 2024 at 02:55:00AM +0000, Wei Yang wrote:
> On Mon, Apr 15, 2024 at 06:17:56PM +0300, Mike Rapoport wrote:
> >On Sun, Apr 14, 2024 at 12:45:26AM +0000, Wei Yang wrote:
> >> Current memblock_add_range() does the insertion with two round
> >> iteration.
> >> 
> >> First round does the calculation of new region required, and second
> >> round does the actual insertion. Between them, if current max can't meet
> >> new region requirement, it is expanded.
> >> 
> >> The worst case is:
> >> 
> >> 1. cnt == max
> >> 2. new range overlaps all existing region
> >> 
> >> This means the final cnt should be (2 * max + 1). Since we always double
> >> the array size, this means we only need to double the array twice at the
> >> worst case, which is fairly rare. For other cases, only once array
> >> double is enough.
> >> 
> >> Let's double the array immediately when there is no room for new region.
> >> This simplify the code a little.
> >
> >Very similar patch was posted a while ago:
> >https://lore.kernel.org/all/20221025070943.363578-1-yajun.deng@linux.dev
> >
> >and it caused boot regression:
> >https://lore.kernel.org/linux-mm/Y2oLYB7Tu7J91tVm@linux.ibm.com
> >
> 
> Got the reason for the error.
> 
> When we add range to reserved and need double array, following call happends:
> 
> memblock_add_range()
>     idx <- where we want to insert new range
>     memblock_double_array()
>         find available range for new regions
> 	memblock_reserve() available range
>     memblock_insert_region() at idx
> 
> The error case happens when find available range below what we want to add,
> the idx may change after memblock_reserve().
> 
> The following change looks could fix it.

I don't think the micro-optimization of the second loop in
memblock_add_range() is worth the churn and potential bugs.
 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 98d25689cf10..52bc9a4b20da 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -395,6 +395,7 @@ void __init memblock_discard(void)
>  /**
>   * memblock_double_array - double the size of the memblock regions array
>   * @type: memblock type of the regions array being doubled
> + * @idx: current region index if we are iterating
>   * @new_area_start: starting address of memory range to avoid overlap with
>   * @new_area_size: size of memory range to avoid overlap with
>   *
> @@ -408,6 +409,7 @@ void __init memblock_discard(void)
>   * 0 on success, -1 on failure.
>   */
>  static int __init_memblock memblock_double_array(struct memblock_type *type,
> +						int *idx,
>  						phys_addr_t new_area_start,
>  						phys_addr_t new_area_size)
>  {
> @@ -490,8 +492,24 @@ static int __init_memblock memblock_double_array(struct memblock_type *type,
>  	 * Reserve the new array if that comes from the memblock.  Otherwise, we
>  	 * needn't do it
>  	 */
> -	if (!use_slab)
> +	if (!use_slab) {
> +		/*
> +		 * When doubling the array for reserved.regions, we may
> +		 * need to adjust the index on finding new_array below
> +		 * new_area_start. This is because memblock_reserve() below
> +		 * may insert the range ahead of us.
> +		 * The insertion can result in one of three cases:
> +		 * 1. not adjacent to any region: index increases by one
> +		 * 2. adjacent to one region: index is unchanged
> +		 * 3. adjacent to two regions: index decreases by one
> +		 */
> +		int ocnt = -1;
> +		if (idx && type == &memblock.reserved && addr <= new_area_start)
> +			ocnt = type->cnt;
>  		BUG_ON(memblock_reserve(addr, new_alloc_size));
> +		if (ocnt >= 0)
> +			*idx += type->cnt - ocnt;
> +	}
>  
>  	/* Update slab flag */
>  	*in_slab = use_slab;
> 
> 
> -- 
> Wei Yang
> Help you, Help me

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range()
  2024-04-24 13:15     ` Mike Rapoport
@ 2024-04-25  1:38       ` Wei Yang
  0 siblings, 0 replies; 16+ messages in thread
From: Wei Yang @ 2024-04-25  1:38 UTC (permalink / raw)
  To: Mike Rapoport; +Cc: Wei Yang, akpm, linux-mm, Jinyu Tang, Peng Zhang

On Wed, Apr 24, 2024 at 04:15:30PM +0300, Mike Rapoport wrote:
>On Mon, Apr 22, 2024 at 02:55:00AM +0000, Wei Yang wrote:
>> On Mon, Apr 15, 2024 at 06:17:56PM +0300, Mike Rapoport wrote:
>> >On Sun, Apr 14, 2024 at 12:45:26AM +0000, Wei Yang wrote:
>> >> Current memblock_add_range() does the insertion in two rounds of
>> >> iteration.
>> >> 
>> >> The first round calculates the number of new regions required, and the
>> >> second round does the actual insertion. Between them, if the current max
>> >> can't meet the new region requirement, the array is expanded.
>> >> 
>> >> The worst case is:
>> >> 
>> >> 1. cnt == max
>> >> 2. new range overlaps all existing regions
>> >> 
>> >> This means the final cnt would be (2 * max + 1). Since we always
>> >> double the array size, we only need to double the array twice in the
>> >> worst case, which is fairly rare. For other cases, doubling the array
>> >> once is enough.
>> >> 
>> >> Let's double the array immediately when there is no room for a new
>> >> region. This simplifies the code a little.
>> >
>> >Very similar patch was posted a while ago:
>> >https://lore.kernel.org/all/20221025070943.363578-1-yajun.deng@linux.dev
>> >
>> >and it caused boot regression:
>> >https://lore.kernel.org/linux-mm/Y2oLYB7Tu7J91tVm@linux.ibm.com
>> >
>> 
>> Got the reason for the error.
>> 
>> When we add a range to reserved and need to double the array, the
>> following call sequence happens:
>> 
>> memblock_add_range()
>>     idx <- where we want to insert new range
>>     memblock_double_array()
>>         find available range for new regions
>>         memblock_reserve() the available range
>>     memblock_insert_region() at idx
>> 
>> The error case happens when the available range found is below the range
>> we want to add; then idx may change after memblock_reserve().
>> 
>> The following change looks like it could fix it.
>
>I don't think the micro-optimization of the second loop in
>memblock_add_range() is worth the churn and potential bugs.
> 

Sure, would drop this one.

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2024-04-25  1:38 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-04-14  0:45 [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Wei Yang
2024-04-14  0:45 ` [PATCH 2/6] memblock tests: add the 129th memory block at all possible position Wei Yang
2024-04-15 15:19   ` Mike Rapoport
2024-04-16 12:55     ` Wei Yang
2024-04-17  5:51       ` Mike Rapoport
2024-04-18  9:02         ` Wei Yang
2024-04-19  3:15         ` Wei Yang
2024-04-24 13:13           ` Mike Rapoport
2024-04-14  0:45 ` [PATCH 3/6] mm/memblock: fix comment for memblock_isolate_range() Wei Yang
2024-04-14  0:45 ` [PATCH 4/6] mm/memblock: remove consecutive regions at once Wei Yang
2024-04-14  0:45 ` [PATCH 5/6] memblock tests: add memblock_overlaps_region_checks Wei Yang
2024-04-14  0:45 ` [PATCH 6/6] mm/memblock: return true directly on finding overlap region Wei Yang
2024-04-15 15:17 ` [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range() Mike Rapoport
2024-04-22  2:55   ` Wei Yang
2024-04-24 13:15     ` Mike Rapoport
2024-04-25  1:38       ` Wei Yang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).