* [PATCH v3 1/4] memblock tests: add simulation of physical memory with multiple NUMA nodes
From: Rebecca Mckeever @ 2022-08-27 5:52 UTC
To: Mike Rapoport, linux-mm, linux-kernel; +Cc: David Hildenbrand, Rebecca Mckeever
Add functions setup_numa_memblock_generic() and setup_numa_memblock()
for setting up a memory layout with multiple NUMA nodes in previously
allocated dummy physical memory. These functions can be used in place of
setup_memblock() in tests that need to simulate a NUMA system.
setup_numa_memblock_generic():
- allows for setting up a custom memory layout by specifying the amount
  of memory in each node, the number of nodes, and a factor that will
  be used to scale the memory in each node

setup_numa_memblock():
- allows for setting up a default memory layout
Introduce the constant MEM_FACTOR, which scales the default memory
layout based on MEM_SIZE.
Set CONFIG_NODES_SHIFT to 4 when building with NUMA=1 to allow for up to
16 NUMA nodes.
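For example (illustrative only; the snippet relies on the default layout
and the MEM_FACTOR scaling introduced in this patch), a NUMA test
replaces setup_memblock() with:

	setup_numa_memblock();

	/* sizes that must track MEM_SIZE are scaled by MEM_FACTOR */
	size = SZ_2K * MEM_FACTOR;

and can then request allocations from any nid in [0, NUMA_NODES - 1].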
Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
---
.../testing/memblock/scripts/Makefile.include | 2 +-
tools/testing/memblock/tests/common.c | 38 +++++++++++++++++++
tools/testing/memblock/tests/common.h | 9 ++++-
3 files changed, 47 insertions(+), 2 deletions(-)
diff --git a/tools/testing/memblock/scripts/Makefile.include b/tools/testing/memblock/scripts/Makefile.include
index aa6d82d56a23..998281723590 100644
--- a/tools/testing/memblock/scripts/Makefile.include
+++ b/tools/testing/memblock/scripts/Makefile.include
@@ -3,7 +3,7 @@
# Simulate CONFIG_NUMA=y
ifeq ($(NUMA), 1)
- CFLAGS += -D CONFIG_NUMA
+ CFLAGS += -D CONFIG_NUMA -D CONFIG_NODES_SHIFT=4
endif
# Use 32 bit physical addresses.
diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
index eec6901081af..15d8767dc70c 100644
--- a/tools/testing/memblock/tests/common.c
+++ b/tools/testing/memblock/tests/common.c
@@ -34,6 +34,10 @@ static const char * const help_opts[] = {
static int verbose;
+static const phys_addr_t node_sizes[] = {
+ SZ_4K, SZ_1K, SZ_2K, SZ_2K, SZ_1K, SZ_1K, SZ_4K, SZ_1K
+};
+
/* sets global variable returned by movable_node_is_enabled() stub */
bool movable_node_enabled;
@@ -72,6 +76,40 @@ void setup_memblock(void)
fill_memblock();
}
+/**
+ * setup_numa_memblock_generic:
+ * Set up a memory layout with multiple NUMA nodes in a previously allocated
+ * dummy physical memory.
+ * @nodes: an array containing the amount of memory in each node
+ * @node_cnt: the size of @nodes
+ * @factor: a factor that will be used to scale the memory in each node
+ *
+ * The nids will be set to 0 through node_cnt - 1.
+ */
+void setup_numa_memblock_generic(const phys_addr_t nodes[],
+ int node_cnt, int factor)
+{
+ phys_addr_t base;
+ int flags;
+
+ reset_memblock_regions();
+ base = (phys_addr_t)memory_block.base;
+ flags = (movable_node_is_enabled()) ? MEMBLOCK_NONE : MEMBLOCK_HOTPLUG;
+
+ for (int i = 0; i < node_cnt; i++) {
+ phys_addr_t size = factor * nodes[i];
+
+ memblock_add_node(base, size, i, flags);
+ base += size;
+ }
+ fill_memblock();
+}
+
+void setup_numa_memblock(void)
+{
+ setup_numa_memblock_generic(node_sizes, NUMA_NODES, MEM_FACTOR);
+}
+
void dummy_physical_memory_init(void)
{
memory_block.base = malloc(MEM_SIZE);
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index 78128e109a95..2a17453107dc 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -10,7 +10,11 @@
#include <linux/printk.h>
#include <../selftests/kselftest.h>
-#define MEM_SIZE SZ_16K
+#define MEM_SIZE SZ_16K
+#define NUMA_NODES 8
+
+/* used to resize values that need to scale with MEM_SIZE */
+#define MEM_FACTOR (MEM_SIZE / SZ_16K)
enum test_flags {
/* No special request. */
@@ -102,6 +106,9 @@ struct region {
void reset_memblock_regions(void);
void reset_memblock_attributes(void);
void setup_memblock(void);
+void setup_numa_memblock_generic(const phys_addr_t nodes[],
+ int node_cnt, int factor);
+void setup_numa_memblock(void);
void dummy_physical_memory_init(void);
void dummy_physical_memory_cleanup(void);
void parse_args(int argc, char **argv);
--
2.25.1
* Re: [PATCH v3 2/4] memblock tests: add top-down NUMA tests for memblock_alloc_try_nid*
From: Mike Rapoport @ 2022-09-01 8:59 UTC
To: Rebecca Mckeever; +Cc: linux-mm, linux-kernel, David Hildenbrand
On Sat, Aug 27, 2022 at 12:53:00AM -0500, Rebecca Mckeever wrote:
> Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
> where the simulated physical memory is set up with multiple NUMA nodes.
> Additionally, all of these tests set nid != NUMA_NO_NODE. These tests are
> run with a top-down allocation direction.
>
> The tested scenarios are:
>
> Range unrestricted:
> - region can be allocated in the specific node requested:
> + there are no previously reserved regions
> + the requested node is partially reserved but has enough space
> - the specific node requested cannot accommodate the request, but the
> region can be allocated in a different node:
> + there are no previously reserved regions, but node is too small
> + the requested node is fully reserved
> + the requested node is partially reserved and does not have
> enough space
>
> Range restricted:
> - region can be allocated in the specific node requested after dropping
> min_addr:
> + range partially overlaps with two different nodes, where the first
> node is the requested node
> + range partially overlaps with two different nodes, where the
> requested node ends before min_addr
> - region cannot be allocated in the specific node requested, but it can be
> allocated in the requested range:
> + range overlaps with multiple nodes along node boundaries, and the
> requested node ends before min_addr
> + range overlaps with multiple nodes along node boundaries, and the
> requested node starts after max_addr
> - region cannot be allocated in the specific node requested, but it can be
> allocated after dropping min_addr:
> + range partially overlaps with two different nodes, where the
> second node is the requested node
>
> Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
> ---
> tools/testing/memblock/tests/alloc_nid_api.c | 702 ++++++++++++++++++-
> tools/testing/memblock/tests/alloc_nid_api.h | 16 +
> tools/testing/memblock/tests/common.h | 18 +
> 3 files changed, 725 insertions(+), 11 deletions(-)
>
> diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
> index 32b3c1594fdd..e5ef93ea1ce5 100644
> --- a/tools/testing/memblock/tests/alloc_nid_api.c
> +++ b/tools/testing/memblock/tests/alloc_nid_api.c
> @@ -1094,7 +1094,7 @@ static int alloc_try_nid_bottom_up_cap_min_check(void)
> return 0;
> }
>
> -/* Test case wrappers */
> +/* Test case wrappers for range tests */
> static int alloc_try_nid_simple_check(void)
> {
> test_print("\tRunning %s...\n", __func__);
> @@ -1226,17 +1226,10 @@ static int alloc_try_nid_low_max_check(void)
> return 0;
> }
>
> -static int memblock_alloc_nid_checks_internal(int flags)
> +static int memblock_alloc_nid_range_checks(void)
> {
> - const char *func = get_memblock_alloc_try_nid_name(flags);
> -
> - alloc_nid_test_flags = flags;
> - prefix_reset();
> - prefix_push(func);
> - test_print("Running %s tests...\n", func);
> -
> - reset_memblock_attributes();
> - dummy_physical_memory_init();
> + test_print("Running %s range tests...\n",
> + get_memblock_alloc_try_nid_name(alloc_nid_test_flags));
>
> alloc_try_nid_simple_check();
> alloc_try_nid_misaligned_check();
> @@ -1253,6 +1246,693 @@ static int memblock_alloc_nid_checks_internal(int flags)
> alloc_try_nid_reserved_all_check();
> alloc_try_nid_low_max_check();
>
> + return 0;
> +}
> +
> +/*
> + * A test that tries to allocate a memory region in a specific NUMA node that
> + * has enough memory to allocate a region of the requested size.
> + * Expect to allocate an aligned region at the end of the requested node.
> + */
> +static int alloc_try_nid_top_down_numa_simple_check(void)
> +{
> + int nid_req = 3;
> + struct memblock_region *new_rgn = &memblock.reserved.regions[0];
> + struct memblock_region *req_node = &memblock.memory.regions[nid_req];
> + void *allocated_ptr = NULL;
> +
> + PREFIX_PUSH();
> +
> + phys_addr_t size;
> + phys_addr_t min_addr;
> + phys_addr_t max_addr;
> +
> + setup_numa_memblock();
> +
> + ASSERT_LE(SZ_4, req_node->size);
> + size = req_node->size / SZ_4;
> + min_addr = memblock_start_of_DRAM();
> + max_addr = memblock_end_of_DRAM();
> +
> + allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
> + min_addr, max_addr, nid_req);
> +
> + ASSERT_NE(allocated_ptr, NULL);
> + assert_mem_content(allocated_ptr, size, alloc_nid_test_flags);
> +
> + ASSERT_EQ(new_rgn->size, size);
> + ASSERT_EQ(new_rgn->base, region_end(req_node) - size);
> + ASSERT_LE(req_node->base, new_rgn->base);
> +
> + ASSERT_EQ(memblock.reserved.cnt, 1);
> + ASSERT_EQ(memblock.reserved.total_size, size);
> +
> + test_pass_pop();
> +
> + return 0;
> +}
> +
> +/*
> + * A test that tries to allocate a memory region in a specific NUMA node that
> + * does not have enough memory to allocate a region of the requested size:
> + *
> + * | +-----+ +------------------+ |
> + * | | req | | expected | |
> + * +---+-----+----------+------------------+-----+
> + *
> + * | +---------+ |
> + * | | rgn | |
> + * +-----------------------------+---------+-----+
> + *
> + * Expect to allocate an aligned region at the end of the last node that has
> + * enough memory (in this case, nid = 6) after falling back to NUMA_NO_NODE.
> + */
> +static int alloc_try_nid_top_down_numa_small_node_check(void)
> +{
> + int nid_req = 1;
> + int nid_exp = 6;
> + struct memblock_region *new_rgn = &memblock.reserved.regions[0];
> + struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
AFAIU, having required and expected nodes here means a very tight relation
between the NUMA layout used by setup_numa_memblock() and the test cases.

I believe it would be clearer and less error prone if the relation were
more explicit.

Can't say I have any great ideas on how to achieve this, but maybe it's
worth passing the NUMA layout to setup_numa_memblock() every time, or
setting the requested and expected nid based on the NUMA layout, or maybe
something smarter than either of these.
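For example (untested, just to illustrate the second option), the test
could derive nid_exp by scanning the layout for the last node that can
fit the request, instead of hard-coding 6:

	static int last_fitting_nid(phys_addr_t size)
	{
		/* in these tests, memory.regions[i] corresponds to nid i */
		for (int i = (int)memblock.memory.cnt - 1; i >= 0; i--)
			if (memblock.memory.regions[i].size >= size)
				return i;
		return NUMA_NO_NODE;
	}

That way the expected nid follows from whatever layout was set up rather
than being a magic number.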
> + void *allocated_ptr = NULL;
> +
> + PREFIX_PUSH();
> +
> + phys_addr_t size;
> + phys_addr_t min_addr;
> + phys_addr_t max_addr;
> +
> + setup_numa_memblock();
> +
> + size = SZ_2K * MEM_FACTOR;
> + min_addr = memblock_start_of_DRAM();
> + max_addr = memblock_end_of_DRAM();
> +
> + allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
> + min_addr, max_addr, nid_req);
> +
> + ASSERT_NE(allocated_ptr, NULL);
> + assert_mem_content(allocated_ptr, size, alloc_nid_test_flags);
> +
> + ASSERT_EQ(new_rgn->size, size);
> + ASSERT_EQ(new_rgn->base, region_end(exp_node) - size);
> + ASSERT_LE(exp_node->base, new_rgn->base);
> +
> + ASSERT_EQ(memblock.reserved.cnt, 1);
> + ASSERT_EQ(memblock.reserved.total_size, size);
> +
> + test_pass_pop();
> +
> + return 0;
> +}
> +
--
Sincerely yours,
Mike.
* Re: [PATCH v3 2/4] memblock tests: add top-down NUMA tests for memblock_alloc_try_nid*
From: Rebecca Mckeever @ 2022-09-02 0:51 UTC
To: Mike Rapoport; +Cc: linux-mm, linux-kernel, David Hildenbrand
On Thu, Sep 01, 2022 at 11:59:42AM +0300, Mike Rapoport wrote:
> On Sat, Aug 27, 2022 at 12:53:00AM -0500, Rebecca Mckeever wrote:
> > Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
> > where the simulated physical memory is set up with multiple NUMA nodes.
> > Additionally, all of these tests set nid != NUMA_NO_NODE. These tests are
> > run with a top-down allocation direction.
> >
> > The tested scenarios are:
> >
> > Range unrestricted:
> > - region can be allocated in the specific node requested:
> > + there are no previously reserved regions
> > + the requested node is partially reserved but has enough space
> > - the specific node requested cannot accommodate the request, but the
> > region can be allocated in a different node:
> > + there are no previously reserved regions, but node is too small
> > + the requested node is fully reserved
> > + the requested node is partially reserved and does not have
> > enough space
> >
> > Range restricted:
> > - region can be allocated in the specific node requested after dropping
> > min_addr:
> > + range partially overlaps with two different nodes, where the first
> > node is the requested node
> > + range partially overlaps with two different nodes, where the
> > requested node ends before min_addr
> > - region cannot be allocated in the specific node requested, but it can be
> > allocated in the requested range:
> > + range overlaps with multiple nodes along node boundaries, and the
> > requested node ends before min_addr
> > + range overlaps with multiple nodes along node boundaries, and the
> > requested node starts after max_addr
> > - region cannot be allocated in the specific node requested, but it can be
> > allocated after dropping min_addr:
> > + range partially overlaps with two different nodes, where the
> > second node is the requested node
> >
> > Signed-off-by: Rebecca Mckeever <remckee0@gmail.com>
> > ---
> > tools/testing/memblock/tests/alloc_nid_api.c | 702 ++++++++++++++++++-
> > tools/testing/memblock/tests/alloc_nid_api.h | 16 +
> > tools/testing/memblock/tests/common.h | 18 +
> > 3 files changed, 725 insertions(+), 11 deletions(-)
> >
> > diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
> > index 32b3c1594fdd..e5ef93ea1ce5 100644
> > --- a/tools/testing/memblock/tests/alloc_nid_api.c
> > +++ b/tools/testing/memblock/tests/alloc_nid_api.c
> > @@ -1094,7 +1094,7 @@ static int alloc_try_nid_bottom_up_cap_min_check(void)
> > return 0;
> > }
> >
> > -/* Test case wrappers */
> > +/* Test case wrappers for range tests */
> > static int alloc_try_nid_simple_check(void)
> > {
> > test_print("\tRunning %s...\n", __func__);
> > @@ -1226,17 +1226,10 @@ static int alloc_try_nid_low_max_check(void)
> > return 0;
> > }
> >
> > -static int memblock_alloc_nid_checks_internal(int flags)
> > +static int memblock_alloc_nid_range_checks(void)
> > {
> > - const char *func = get_memblock_alloc_try_nid_name(flags);
> > -
> > - alloc_nid_test_flags = flags;
> > - prefix_reset();
> > - prefix_push(func);
> > - test_print("Running %s tests...\n", func);
> > -
> > - reset_memblock_attributes();
> > - dummy_physical_memory_init();
> > + test_print("Running %s range tests...\n",
> > + get_memblock_alloc_try_nid_name(alloc_nid_test_flags));
> >
> > alloc_try_nid_simple_check();
> > alloc_try_nid_misaligned_check();
> > @@ -1253,6 +1246,693 @@ static int memblock_alloc_nid_checks_internal(int flags)
> > alloc_try_nid_reserved_all_check();
> > alloc_try_nid_low_max_check();
> >
> > + return 0;
> > +}
> > +
> > +/*
> > + * A test that tries to allocate a memory region in a specific NUMA node that
> > + * has enough memory to allocate a region of the requested size.
> > + * Expect to allocate an aligned region at the end of the requested node.
> > + */
> > +static int alloc_try_nid_top_down_numa_simple_check(void)
> > +{
> > + int nid_req = 3;
> > + struct memblock_region *new_rgn = &memblock.reserved.regions[0];
> > + struct memblock_region *req_node = &memblock.memory.regions[nid_req];
> > + void *allocated_ptr = NULL;
> > +
> > + PREFIX_PUSH();
> > +
> > + phys_addr_t size;
> > + phys_addr_t min_addr;
> > + phys_addr_t max_addr;
> > +
> > + setup_numa_memblock();
> > +
> > + ASSERT_LE(SZ_4, req_node->size);
> > + size = req_node->size / SZ_4;
> > + min_addr = memblock_start_of_DRAM();
> > + max_addr = memblock_end_of_DRAM();
> > +
> > + allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
> > + min_addr, max_addr, nid_req);
> > +
> > + ASSERT_NE(allocated_ptr, NULL);
> > + assert_mem_content(allocated_ptr, size, alloc_nid_test_flags);
> > +
> > + ASSERT_EQ(new_rgn->size, size);
> > + ASSERT_EQ(new_rgn->base, region_end(req_node) - size);
> > + ASSERT_LE(req_node->base, new_rgn->base);
> > +
> > + ASSERT_EQ(memblock.reserved.cnt, 1);
> > + ASSERT_EQ(memblock.reserved.total_size, size);
> > +
> > + test_pass_pop();
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * A test that tries to allocate a memory region in a specific NUMA node that
> > + * does not have enough memory to allocate a region of the requested size:
> > + *
> > + * | +-----+ +------------------+ |
> > + * | | req | | expected | |
> > + * +---+-----+----------+------------------+-----+
> > + *
> > + * | +---------+ |
> > + * | | rgn | |
> > + * +-----------------------------+---------+-----+
> > + *
> > + * Expect to allocate an aligned region at the end of the last node that has
> > + * enough memory (in this case, nid = 6) after falling back to NUMA_NO_NODE.
> > + */
> > +static int alloc_try_nid_top_down_numa_small_node_check(void)
> > +{
> > + int nid_req = 1;
> > + int nid_exp = 6;
> > + struct memblock_region *new_rgn = &memblock.reserved.regions[0];
> > + struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
>
> AFAIU, having required and expected nodes here means a very tight relation
> between the NUMA layout used by setup_numa_memblock() and the test cases.
>
> I believe it would be clearer and less error prone if the relation were
> more explicit.
>
I agree.
> Can't say I have any great ideas on how to achieve this, but maybe it's
> worth passing the NUMA layout to setup_numa_memblock() every time, or
> setting the requested and expected nid based on the NUMA layout, or maybe
> something smarter than either of these.
>
I like the first option. I'll pass the NUMA layout if I can't think of a
better idea.
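Roughly (still subject to change), I'm picturing each test declaring the
layout it depends on and passing it in through the existing generic
helper:

	static const phys_addr_t node_sizes[] = {
		SZ_4K, SZ_1K, SZ_2K, SZ_2K, SZ_1K, SZ_1K, SZ_4K, SZ_1K
	};

	setup_numa_memblock_generic(node_sizes, ARRAY_SIZE(node_sizes),
				    MEM_FACTOR);

That would at least make it visible in the test itself that node 1 is too
small for a SZ_2K * MEM_FACTOR request.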
> > + void *allocated_ptr = NULL;
> > +
> > + PREFIX_PUSH();
> > +
> > + phys_addr_t size;
> > + phys_addr_t min_addr;
> > + phys_addr_t max_addr;
> > +
> > + setup_numa_memblock();
> > +
> > + size = SZ_2K * MEM_FACTOR;
> > + min_addr = memblock_start_of_DRAM();
> > + max_addr = memblock_end_of_DRAM();
> > +
> > + allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
> > + min_addr, max_addr, nid_req);
> > +
> > + ASSERT_NE(allocated_ptr, NULL);
> > + assert_mem_content(allocated_ptr, size, alloc_nid_test_flags);
> > +
> > + ASSERT_EQ(new_rgn->size, size);
> > + ASSERT_EQ(new_rgn->base, region_end(exp_node) - size);
> > + ASSERT_LE(exp_node->base, new_rgn->base);
> > +
> > + ASSERT_EQ(memblock.reserved.cnt, 1);
> > + ASSERT_EQ(memblock.reserved.total_size, size);
> > +
> > + test_pass_pop();
> > +
> > + return 0;
> > +}
> > +
>
> --
> Sincerely yours,
> Mike.
Thanks,
Rebecca