From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 15 Apr 2026 19:16:44 +0300
From: Mike Rapoport
To: priyanshukumarpu@gmail.com
Cc: akpm@linux-foundation.org, changyuanl@google.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] tools/testing/memblock: fix stale NUMA reservation tests
Message-ID:
References: <20260413091458.774770-1-priyanshukumarpu@gmail.com>
 <20260415122731.1768912-1-priyanshukumarpu@gmail.com>
In-Reply-To: <20260415122731.1768912-1-priyanshukumarpu@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi,

Please next time send v2 as a separate mail rather than as a reply to v1,
and add a description of the changes between the versions:
https://docs.kernel.org/process/submitting-patches.html#commentary

On Wed, Apr 15, 2026 at 12:27:31PM +0000, priyanshukumarpu@gmail.com wrote:
> From: Priyanshu Kumar
> 
> memblock allocations now reserve memory with MEMBLOCK_RSRV_KERN and,
> on NUMA configurations, record the requested node on the reserved
> region. Several memblock simulator NUMA tests still expected merges
> that only worked before those reservation semantics changed, so the
> suite aborted even though the allocator behavior was correct.
> 
> Update the NUMA merge expectations in the memblock_alloc_try_nid()
> and memblock_alloc_exact_nid_raw() tests to match the current reserved
> region metadata rules. For cases that should still merge, create the
> pre-existing reservation with matching nid and MEMBLOCK_RSRV_KERN
> metadata. Also strengthen the memblock_alloc_node() coverage by
> checking the newly created reserved region directly instead of
> re-reading the source memory node descriptor.
> 
> Finally, drop the stale README/TODO notes that still claimed
> memblock_alloc_node() could not be tested.
> 
> The memblock simulator passes again with NUMA enabled after these
> updates.
> 
> Signed-off-by: Priyanshu Kumar
> ---
>  tools/testing/memblock/README                 |  5 +----
>  tools/testing/memblock/TODO                   |  4 ++--
>  .../memblock/tests/alloc_exact_nid_api.c      |  6 +++---
>  tools/testing/memblock/tests/alloc_nid_api.c  | 21 +++++++++++++------
>  4 files changed, 21 insertions(+), 15 deletions(-)
> 
> diff --git a/tools/testing/memblock/README b/tools/testing/memblock/README
> index 7ca437d81806..b435f48d8a70 100644
> --- a/tools/testing/memblock/README
> +++ b/tools/testing/memblock/README
> @@ -104,10 +104,7 @@ called at the beginning of each test.
>  Known issues
>  ============
>  
> -1. Requesting a specific NUMA node via memblock_alloc_node() does not work as
> -   intended. Once the fix is in place, tests for this function can be added.
> -
> -2. Tests for memblock_alloc_low() can't be easily implemented. The function uses
> +1. Tests for memblock_alloc_low() can't be easily implemented. The function uses
>     ARCH_LOW_ADDRESS_LIMIT marco, which can't be changed to point at the low
>     memory of the memory_block.
> 
> diff --git a/tools/testing/memblock/TODO b/tools/testing/memblock/TODO
> index e306c90c535f..c13ad0dae776 100644
> --- a/tools/testing/memblock/TODO
> +++ b/tools/testing/memblock/TODO
> @@ -1,5 +1,5 @@
>  TODO
>  =====
>  
> -1. Add tests for memblock_alloc_node() to check if the correct NUMA node is set
> -   for the new region
> +1. Add tests for memblock_alloc_low() once the simulator can model
> +   ARCH_LOW_ADDRESS_LIMIT against the low memory in memory_block
> diff --git a/tools/testing/memblock/tests/alloc_exact_nid_api.c b/tools/testing/memblock/tests/alloc_exact_nid_api.c
> index 6e14447da6e1..0c46c73b5e04 100644
> --- a/tools/testing/memblock/tests/alloc_exact_nid_api.c
> +++ b/tools/testing/memblock/tests/alloc_exact_nid_api.c
> @@ -368,7 +368,7 @@ static int alloc_exact_nid_bottom_up_numa_part_reserved_check(void)
>  	max_addr = memblock_end_of_DRAM();
>  	total_size = size + r1.size;
>  
> -	memblock_reserve(r1.base, r1.size);
> +	__memblock_reserve(r1.base, r1.size, nid_req, MEMBLOCK_RSRV_KERN);
>  	allocated_ptr = memblock_alloc_exact_nid_raw(size, SMP_CACHE_BYTES,
>  						     min_addr, max_addr,
>  						     nid_req);
> @@ -861,8 +861,8 @@ static int alloc_exact_nid_numa_reserved_full_merge_generic_check(void)
>  	min_addr = r2.base + r2.size;
>  	max_addr = r1.base;
>  
> -	memblock_reserve(r1.base, r1.size);
> -	memblock_reserve(r2.base, r2.size);
> +	__memblock_reserve(r1.base, r1.size, nid_req, MEMBLOCK_RSRV_KERN);
> +	__memblock_reserve(r2.base, r2.size, nid_req, MEMBLOCK_RSRV_KERN);
>  
>  	allocated_ptr = memblock_alloc_exact_nid_raw(size, SMP_CACHE_BYTES,
>  						     min_addr, max_addr,
> diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
> index 562e4701b0e0..c23652727976 100644
> --- a/tools/testing/memblock/tests/alloc_nid_api.c
> +++ b/tools/testing/memblock/tests/alloc_nid_api.c
> @@ -1965,7 +1965,7 @@ static int alloc_nid_bottom_up_numa_part_reserved_check(void)
>  	max_addr = memblock_end_of_DRAM();
>  	total_size = size + r1.size;
>  
> -	memblock_reserve(r1.base, r1.size);
> +	__memblock_reserve(r1.base, r1.size, nid_req, MEMBLOCK_RSRV_KERN);
>  	allocated_ptr = run_memblock_alloc_nid(size, SMP_CACHE_BYTES,
>  					       min_addr, max_addr, nid_req);
>  
> @@ -2412,8 +2412,8 @@ static int alloc_nid_numa_reserved_full_merge_generic_check(void)
>  	min_addr = r2.base + r2.size;
>  	max_addr = r1.base;
>  
> -	memblock_reserve(r1.base, r1.size);
> -	memblock_reserve(r2.base, r2.size);
> +	__memblock_reserve(r1.base, r1.size, nid_req, MEMBLOCK_RSRV_KERN);
> +	__memblock_reserve(r2.base, r2.size, nid_req, MEMBLOCK_RSRV_KERN);
>  
>  	allocated_ptr = run_memblock_alloc_nid(size, SMP_CACHE_BYTES,
>  					       min_addr, max_addr, nid_req);
>  
> @@ -2496,15 +2496,18 @@ static int alloc_nid_numa_split_all_reserved_generic_check(void)
>  
>  /*
>   * A simple test that tries to allocate a memory region through the
> - * memblock_alloc_node() on a NUMA node with id `nid`. Expected to have the
> - * correct NUMA node set for the new region.
> + * memblock_alloc_node() on a NUMA node with id `nid`. Expected to allocate
> + * the region within the requested node and mark the new reservation with the
> + * correct NUMA node.

This change is not related to the fix and I think it's adding too much
noise to the test in any case.

>   */
>  static int alloc_node_on_correct_nid(void)
>  {
>  	int nid_req = 2;
>  	void *allocated_ptr = NULL;
>  #ifdef CONFIG_NUMA
> +	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
>  	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
> +	phys_addr_t req_node_end = region_end(req_node);
>  #endif
>  	phys_addr_t size = SZ_512;
>  
> @@ -2515,7 +2518,13 @@ static int alloc_node_on_correct_nid(void)
>  
>  	ASSERT_NE(allocated_ptr, NULL);
>  #ifdef CONFIG_NUMA
> -	ASSERT_EQ(nid_req, req_node->nid);
> +	ASSERT_EQ(memblock.reserved.cnt, 1);
> +	ASSERT_EQ(new_rgn->size, size);
> +	ASSERT_EQ(new_rgn->base, (phys_addr_t)allocated_ptr);
> +	ASSERT_EQ(new_rgn->flags, MEMBLOCK_RSRV_KERN);
> +	ASSERT_EQ(nid_req, memblock_get_region_node(new_rgn));
> +	ASSERT_LE(req_node->base, new_rgn->base);
> +	ASSERT_LE(region_end(new_rgn), req_node_end);
>  #endif
>  
>  	test_pass_pop();
> -- 
> 2.43.0
> 

-- 
Sincerely yours,
Mike.