From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 18 Jan 2026 14:16:33 +0200
From: Mike Rapoport
To: Evangelos Petrongonas, Pratyush Yadav
Cc: Pasha Tatashin, Alexander Graf, Jason Miu,
	linux-kernel@vger.kernel.org, kexec@lists.infradead.org,
	linux-mm@kvack.org, nh-open-source@amazon.com
Subject: Re: [PATCH] kho: skip memoryless NUMA nodes when reserving scratch areas
References: <20260116112640.64900-1-epetron@amazon.de> <2vxzsec57oth.fsf@kernel.org>
In-Reply-To: <2vxzsec57oth.fsf@kernel.org>

On Fri, Jan 16, 2026 at 11:57:14AM +0000, Pratyush Yadav wrote:
> Hi Evangelos,
>
> On Fri, Jan 16 2026, Evangelos Petrongonas wrote:
>
> > kho_reserve_scratch() iterates over all online NUMA nodes to allocate
> > per-node scratch memory. On systems with memoryless NUMA nodes (nodes
> > that have CPUs but no memory), memblock_alloc_range_nid() fails
> > because there is no memory available on that node. This causes KHO
> > initialization to fail and kho_enable to be set to false.
> >
> > Some ARM64 systems have NUMA topologies where certain nodes contain
> > only CPUs, without any associated memory. These configurations are
> > valid and should not prevent KHO from functioning.
> >
> > Fix this by introducing kho_mem_nodes_count(), which counts only nodes
> > that have memory (N_MEMORY state), and skip memoryless nodes in the
> > per-node scratch allocation loop.
> >
> > Signed-off-by: Evangelos Petrongonas
> > ---
> >  kernel/liveupdate/kexec_handover.c | 23 ++++++++++++++++++++++-
> >  1 file changed, 22 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> > index 9dc51fab604f..c970ed08b477 100644
> > --- a/kernel/liveupdate/kexec_handover.c
> > +++ b/kernel/liveupdate/kexec_handover.c
> > @@ -623,6 +623,23 @@ static phys_addr_t __init scratch_size_node(int nid)
> >  	return round_up(size, CMA_MIN_ALIGNMENT_BYTES);
> >  }
> >
> > +/*
> > + * Count online NUMA nodes that have memory. Memoryless nodes cannot
> > + * have scratch memory and should be excluded.
> > + */
> > +static unsigned int __init kho_mem_nodes_count(void)
> > +{
> > +	unsigned int cnt = 0;
> > +	int nid;
> > +
> > +	for_each_online_node(nid) {
> > +		if (node_state(nid, N_MEMORY))
> > +			cnt++;
> > +	}
> > +
> > +	return cnt;
> > +}
> > +
>
> You don't need this. You can use nodes_weight(node_states[N_MEMORY])
> directly. Other than this, LGTM.
>
> >  /**
> >   * kho_reserve_scratch - Reserve a contiguous chunk of memory for kexec
> >   *
> > @@ -643,7 +660,7 @@ static void __init kho_reserve_scratch(void)
> >  	scratch_size_update();
> >
> >  	/* FIXME: deal with node hot-plug/remove */
> > -	kho_scratch_cnt = num_online_nodes() + 2;
> > +	kho_scratch_cnt = kho_mem_nodes_count() + 2;
> >  	size = kho_scratch_cnt * sizeof(*kho_scratch);
> >  	kho_scratch = memblock_alloc(size, PAGE_SIZE);
> >  	if (!kho_scratch)
> > @@ -674,6 +691,10 @@ static void __init kho_reserve_scratch(void)
> >  	i++;
> >
> >  	for_each_online_node(nid) {
> > +		/* Skip memoryless nodes - we cannot allocate scratch memory there */
> > +		if (!node_state(nid, N_MEMORY))
> > +			continue;
> > +
>
> And here you can use for_each_node_state(nid, N_MEMORY).
>
> >  		size = scratch_size_node(nid);
> >  		addr = memblock_alloc_range_nid(size, CMA_MIN_ALIGNMENT_BYTES,
> >  						0, MEMBLOCK_ALLOC_ACCESSIBLE,
>
> --
> Regards,
> Pratyush Yadav

-- 
Sincerely yours,
	Mike.