From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 28 Nov 2025 10:26:12 +0200
From: Mike Rapoport
To: Usama Arif
Cc: Andrew Morton, kas@kernel.org, changyuanl@google.com, graf@amazon.com,
	leitao@debian.org, thevlad@meta.com, pratyush@kernel.org,
	dave.hansen@linux.intel.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH v2 2/2] mm/memblock: only mark/clear KHO scratch memory when needed
Message-ID: 
References: <20251127203724.3177621-1-usamaarif642@gmail.com>
 <20251127203724.3177621-3-usamaarif642@gmail.com>
In-Reply-To: <20251127203724.3177621-3-usamaarif642@gmail.com>

On Thu, Nov 27, 2025 at 08:33:20PM +0000, Usama Arif wrote:
> The scratch memory for kexec handover is used to bootstrap the
> kexec'ed kernel. It is only needed when CONFIG_KEXEC_HANDOVER
> is enabled and only if it is a KHO boot. Add checks to prevent
> marking a KHO scratch region unless needed.

Please add a paragraph along the lines of Pratyush's note from
https://lore.kernel.org/all/86bjknyxgu.fsf@kernel.org/:

    Yeah, I don't think it will have much of a difference in practice,
    but I do think it is a good correctness fix. Marking the lower 1M as
    scratch is a hack to get around the limitations with KHO, and we
    should not be doing that when KHO isn't involved.
> Fixes: a2daf83e10378 ("x86/e820: temporarily enable KHO scratch for memory below 1M")
> Reported-by: Vlad Poenaru
> Signed-off-by: Usama Arif
> ---
>  mm/memblock.c | 74 ++++++++++++++++++++++++++++++---------------------
>  1 file changed, 44 insertions(+), 30 deletions(-)
> 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 8b13d5c28922a..8a2cebcfe0a18 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1114,36 +1114,6 @@ int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t
>  				     MEMBLOCK_RSRV_NOINIT);
>  }
>  
> -/**
> - * memblock_mark_kho_scratch - Mark a memory region as MEMBLOCK_KHO_SCRATCH.
> - * @base: the base phys addr of the region
> - * @size: the size of the region
> - *
> - * Only memory regions marked with %MEMBLOCK_KHO_SCRATCH will be considered
> - * for allocations during early boot with kexec handover.
> - *
> - * Return: 0 on success, -errno on failure.
> - */
> -__init int memblock_mark_kho_scratch(phys_addr_t base, phys_addr_t size)
> -{
> -	return memblock_setclr_flag(&memblock.memory, base, size, 1,
> -				    MEMBLOCK_KHO_SCRATCH);
> -}
> -
> -/**
> - * memblock_clear_kho_scratch - Clear MEMBLOCK_KHO_SCRATCH flag for a
> - *                              specified region.
> - * @base: the base phys addr of the region
> - * @size: the size of the region
> - *
> - * Return: 0 on success, -errno on failure.
> - */
> -__init int memblock_clear_kho_scratch(phys_addr_t base, phys_addr_t size)
> -{
> -	return memblock_setclr_flag(&memblock.memory, base, size, 0,
> -				    MEMBLOCK_KHO_SCRATCH);
> -}

No need to move these functions under #ifdef CONFIG_KEXEC_HANDOVER. We
already have inline stubs for CONFIG_KEXEC_HANDOVER=n in
include/linux/memblock.h.

Just add 'if (is_kho_boot())' here and in memblock_mark_kho_scratch().

> static bool should_skip_region(struct memblock_type *type,
> 			       struct memblock_region *m,
> 			       int nid, int flags)

-- 
Sincerely yours,
Mike.