From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Nov 2025 15:36:49 +0200
From: Mike Rapoport
To: ranxiaokai627@163.com
Cc: catalin.marinas@arm.com, akpm@linux-foundation.org, graf@amazon.com,
	pasha.tatashin@soleen.com, pratyush@kernel.org, changyuanl@google.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kexec@lists.infradead.org, ran.xiaokai@zte.com.cn
Subject: Re: [PATCH 2/2] liveupdate: Fix boot failure due to kmemleak access to unmapped pages
References: <20251120144147.90508-1-ranxiaokai627@163.com>
 <20251120144147.90508-3-ranxiaokai627@163.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20251120144147.90508-3-ranxiaokai627@163.com>

On Thu, Nov 20, 2025 at 02:41:47PM +0000, ranxiaokai627@163.com wrote:
> Subject: liveupdate: Fix boot failure due to kmemleak access to unmapped pages

Please prefix kexec handover patches with kho: rather than liveupdate.

> From: Ran Xiaokai
> 
> When booting with debug_pagealloc=on while having:
> CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
> CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
> the system fails to boot due to page faults during kmemleak scanning.
> 
> This occurs because:
> With debug_pagealloc enabled, __free_pages() invokes
> debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
> freed pages in the direct mapping.
> Commit 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> releases the KHO scratch region via init_cma_reserved_pageblock(),
> unmapping its physical pages. Subsequent kmemleak scanning accesses
> these unmapped pages, triggering fatal page faults.
> 
> Call kmemleak_no_scan_phys() from kho_reserve_scratch() to
> exclude the reserved region from scanning before
> it is released to the buddy allocator.
> 
> Fixes: 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> Signed-off-by: Ran Xiaokai
> ---
>  kernel/liveupdate/kexec_handover.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 224bdf5becb6..dd4942d1d76c 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -11,6 +11,7 @@
>  
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -654,6 +655,7 @@ static void __init kho_reserve_scratch(void)
>  	if (!addr)
>  		goto err_free_scratch_desc;
>  
> +	kmemleak_no_scan_phys(addr);

There's kmemleak_ignore_phys() that can be called after the scratch areas
are allocated from memblock, and with that kmemleak should not access them.
Take a look at __cma_declare_contiguous_nid().

>  	kho_scratch[i].addr = addr;
>  	kho_scratch[i].size = size;
>  	i++;

-- 
Sincerely yours,
Mike.
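
P.S. A sketch of what the kmemleak_ignore_phys() alternative could look
like against the quoted hunk (untested, and only illustrative -- it mirrors
how __cma_declare_contiguous_nid() marks memblock-allocated memory that is
later handed to the buddy allocator; the surrounding context is taken from
the quoted diff):

```diff
 	if (!addr)
 		goto err_free_scratch_desc;
 
+	/*
+	 * The scratch area is released to the buddy allocator later and
+	 * may then be unmapped from the direct map with debug_pagealloc
+	 * enabled, so tell kmemleak to neither scan nor report it.
+	 */
+	kmemleak_ignore_phys(addr);
 	kho_scratch[i].addr = addr;
 	kho_scratch[i].size = size;
 	i++;
```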