From: Pratyush Yadav
To: Marco Elver
Cc: Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav,
    kexec@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com
Subject: Re: [PATCH v2] kho: use checked arithmetic in deserialize_bitmap()
In-Reply-To: <20260319210528.1694513-2-elver@google.com> (Marco Elver's
    message of "Thu, 19 Mar 2026 22:03:53 +0100")
References: <20260319210528.1694513-2-elver@google.com>
Date: Fri, 20 Mar 2026 08:56:34 +0000
Message-ID: <2vxzzf42c20t.fsf@kernel.org>

Hi Marco,

On Thu, Mar 19 2026, Marco Elver wrote:

> The function deserialize_bitmap() calculates the reservation size
> using:
>
>	int sz = 1 << (order + PAGE_SHIFT);
>
> If a corrupted KHO image provides an order >= 20 (on systems with 4KB
> pages), the shift amount becomes >= 32, which overflows the 32-bit
> integer. This results in a zero-size memory reservation.
>
> Furthermore, the physical address calculation:
>
>	phys_addr_t phys = elm->phys_start + (bit << (order + PAGE_SHIFT));
>
> can also overflow and wrap around if the order is large. This allows a
> corrupt KHO image to cause out-of-bounds updates to page->private of
> arbitrary physical pages during early boot.
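[ Aside for readers: below is a standalone userspace sketch of the
  shift overflow described above. PAGE_SHIFT and the order value are
  illustrative; this is not the kernel code itself. ]

	#include <stdio.h>

	#define PAGE_SHIFT 12	/* 4 KB pages, as in the example above */

	int main(void)
	{
		unsigned int order = 20; /* corrupted order from a bad KHO image */

		/*
		 * order + PAGE_SHIFT == 32: shifting a 32-bit int by its full
		 * width is undefined behaviour; the commit above observed a
		 * zero size here, so memblock_reserve() reserved nothing.
		 */
		int bad_sz = 1 << (order + PAGE_SHIFT);

		/* On 64-bit, the unsigned long shift gives the intended size. */
		unsigned long good_sz = 1UL << (order + PAGE_SHIFT);

		printf("int: %d, unsigned long: %lu\n", bad_sz, good_sz);
		return 0;
	}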
> Fix this by changing 'sz' to 'unsigned long' and using checked add and
> shift to safely calculate the shift amount, size, and physical
> address, skipping malformed chunks. This allows preserving memory with
> an order larger than MAX_PAGE_ORDER.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Signed-off-by: Marco Elver <elver@google.com>

deserialize_bitmap() is replaced by the radix tree in this series [0].
Can you please redo these changes on top of that? Also, a couple of
comments below.

[0] https://lore.kernel.org/linux-mm/20260206021428.3386442-1-jasonmiu@google.com/

> ---
> v2:
>  * Switch to unsigned long and use checked shift and add (Mike).
>
> v1: https://lore.kernel.org/all/20260214010013.3027519-1-elver@google.com/
> ---
>  kernel/liveupdate/kexec_handover.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index cc68a3692905..0d8417dcd3ff 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -19,6 +19,7 @@
>  #include <...>
>  #include <...>
>  #include <...>
> +#include <linux/overflow.h>
>  #include <...>
>  #include <...>
>  #include <...>
> @@ -461,15 +462,29 @@ static void __init deserialize_bitmap(unsigned int order,
>  				       struct khoser_mem_bitmap_ptr *elm)
>  {
>  	struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
> +	unsigned int shift;
>  	unsigned long bit;
> +	unsigned long sz;
> +
> +	if (check_add_overflow(order, PAGE_SHIFT, &shift) ||
> +	    check_shl_overflow(1UL, shift, &sz)) {
> +		pr_warn("invalid order %u for preserved bitmap\n", order);
> +		return;
> +	}

Isn't it simpler to just check if (order + PAGE_SHIFT) > 63? KHO is
only designed to work on 64-bit platforms, so we already know the
maximum possible shift. Is there any reason to call the proper overflow
functions? I only ask because I find the open-coded check easier to
read (a standalone sketch of what I mean is at the end of this mail).

>
>  	for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
> -		int sz = 1 << (order + PAGE_SHIFT);
> -		phys_addr_t phys =
> -			elm->phys_start + (bit << (order + PAGE_SHIFT));
> -		struct page *page = phys_to_page(phys);
> +		phys_addr_t offset, phys;
> +		struct page *page;
>  		union kho_page_info info;
>
> +		if (check_shl_overflow((phys_addr_t)bit, shift, &offset) ||
> +		    check_add_overflow(elm->phys_start, offset, &phys)) {
> +			pr_warn("invalid phys layout for preserved bitmap\n");
> +			return;
> +		}
> +
> +		page = phys_to_page(phys);
> +
>  		memblock_reserve(phys, sz);
>  		memblock_reserved_mark_noinit(phys, sz);
>  		info.magic = KHO_PAGE_MAGIC;

-- 
Regards,
Pratyush Yadav
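P.S. The open-coded check I have in mind, as a standalone userspace
sketch (PAGE_SHIFT and the order values are made up for illustration;
the bound is written as "order > 63 - PAGE_SHIFT" so the addition
itself cannot wrap when order is huge):

	#include <stdio.h>

	#define PAGE_SHIFT 12		/* assume 4 KB pages */

	/* Returns 0 if the order is sane, -1 if the chunk must be skipped. */
	static int check_order(unsigned int order)
	{
		/*
		 * Comparing order itself, rather than (order + PAGE_SHIFT),
		 * avoids the addition wrapping for order close to UINT_MAX.
		 */
		if (order > 63 - PAGE_SHIFT) {
			fprintf(stderr, "invalid order %u for preserved bitmap\n",
				order);
			return -1;
		}

		printf("order %u -> size %lu\n", order,
		       1UL << (order + PAGE_SHIFT));
		return 0;
	}

	int main(void)
	{
		check_order(9);		/* sane: 2 MiB with 4 KB pages */
		check_order(4096);	/* corrupted: rejected */
		return 0;
	}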