public inbox for linux-mm@kvack.org
From: Pratyush Yadav <pratyush@kernel.org>
To: Marco Elver <elver@google.com>
Cc: Alexander Graf <graf@amazon.com>,
	 Mike Rapoport <rppt@kernel.org>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
	 Pratyush Yadav <pratyush@kernel.org>,
	 kexec@lists.infradead.org,  linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,  kasan-dev@googlegroups.com
Subject: Re: [PATCH v2] kho: use checked arithmetic in deserialize_bitmap()
Date: Fri, 20 Mar 2026 08:56:34 +0000	[thread overview]
Message-ID: <2vxzzf42c20t.fsf@kernel.org> (raw)
In-Reply-To: <20260319210528.1694513-2-elver@google.com> (Marco Elver's message of "Thu, 19 Mar 2026 22:03:53 +0100")

Hi Marco,

On Thu, Mar 19 2026, Marco Elver wrote:

> The function deserialize_bitmap() calculates the reservation size using:
>
>     int sz = 1 << (order + PAGE_SHIFT);
>
> If a corrupted KHO image provides an order >= 20 (on systems with 4KB
> pages), the shift amount becomes >= 32, which overflows the 32-bit
> integer. This results in a zero-size memory reservation.
>
> Furthermore, the physical address calculation:
>
>     phys_addr_t phys = elm->phys_start + (bit << (order + PAGE_SHIFT));
>
> can also overflow and wrap around if the order is large. This allows a
> corrupt KHO image to cause out-of-bounds updates to page->private of
> arbitrary physical pages during early boot.
>
> Fix this by changing 'sz' to 'unsigned long' and using checked add and
> shift to safely calculate the shift amount, size, and physical address,
> skipping malformed chunks. This allows preserving memory with an order
> larger than MAX_PAGE_ORDER.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Signed-off-by: Marco Elver <elver@google.com>

deserialize_bitmap() is replaced by the radix tree in this series
[0]. Can you please redo these changes on top of that?

Also, a couple of comments below.

[0] https://lore.kernel.org/linux-mm/20260206021428.3386442-1-jasonmiu@google.com/

> ---
> v2:
> * Switch to unsigned long and use checked shift and add (Mike).
>
> v1: https://lore.kernel.org/all/20260214010013.3027519-1-elver@google.com/
> ---
>  kernel/liveupdate/kexec_handover.c | 23 +++++++++++++++++++----
>  1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index cc68a3692905..0d8417dcd3ff 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -19,6 +19,7 @@
>  #include <linux/libfdt.h>
>  #include <linux/list.h>
>  #include <linux/memblock.h>
> +#include <linux/overflow.h>
>  #include <linux/page-isolation.h>
>  #include <linux/unaligned.h>
>  #include <linux/vmalloc.h>
> @@ -461,15 +462,29 @@ static void __init deserialize_bitmap(unsigned int order,
>  				      struct khoser_mem_bitmap_ptr *elm)
>  {
>  	struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
> +	unsigned int shift;
>  	unsigned long bit;
> +	unsigned long sz;
> +
> +	if (check_add_overflow(order, PAGE_SHIFT, &shift) ||
> +	    check_shl_overflow(1UL, shift, &sz)) {
> +		pr_warn("invalid order %u for preserved bitmap\n", order);
> +		return;
> +	}

Isn't it simpler to just check whether (order + PAGE_SHIFT) > 63? KHO is
only designed to work on 64-bit platforms, so we already know the
maximum possible shift. Is there any reason to call the checked overflow
helpers? I only ask because I find the open-coded check easier to read.

>  
>  	for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
> -		int sz = 1 << (order + PAGE_SHIFT);
> -		phys_addr_t phys =
> -			elm->phys_start + (bit << (order + PAGE_SHIFT));
> -		struct page *page = phys_to_page(phys);
> +		phys_addr_t offset, phys;
> +		struct page *page;
>  		union kho_page_info info;
>  
> +		if (check_shl_overflow((phys_addr_t)bit, shift, &offset) ||
> +		    check_add_overflow(elm->phys_start, offset, &phys)) {
> +			pr_warn("invalid phys layout for preserved bitmap\n");
> +			return;
> +		}
> +
> +		page = phys_to_page(phys);
> +
>  		memblock_reserve(phys, sz);
>  		memblock_reserved_mark_noinit(phys, sz);
>  		info.magic = KHO_PAGE_MAGIC;

-- 
Regards,
Pratyush Yadav


Thread overview: 4+ messages
2026-03-19 21:03 [PATCH v2] kho: use checked arithmetic in deserialize_bitmap() Marco Elver
2026-03-20  2:37 ` Andrew Morton
2026-03-20  9:34   ` Pratyush Yadav
2026-03-20  8:56 ` Pratyush Yadav [this message]