From: Pratyush Yadav
To: Mike Rapoport
Cc: Andrew Morton, Alexander Graf, Pasha Tatashin, Pratyush Yadav,
 kexec@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] kho: fix restoring of contiguous ranges of order-0 pages
In-Reply-To: <20251125110917.843744-3-rppt@kernel.org> (Mike Rapoport's
 message of "Tue, 25 Nov 2025 13:09:17 +0200")
References: <20251125110917.843744-1-rppt@kernel.org>
 <20251125110917.843744-3-rppt@kernel.org>
Date: Tue, 25 Nov 2025 14:45:59 +0100

On Tue, Nov 25 2025, Mike Rapoport wrote:

> From: "Mike Rapoport (Microsoft)"
>
> When contiguous ranges of order-0 pages are restored, kho_restore_page()
> calls prep_compound_page() with the first page in the range and order as
> parameters, and then kho_restore_pages() calls split_page() to make sure
> all pages in the range are order-0.
>
> However, split_page() is not intended to split compound pages, and with
> VM_DEBUG enabled it will trigger a VM_BUG_ON_PAGE().
>
> Update kho_restore_page() so that it uses prep_compound_page() only when
> it restores a folio, and make sure it properly sets the page count for
> both large folios and ranges of order-0 pages.
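As an aside, the refcount rules the commit message describes can be sketched in plain userspace C. This is only an illustrative model, not kernel code: the refcounts[] array and the model_restore_*() names stand in for struct page refcounts and the KHO restore paths.

```c
#include <assert.h>	/* assert.h is for the usage checks below */

/*
 * Folio restore: the head page gets a refcount of 1 and all tail
 * pages get a refcount of 0, matching what the patch does before
 * prep_compound_page().
 */
static void model_restore_folio(unsigned int *refcounts, unsigned int order)
{
	unsigned int nr_pages = 1u << order;

	refcounts[0] = 1;			/* head page */
	for (unsigned int i = 1; i < nr_pages; i++)
		refcounts[i] = 0;		/* tail pages */
}

/*
 * Order-0 range restore: there is no compound head, so every page
 * in the range gets a refcount of 1.
 */
static void model_restore_pages(unsigned int *refcounts, unsigned int nr_pages)
{
	for (unsigned int i = 0; i < nr_pages; i++)
		refcounts[i] = 1;
}
```

The point of the fix is exactly this split: only the folio path may treat page 0 as a compound head.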
>
> Reported-by: Pratyush Yadav
> Fixes: a667300bd53f ("kho: add support for preserving vmalloc allocations")
> Signed-off-by: Mike Rapoport (Microsoft)
> ---
>  kernel/liveupdate/kexec_handover.c | 20 ++++++++++++--------
>  1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index e64ee87fa62a..61d17ed1f423 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -219,11 +219,11 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
>  	return 0;
>  }
>  
> -static struct page *kho_restore_page(phys_addr_t phys)
> +static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
>  {
>  	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
> +	unsigned int nr_pages, ref_cnt;
>  	union kho_page_info info;
> -	unsigned int nr_pages;
>  
>  	if (!page)
>  		return NULL;
> @@ -243,11 +243,16 @@ static struct page *kho_restore_page(phys_addr_t phys)
>  	/* Head page gets refcount of 1. */
>  	set_page_count(page, 1);
>  
> -	/* For higher order folios, tail pages get a page count of zero. */
> +	/*
> +	 * For higher order folios, tail pages get a page count of zero.
> +	 * For physically contiguous order-0 pages every page gets a page
> +	 * count of 1.
> +	 */
> +	ref_cnt = is_folio ? 0 : 1;
>  	for (unsigned int i = 1; i < nr_pages; i++)
> -		set_page_count(page + i, 0);
> +		set_page_count(page + i, ref_cnt);
>  
> -	if (info.order > 0)
> +	if (is_folio && info.order)

This is getting a bit difficult to parse. Let's separate the folio and
page initialization into separate helpers:

/* Initialize order-0 KHO pages. */
static void kho_init_page(struct page *page, unsigned int nr_pages)
{
	for (unsigned int i = 0; i < nr_pages; i++)
		set_page_count(page + i, 1);
}

static void kho_init_folio(struct page *page, unsigned int order)
{
	unsigned int nr_pages = (1 << order);

	/* Head page gets refcount of 1. */
	set_page_count(page, 1);

	/* For higher order folios, tail pages get a page count of zero. */
	for (unsigned int i = 1; i < nr_pages; i++)
		set_page_count(page + i, 0);

	if (order > 0)
		prep_compound_page(page, order);
}

>  		prep_compound_page(page, info.order);
>  
>  	adjust_managed_page_count(page, nr_pages);
> @@ -262,7 +267,7 @@ static struct page *kho_restore_page(phys_addr_t phys)
>   */
>  struct folio *kho_restore_folio(phys_addr_t phys)
>  {
> -	struct page *page = kho_restore_page(phys);
> +	struct page *page = kho_restore_page(phys, true);
>  
>  	return page ? page_folio(page) : NULL;
>  }
> @@ -287,11 +292,10 @@ struct page *kho_restore_pages(phys_addr_t phys, unsigned int nr_pages)
>  	while (pfn < end_pfn) {
>  		const unsigned int order =
>  			min(count_trailing_zeros(pfn), ilog2(end_pfn - pfn));
> -		struct page *page = kho_restore_page(PFN_PHYS(pfn));
> +		struct page *page = kho_restore_page(PFN_PHYS(pfn), false);
>  
>  		if (!page)
>  			return NULL;
> -		split_page(page, order);
>  		pfn += 1 << order;
>  	}

-- 
Regards,
Pratyush Yadav
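P.S. For what it's worth, the chunking logic of that last loop is easy to model in userspace: the range [pfn, end_pfn) is walked in maximal power-of-two blocks that are both aligned to pfn and fit in the remaining range. In this sketch, ctz() and ilog2u() are local stand-ins for the kernel's count_trailing_zeros() and ilog2(); count_chunks() is a hypothetical helper that just counts the blocks.

```c
#include <assert.h>	/* assert.h is for the usage checks below */

static unsigned int ctz(unsigned long x)
{
	return (unsigned int)__builtin_ctzl(x);
}

static unsigned int ilog2u(unsigned long x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

/*
 * Walk [pfn, end_pfn) in maximal aligned power-of-two chunks, the way
 * kho_restore_pages() does, and return how many chunks were visited.
 */
static unsigned int count_chunks(unsigned long pfn, unsigned long end_pfn)
{
	unsigned int chunks = 0;

	while (pfn < end_pfn) {
		unsigned int limit = ilog2u(end_pfn - pfn);
		/* ctz(0) is undefined; pfn == 0 is aligned to anything. */
		unsigned int order = pfn ? ctz(pfn) : limit;

		if (limit < order)
			order = limit;
		pfn += 1UL << order;
		chunks++;
	}
	return chunks;
}
```

E.g. the range [3, 12) decomposes into [3], [4..7], [8..11], i.e. three chunks, which is why every page in the range must already come back with an order-0 refcount instead of being split afterwards.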