From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 20 Jan 2026 15:05:03 +0200
From: Mike Rapoport
To: Pratyush Yadav
Cc: Andrew Morton, Alexander Graf, Pasha Tatashin, kexec@lists.infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Suren Baghdasaryan
Subject: Re: [PATCH v2 2/2] kho: simplify page initialization in kho_restore_page()
Message-ID:
References: <20260116112217.915803-1-pratyush@kernel.org>
 <20260116112217.915803-3-pratyush@kernel.org>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20260116112217.915803-3-pratyush@kernel.org>

On Fri, Jan 16, 2026 at 11:22:15AM +0000, Pratyush Yadav wrote:
> When restoring a page (from kho_restore_pages()) or folio (from
> kho_restore_folio()), KHO must initialize the struct page. The
> initialization differs slightly depending on whether a folio is
> requested or a set of 0-order pages is requested.
>
> Conceptually, it is quite simple to understand. When restoring 0-order
> pages, each page gets a refcount of 1 and that's it. When restoring a
> folio, the head page gets a refcount of 1 and the tail pages get 0.
>
> kho_restore_page() tries to combine the two separate initialization
> flows into one piece of code. While it works fine, it is more
> complicated to read than it needs to be. Make the code simpler by
> splitting the two initialization paths into two separate functions.
> This improves readability by clearly showing how each type must be
> initialized.
>
> Signed-off-by: Pratyush Yadav

Reviewed-by: Mike Rapoport (Microsoft)

> ---
>
> Changes in v2:
> - Use unsigned long for nr_pages.
>
>  kernel/liveupdate/kexec_handover.c | 40 +++++++++++++++++++-----------
>  1 file changed, 26 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 709484fbf9fd..92da76977684 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -219,11 +219,32 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
>  	return 0;
>  }
>
> +/* For physically contiguous 0-order pages. */
> +static void kho_init_pages(struct page *page, unsigned long nr_pages)
> +{
> +	for (unsigned long i = 0; i < nr_pages; i++)
> +		set_page_count(page + i, 1);
> +}
> +
> +static void kho_init_folio(struct page *page, unsigned int order)
> +{
> +	unsigned long nr_pages = (1 << order);
> +
> +	/* Head page gets refcount of 1. */
> +	set_page_count(page, 1);
> +
> +	/* For higher order folios, tail pages get a page count of zero. */
> +	for (unsigned long i = 1; i < nr_pages; i++)
> +		set_page_count(page + i, 0);
> +
> +	if (order > 0)
> +		prep_compound_page(page, order);
> +}
> +
>  static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
>  {
>  	struct page *page = pfn_to_online_page(PHYS_PFN(phys));
>  	unsigned long nr_pages;
> -	unsigned int ref_cnt;
>  	union kho_page_info info;
>
>  	if (!page)
> @@ -241,20 +262,11 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
>
>  	/* Clear private to make sure later restores on this page error out. */
>  	page->private = 0;
> -	/* Head page gets refcount of 1. */
> -	set_page_count(page, 1);
> -
> -	/*
> -	 * For higher order folios, tail pages get a page count of zero.
> -	 * For physically contiguous order-0 pages every pages gets a page
> -	 * count of 1
> -	 */
> -	ref_cnt = is_folio ? 0 : 1;
> -	for (unsigned long i = 1; i < nr_pages; i++)
> -		set_page_count(page + i, ref_cnt);
>
> -	if (is_folio && info.order)
> -		prep_compound_page(page, info.order);
> +	if (is_folio)
> +		kho_init_folio(page, info.order);
> +	else
> +		kho_init_pages(page, nr_pages);
>
>  	adjust_managed_page_count(page, nr_pages);
>  	return page;
> --
> 2.52.0.457.g6b5491de43-goog
>

--
Sincerely yours,
	Mike.