From: Pratyush Yadav <pratyush@kernel.org>
To: Alexander Graf <graf@amazon.com>, Mike Rapoport <rppt@kernel.org>,
Pasha Tatashin <pasha.tatashin@soleen.com>,
Pratyush Yadav <pratyush@kernel.org>,
Andrew Morton <akpm@linux-foundation.org>
Cc: kexec@lists.infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] kho: drop restriction on maximum page order
Date: Mon, 9 Mar 2026 12:34:07 +0000 [thread overview]
Message-ID: <20260309123410.382308-2-pratyush@kernel.org> (raw)
In-Reply-To: <20260309123410.382308-1-pratyush@kernel.org>

KHO currently restricts the maximum order of a restored page to the
maximum order supported by the buddy allocator. While this works fine
for much of the data passed across kexec, it is possible to have pages
of a higher order than MAX_PAGE_ORDER.

For one, kho_preserve_pages() can produce a higher order when the
number of pages is large enough, since it tries to combine multiple
aligned 0-order preservations into one higher-order preservation.

For another, upcoming hugepage support can have gigantic hugepages
being preserved over KHO.

There is no real reason for this limit: the KHO preservation machinery
can handle any page order. Remove this artificial restriction on the
maximum page order.

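To illustrate the combining idea described above, here is a small
userspace sketch (not kernel code; pick_order() is a hypothetical name
used only for illustration) of choosing the largest order whose block
is both naturally aligned at a given pfn and covered by the remaining
page count:

```c
#include <assert.h>

/*
 * Illustrative sketch, not kernel code: pick the largest order whose
 * block is naturally aligned at @pfn and fits within @nr_pages. This
 * mirrors the combining idea in kho_preserve_pages(); pick_order() is
 * a hypothetical helper name.
 */
static unsigned int pick_order(unsigned long pfn, unsigned long nr_pages)
{
	unsigned int order = 0;

	/* grow while the next larger block stays aligned and in range */
	while ((1UL << (order + 1)) <= nr_pages &&
	       (pfn & ((1UL << (order + 1)) - 1)) == 0)
		order++;

	return order;
}
```

With a large enough aligned run, the resulting order can exceed
MAX_PAGE_ORDER, which is why the restore path must not reject such
preservations.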
Signed-off-by: Pratyush Yadav <pratyush@kernel.org>
---
Notes:
This patch was first sent with this RFC series [0]. I am sending it
separately since it is an independent patch that is useful even without
hugepage preservation. No changes since the RFC.
[0] https://lore.kernel.org/linux-mm/20251206230222.853493-1-pratyush@kernel.org/T/#u
kernel/liveupdate/kexec_handover.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index bc9bd18294ee..1038e41ff9f9 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -253,7 +253,7 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
* check also implicitly makes sure phys is order-aligned since for
* non-order-aligned phys addresses, magic will never be set.
*/
- if (WARN_ON_ONCE(info.magic != KHO_PAGE_MAGIC || info.order > MAX_PAGE_ORDER))
+ if (WARN_ON_ONCE(info.magic != KHO_PAGE_MAGIC))
return NULL;
nr_pages = (1 << info.order);
--
2.53.0.473.g4a7958ca14-goog
Thread overview: 8+ messages
2026-03-09 12:34 [PATCH 1/2] kho: make sure preservations do not span multiple NUMA nodes Pratyush Yadav
2026-03-09 12:34 ` Pratyush Yadav [this message]
2026-03-10 10:33 ` [PATCH 2/2] kho: drop restriction on maximum page order Mike Rapoport
2026-03-17 9:12 ` Pratyush Yadav
2026-03-17 11:04 ` Mike Rapoport
2026-03-20 10:24 ` Pratyush Yadav
2026-03-09 15:59 ` [PATCH 1/2] kho: make sure preservations do not span multiple NUMA nodes Samiullah Khawaja
2026-03-10 10:32 ` Mike Rapoport