* [RFC PATCH 40/45] PM: hibernate: walk per-superpageblock free lists in mark_free_pages
From: Rik van Riel @ 2026-04-30 20:21 UTC
To: linux-kernel
Cc: kernel-team, linux-mm, david, willy, surenb, hannes, ljs, ziy,
usama.arif, Rik van Riel, Rafael J. Wysocki, Len Brown,
Pavel Machek, linux-pm, Rik van Riel
From: Rik van Riel <riel@meta.com>
mark_free_pages() walks the buddy allocator's free lists and calls
swsusp_set_page_free() on each free page so it is omitted from the
hibernation image. After the SPB rework, free pages live on
per-superpageblock free lists rather than the zone-level free lists,
so the existing list_for_each_entry() walk over
zone->free_area[order].free_list[t] finds nothing. The hibernation
snapshot then treats every free page as one that must be saved,
wasting image space and risking OOM during the snapshot.
Wrap the existing per-page walk in an SPB iteration loop. When the
zone has no SPBs (e.g. an unpopulated hotplug zone), fall back to the
zone-level free list. The whole function still runs under
spin_lock_irqsave(&zone->lock) without dropping the lock, so there are
no lock-order or hotplug concerns.
Note: kernel/power/snapshot.o currently fails to build with
-Werror=return-type due to an unrelated pre-existing warning in
enough_free_mem(). This patch was reviewed but not build-tested in
isolation; the change itself is mechanical.
Cc: Rafael J. Wysocki <rafael@kernel.org>
Cc: Len Brown <lenb@kernel.org>
Cc: Pavel Machek <pavel@kernel.org>
Cc: linux-pm@vger.kernel.org
Signed-off-by: Rik van Riel <riel@surriel.com>
Assisted-by: Claude:claude-opus-4.7 syzkaller
---
kernel/power/snapshot.c | 35 +++++++++++++++++++++++++----------
1 file changed, 25 insertions(+), 10 deletions(-)
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 6e1321837c66..682c3bf2ba8b 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1270,17 +1270,32 @@ static void mark_free_pages(struct zone *zone)
 		}
 
 	for_each_migratetype_order(order, t) {
-		list_for_each_entry(page,
-				&zone->free_area[order].free_list[t], buddy_list) {
-			unsigned long i;
-
-			pfn = page_to_pfn(page);
-			for (i = 0; i < (1UL << order); i++) {
-				if (!--page_count) {
-					touch_nmi_watchdog();
-					page_count = WD_PAGE_COUNT;
+		unsigned long sb_idx;
+		unsigned long nr_lists = zone->nr_superpageblocks ? : 1;
+
+		/*
+		 * After the SPB rework, free pages live on per-superpageblock
+		 * free lists. Walk every SPB's list for this (order, mt) cell.
+		 * If the zone has no SPBs (unpopulated zone), fall back to the
+		 * zone-level list head so that any pre-SPB pages are still
+		 * marked.
+		 */
+		for (sb_idx = 0; sb_idx < nr_lists; sb_idx++) {
+			struct list_head *list = zone->nr_superpageblocks ?
+				&zone->superpageblocks[sb_idx].free_area[order].free_list[t] :
+				&zone->free_area[order].free_list[t];
+
+			list_for_each_entry(page, list, buddy_list) {
+				unsigned long i;
+
+				pfn = page_to_pfn(page);
+				for (i = 0; i < (1UL << order); i++) {
+					if (!--page_count) {
+						touch_nmi_watchdog();
+						page_count = WD_PAGE_COUNT;
+					}
+					swsusp_set_page_free(pfn_to_page(pfn + i));
 				}
-				swsusp_set_page_free(pfn_to_page(pfn + i));
 			}
 		}
 	}
--
2.52.0