* [PATCH v3 8/8] mm: use zone lock guard in __offline_isolated_pages()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
` (7 subsequent siblings)
8 siblings, 1 reply; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in __offline_isolated_pages() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 27bd316e4453..8cfec846203f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7358,7 +7358,7 @@ void zone_pcp_reset(struct zone *zone)
unsigned long __offline_isolated_pages(unsigned long start_pfn,
unsigned long end_pfn)
{
- unsigned long already_offline = 0, flags;
+ unsigned long already_offline = 0;
unsigned long pfn = start_pfn;
struct page *page;
struct zone *zone;
@@ -7366,7 +7366,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
offline_mem_sections(pfn, end_pfn);
zone = page_zone(pfn_to_page(pfn));
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
while (pfn < end_pfn) {
page = pfn_to_page(pfn);
/*
@@ -7396,7 +7396,6 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
pfn += (1 << order);
}
- spin_unlock_irqrestore(&zone->lock, flags);
return end_pfn - start_pfn - already_offline;
}
--
2.52.0
* [PATCH v3 0/8] mm: use spinlock guards for zone lock
2026-04-29 12:02 ` [PATCH v3 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
@ 2026-04-29 12:02 Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
` (8 more replies)
0 siblings, 9 replies; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
This series uses the spinlock guard for the zone lock across several mm
functions to replace explicit lock/unlock patterns with automatic
scope-based cleanup.
This simplifies the control flow by removing 'flags' variables, goto
labels, and redundant unlock calls.
Patches are ordered by decreasing value. The first six patches simplify
the control flow by removing gotos, multiple unlock paths, or 'ret'
variables. The last two are simpler lock/unlock pair conversions that
only remove 'flags' and can be dropped if considered unnecessary churn.
The binary size increase is +39 bytes with Peter Zijlstra's fix for
guards [1] applied (already in mm-stable). This is due to the compiler
not being able to deduplicate the epilogue and eliminate a redundant
NULL check. See the discussion [2] for more details. I proposed a patch
[3] that fixes this, but until it is merged we should assume the
+39 bytes will stay (though it is compiler dependent).
Based on mm-stable.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
v2 -> v3:
- Rebased on top of mm-stable.
- Added Acked-by from Michal.
v1 -> v2:
- Andrew Morton raised concerns about binary size increase in v1.
Peter Zijlstra has since fixed the underlying issue in the guards
infrastructure in tip [1].
- Rebased on mm-stable, since the patch this series depended on was
dropped from mm-new.
- Converted guard(zone_lock_irqsave)(zone) to
guard(spinlock_irqsave)(&zone->lock).
- Dropped redundant braces in unreserve_highatomic_pageblock()
(Steven Rostedt)
v2 resend: https://lore.kernel.org/all/cover.1775654118.git.d@ilvokhin.com/
v2: https://lore.kernel.org/all/cover.1774627568.git.d@ilvokhin.com/
v1: https://lore.kernel.org/all/cover.1772811429.git.d@ilvokhin.com/
[1]: https://lore.kernel.org/all/20260309164516.GE606826@noisy.programming.kicks-ass.net/
[2]: https://lore.kernel.org/all/afC5C6fylF4AsITV@shell.ilvokhin.com/
[3]: https://lore.kernel.org/all/20260427165037.205337-1-d@ilvokhin.com/
Dmitry Ilvokhin (8):
mm: use zone lock guard in reserve_highatomic_pageblock()
mm: use zone lock guard in unset_migratetype_isolate()
mm: use zone lock guard in unreserve_highatomic_pageblock()
mm: use zone lock guard in set_migratetype_isolate()
mm: use zone lock guard in take_page_off_buddy()
mm: use zone lock guard in put_page_back_buddy()
mm: use zone lock guard in free_pcppages_bulk()
mm: use zone lock guard in __offline_isolated_pages()
mm/page_alloc.c | 53 ++++++++++++-----------------------
mm/page_isolation.c | 67 +++++++++++++++++++--------------------------
2 files changed, 45 insertions(+), 75 deletions(-)
--
2.52.0
* [PATCH v3 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
` (6 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in
reserve_highatomic_pageblock() to replace the explicit lock/unlock and
goto out_unlock pattern with automatic scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 65e205111553..14c172c9b6c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3429,7 +3429,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
struct zone *zone)
{
int mt;
- unsigned long max_managed, flags;
+ unsigned long max_managed;
/*
* The number reserved as: minimum is 1 pageblock, maximum is
@@ -3443,29 +3443,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
if (zone->nr_reserved_highatomic >= max_managed)
return;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
/* Recheck the nr_reserved_highatomic limit under the lock */
if (zone->nr_reserved_highatomic >= max_managed)
- goto out_unlock;
+ return;
/* Yoink! */
mt = get_pageblock_migratetype(page);
/* Only reserve normal pageblocks (i.e., they can merge with others) */
if (!migratetype_is_mergeable(mt))
- goto out_unlock;
+ return;
if (order < pageblock_order) {
if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
- goto out_unlock;
+ return;
zone->nr_reserved_highatomic += pageblock_nr_pages;
} else {
change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
zone->nr_reserved_highatomic += 1 << order;
}
-
-out_unlock:
- spin_unlock_irqrestore(&zone->lock, flags);
}
/*
--
2.52.0
* [PATCH v3 2/8] mm: use zone lock guard in unset_migratetype_isolate()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 15:40 ` Zi Yan
2026-04-29 12:02 ` [PATCH v3 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
` (5 subsequent siblings)
8 siblings, 1 reply; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in unset_migratetype_isolate()
to replace the explicit lock/unlock and goto pattern with automatic
scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_isolation.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index c48ff5c00244..9d606052dd80 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -223,15 +223,14 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
static void unset_migratetype_isolate(struct page *page)
{
struct zone *zone;
- unsigned long flags;
bool isolated_page = false;
unsigned int order;
struct page *buddy;
zone = page_zone(page);
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
if (!is_migrate_isolate_page(page))
- goto out;
+ return;
/*
* Because freepage with more than pageblock_order on isolated
@@ -279,8 +278,6 @@ static void unset_migratetype_isolate(struct page *page)
__putback_isolated_page(page, order, get_pageblock_migratetype(page));
}
zone->nr_isolate_pageblock--;
-out:
- spin_unlock_irqrestore(&zone->lock, flags);
}
static inline struct page *
--
2.52.0
* [PATCH v3 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
` (2 preceding siblings ...)
2026-04-29 12:02 ` [PATCH v3 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
` (4 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in
unreserve_highatomic_pageblock() to replace the explicit lock/unlock
pattern with automatic scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 14c172c9b6c6..2f4170ae60f5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3478,7 +3478,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
bool force)
{
struct zonelist *zonelist = ac->zonelist;
- unsigned long flags;
struct zoneref *z;
struct zone *zone;
struct page *page;
@@ -3495,7 +3494,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
pageblock_nr_pages)
continue;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
for (order = 0; order < NR_PAGE_ORDERS; order++) {
struct free_area *area = &(zone->free_area[order]);
unsigned long size;
@@ -3542,12 +3541,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
* so this should not fail on zone boundaries.
*/
WARN_ON_ONCE(ret == -1);
- if (ret > 0) {
- spin_unlock_irqrestore(&zone->lock, flags);
+ if (ret > 0)
return ret;
- }
}
- spin_unlock_irqrestore(&zone->lock, flags);
}
return false;
--
2.52.0
* [PATCH v3 4/8] mm: use zone lock guard in set_migratetype_isolate()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
` (3 preceding siblings ...)
2026-04-29 12:02 ` [PATCH v3 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 15:39 ` Zi Yan
2026-04-29 12:02 ` [PATCH v3 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
` (3 subsequent siblings)
8 siblings, 1 reply; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave scoped lock guard in set_migratetype_isolate()
to replace the explicit lock/unlock pattern with automatic scope-based
cleanup. The scoped variant is used to keep dump_page() outside the
locked section to avoid a lockdep splat.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_isolation.c | 60 ++++++++++++++++++++-------------------------
1 file changed, 26 insertions(+), 34 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 9d606052dd80..7a9d631945a3 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -167,48 +167,40 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
{
struct zone *zone = page_zone(page);
struct page *unmovable;
- unsigned long flags;
unsigned long check_unmovable_start, check_unmovable_end;
if (PageUnaccepted(page))
accept_page(page);
- spin_lock_irqsave(&zone->lock, flags);
-
- /*
- * We assume the caller intended to SET migrate type to isolate.
- * If it is already set, then someone else must have raced and
- * set it before us.
- */
- if (is_migrate_isolate_page(page)) {
- spin_unlock_irqrestore(&zone->lock, flags);
- return -EBUSY;
- }
-
- /*
- * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
- * We just check MOVABLE pages.
- *
- * Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
- * to avoid redundant checks.
- */
- check_unmovable_start = max(page_to_pfn(page), start_pfn);
- check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
- end_pfn);
-
- unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
- mode);
- if (!unmovable) {
- if (!pageblock_isolate_and_move_free_pages(zone, page)) {
- spin_unlock_irqrestore(&zone->lock, flags);
+ scoped_guard(spinlock_irqsave, &zone->lock) {
+ /*
+ * We assume the caller intended to SET migrate type to
+ * isolate. If it is already set, then someone else must have
+ * raced and set it before us.
+ */
+ if (is_migrate_isolate_page(page))
return -EBUSY;
+
+ /*
+ * FIXME: Now, memory hotplug doesn't call shrink_slab() by
+ * itself. We just check MOVABLE pages.
+ *
+ * Pass the intersection of [start_pfn, end_pfn) and the page's
+ * pageblock to avoid redundant checks.
+ */
+ check_unmovable_start = max(page_to_pfn(page), start_pfn);
+ check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
+ end_pfn);
+
+ unmovable = has_unmovable_pages(check_unmovable_start,
+ check_unmovable_end, mode);
+ if (!unmovable) {
+ if (!pageblock_isolate_and_move_free_pages(zone, page))
+ return -EBUSY;
+ zone->nr_isolate_pageblock++;
+ return 0;
}
- zone->nr_isolate_pageblock++;
- spin_unlock_irqrestore(&zone->lock, flags);
- return 0;
}
-
- spin_unlock_irqrestore(&zone->lock, flags);
if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
/*
* printk() with zone->lock held will likely trigger a
--
2.52.0
* [PATCH v3 5/8] mm: use zone lock guard in take_page_off_buddy()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
` (4 preceding siblings ...)
2026-04-29 12:02 ` [PATCH v3 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
` (2 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in take_page_off_buddy() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup. This also allows returning directly from the loop, removing
the 'ret' variable.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2f4170ae60f5..013c97a3db12 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7471,11 +7471,9 @@ bool take_page_off_buddy(struct page *page)
{
struct zone *zone = page_zone(page);
unsigned long pfn = page_to_pfn(page);
- unsigned long flags;
unsigned int order;
- bool ret = false;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
for (order = 0; order < NR_PAGE_ORDERS; order++) {
struct page *page_head = page - (pfn & ((1 << order) - 1));
int page_order = buddy_order(page_head);
@@ -7490,14 +7488,12 @@ bool take_page_off_buddy(struct page *page)
break_down_buddy_pages(zone, page_head, page, 0,
page_order, migratetype);
SetPageHWPoisonTakenOff(page);
- ret = true;
- break;
+ return true;
}
if (page_count(page_head) > 0)
break;
}
- spin_unlock_irqrestore(&zone->lock, flags);
- return ret;
+ return false;
}
/*
--
2.52.0
* [PATCH v3 6/8] mm: use zone lock guard in put_page_back_buddy()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
` (5 preceding siblings ...)
2026-04-29 12:02 ` [PATCH v3 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 12:02 ` [PATCH v3 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
2026-04-29 15:37 ` [PATCH v3 0/8] mm: use spinlock guards for zone lock Andrew Morton
8 siblings, 0 replies; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in put_page_back_buddy() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 013c97a3db12..87758fb6a926 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7502,23 +7502,19 @@ bool take_page_off_buddy(struct page *page)
bool put_page_back_buddy(struct page *page)
{
struct zone *zone = page_zone(page);
- unsigned long flags;
- bool ret = false;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
if (put_page_testzero(page)) {
unsigned long pfn = page_to_pfn(page);
int migratetype = get_pfnblock_migratetype(page, pfn);
ClearPageHWPoisonTakenOff(page);
__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
- if (TestClearPageHWPoison(page)) {
- ret = true;
- }
+ if (TestClearPageHWPoison(page))
+ return true;
}
- spin_unlock_irqrestore(&zone->lock, flags);
- return ret;
+ return false;
}
#endif
--
2.52.0
* [PATCH v3 7/8] mm: use zone lock guard in free_pcppages_bulk()
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
` (6 preceding siblings ...)
2026-04-29 12:02 ` [PATCH v3 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
@ 2026-04-29 12:02 ` Dmitry Ilvokhin
2026-04-29 15:37 ` [PATCH v3 0/8] mm: use spinlock guards for zone lock Andrew Morton
8 siblings, 0 replies; 13+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-29 12:02 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in free_pcppages_bulk() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/page_alloc.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 87758fb6a926..27bd316e4453 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1456,7 +1456,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
struct per_cpu_pages *pcp,
int pindex)
{
- unsigned long flags;
unsigned int order;
struct page *page;
@@ -1469,7 +1468,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
/* Ensure requested pindex is drained first. */
pindex = pindex - 1;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
while (count > 0) {
struct list_head *list;
@@ -1501,8 +1500,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
trace_mm_page_pcpu_drain(page, order, mt);
} while (count > 0 && !list_empty(list));
}
-
- spin_unlock_irqrestore(&zone->lock, flags);
}
/* Split a multi-block free page into its individual pageblocks. */
--
2.52.0
* Re: [PATCH v3 0/8] mm: use spinlock guards for zone lock
2026-04-29 12:02 [PATCH v3 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
` (7 preceding siblings ...)
2026-04-29 12:02 ` [PATCH v3 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
@ 2026-04-29 15:37 ` Andrew Morton
8 siblings, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2026-04-29 15:37 UTC (permalink / raw)
To: Dmitry Ilvokhin
Cc: Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
kernel-team, Steven Rostedt, david, Lorenzo Stoakes,
Peter Zijlstra
On Wed, 29 Apr 2026 12:02:05 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
> This series uses the spinlock guard for the zone lock across several mm
> functions to replace explicit lock/unlock patterns with automatic
> scope-based cleanup.
>
> This simplifies the control flow by removing 'flags' variables, goto
> labels, and redundant unlock calls.
>
> Patches are ordered by decreasing value. The first six patches simplify
> the control flow by removing gotos, multiple unlock paths, or 'ret'
> variables. The last two are simpler lock/unlock pair conversions that
> only remove 'flags' and can be dropped if considered unnecessary churn.
>
> The binary size increase is +39 bytes with Peter Zijlstra's fix for
> guards [1] applied (already in mm-stable). This is due to the compiler
> not being able to deduplicate the epilogue and eliminate a redundant
> NULL check. See the discussion [2] for more details. I proposed a patch
> [3] that fixes this, but until it is merged we should assume the
> +39 bytes will stay (though it is compiler dependent).
OK, thanks, I'll queue it up.
Yet again: the question here is whether those who work on this code
like guard(), or would prefer the traditional open-coded locking.
Michal was an ack, others were silent.
* Re: [PATCH v3 4/8] mm: use zone lock guard in set_migratetype_isolate()
2026-04-29 12:02 ` [PATCH v3 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
@ 2026-04-29 15:39 ` Zi Yan
0 siblings, 0 replies; 13+ messages in thread
From: Zi Yan @ 2026-04-29 15:39 UTC (permalink / raw)
To: Dmitry Ilvokhin
Cc: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, linux-mm, linux-kernel,
kernel-team, Steven Rostedt
On 29 Apr 2026, at 8:02, Dmitry Ilvokhin wrote:
> Use the spinlock_irqsave scoped lock guard in set_migratetype_isolate()
> to replace the explicit lock/unlock pattern with automatic scope-based
> cleanup. The scoped variant is used to keep dump_page() outside the
> locked section to avoid a lockdep splat.
>
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/page_isolation.c | 60 ++++++++++++++++++++-------------------------
> 1 file changed, 26 insertions(+), 34 deletions(-)
>
Acked-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi
* Re: [PATCH v3 2/8] mm: use zone lock guard in unset_migratetype_isolate()
2026-04-29 12:02 ` [PATCH v3 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
@ 2026-04-29 15:40 ` Zi Yan
0 siblings, 0 replies; 13+ messages in thread
From: Zi Yan @ 2026-04-29 15:40 UTC (permalink / raw)
To: Dmitry Ilvokhin
Cc: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, linux-mm, linux-kernel,
kernel-team, Steven Rostedt
On 29 Apr 2026, at 8:02, Dmitry Ilvokhin wrote:
> Use the spinlock_irqsave zone lock guard in unset_migratetype_isolate()
> to replace the explicit lock/unlock and goto pattern with automatic
> scope-based cleanup.
>
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/page_isolation.c | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
Acked-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi