public inbox for linux-mm@kvack.org
* [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock
@ 2026-04-08 13:33 Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Resending v2 to get feedback from folks who work with this code, as
Andrew suggested.

This series uses spinlock guards for the zone lock across several mm
functions to replace explicit lock/unlock patterns with automatic
scope-based cleanup.

This simplifies the control flow by removing 'flags' variables, goto
labels, and redundant unlock calls.

Patches are ordered by decreasing value. The first six patches simplify
the control flow by removing gotos, multiple unlock paths, or 'ret'
variables. The last two are simpler lock/unlock pair conversions that
only remove 'flags' and can be dropped if considered unnecessary churn.
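For readers unfamiliar with the mechanism: guard() from the kernel's
<linux/cleanup.h> declares a variable whose compiler cleanup handler
releases the lock on every exit path from the enclosing scope. A
standalone userspace sketch of the same idea, using a hypothetical
pthread-based guard in place of zone->lock (illustration only, not
kernel code):

```c
#include <pthread.h>

/* Simplified analogue of the kernel's DEFINE_GUARD(): a struct whose
 * cleanup handler drops the lock when the variable leaves scope. */
struct mutex_guard { pthread_mutex_t *lock; };

static void mutex_guard_release(struct mutex_guard *g)
{
	pthread_mutex_unlock(g->lock);
}

/* Lock now; unlock automatically at the end of the enclosing scope. */
#define guard_mutex(l)							\
	struct mutex_guard __guard					\
	__attribute__((cleanup(mutex_guard_release))) =			\
		{ .lock = (pthread_mutex_lock(l), (l)) }

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static int bump(int limit)
{
	guard_mutex(&lock);
	if (counter >= limit)
		return -1;	/* early return: unlock still runs */
	counter++;		/* lock held here */
	return 0;		/* unlock runs on this path too */
}
```

This is why the patches below can replace 'goto out_unlock' with a bare
'return': the cleanup handler fires on every path, so there is no
separate unlock label to maintain.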

Based on mm-stable.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>

v1 -> v2:

- Andrew Morton raised concerns about binary size increase in v1.
  Peter Zijlstra has since fixed the underlying issue in the guards
  infrastructure in tip [1]. Note: the fix is not yet in mm-stable, so
  it needs to be applied first to reproduce these results. With that
  fix, bloat-o-meter on x86 defconfig shows a net decrease of 49 bytes
  (-0.12%) for page_alloc.o.
- Rebased on mm-stable, since the patch this series depended on was
  dropped from mm-new.
- Converted guard(zone_lock_irqsave)(zone) to
  guard(spinlock_irqsave)(&zone->lock).
- Dropped redundant braces in unreserve_highatomic_pageblock()
  (Steven Rostedt)

v2: https://lore.kernel.org/all/cover.1774627568.git.d@ilvokhin.com/
v1: https://lore.kernel.org/all/cover.1772811429.git.d@ilvokhin.com/

[1]: https://lore.kernel.org/all/20260309164516.GE606826@noisy.programming.kicks-ass.net/

Dmitry Ilvokhin (8):
  mm: use zone lock guard in reserve_highatomic_pageblock()
  mm: use zone lock guard in unset_migratetype_isolate()
  mm: use zone lock guard in unreserve_highatomic_pageblock()
  mm: use zone lock guard in set_migratetype_isolate()
  mm: use zone lock guard in take_page_off_buddy()
  mm: use zone lock guard in put_page_back_buddy()
  mm: use zone lock guard in free_pcppages_bulk()
  mm: use zone lock guard in __offline_isolated_pages()

 mm/page_alloc.c     | 53 ++++++++++++-----------------------
 mm/page_isolation.c | 67 +++++++++++++++++++--------------------------
 2 files changed, 45 insertions(+), 75 deletions(-)

-- 
2.52.0



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [RESEND PATCH v2 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use the spinlock_irqsave zone lock guard in
reserve_highatomic_pageblock() to replace the explicit lock/unlock and
goto out_unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 111b54df8a3c..3a4523c35fb6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3431,7 +3431,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 					 struct zone *zone)
 {
 	int mt;
-	unsigned long max_managed, flags;
+	unsigned long max_managed;
 
 	/*
 	 * The number reserved as: minimum is 1 pageblock, maximum is
@@ -3445,29 +3445,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 
 	/* Recheck the nr_reserved_highatomic limit under the lock */
 	if (zone->nr_reserved_highatomic >= max_managed)
-		goto out_unlock;
+		return;
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (!migratetype_is_mergeable(mt))
-		goto out_unlock;
+		return;
 
 	if (order < pageblock_order) {
 		if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
-			goto out_unlock;
+			return;
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 	} else {
 		change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
 		zone->nr_reserved_highatomic += 1 << order;
 	}
-
-out_unlock:
-	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
 /*
-- 
2.52.0




* [RESEND PATCH v2 2/8] mm: use zone lock guard in unset_migratetype_isolate()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave zone lock guard in unset_migratetype_isolate() to
replace the explicit lock/unlock and goto pattern with automatic
scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_isolation.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index c48ff5c00244..9d606052dd80 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -223,15 +223,14 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 static void unset_migratetype_isolate(struct page *page)
 {
 	struct zone *zone;
-	unsigned long flags;
 	bool isolated_page = false;
 	unsigned int order;
 	struct page *buddy;
 
 	zone = page_zone(page);
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 	if (!is_migrate_isolate_page(page))
-		goto out;
+		return;
 
 	/*
 	 * Because freepage with more than pageblock_order on isolated
@@ -279,8 +278,6 @@ static void unset_migratetype_isolate(struct page *page)
 		__putback_isolated_page(page, order, get_pageblock_migratetype(page));
 	}
 	zone->nr_isolate_pageblock--;
-out:
-	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
 static inline struct page *
-- 
2.52.0




* [RESEND PATCH v2 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave zone lock guard in unreserve_highatomic_pageblock()
to replace the explicit lock/unlock pattern with automatic scope-based
cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3a4523c35fb6..0b9a423cbd24 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3480,7 +3480,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 						bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
-	unsigned long flags;
 	struct zoneref *z;
 	struct zone *zone;
 	struct page *page;
@@ -3497,7 +3496,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 					pageblock_nr_pages)
 			continue;
 
-		spin_lock_irqsave(&zone->lock, flags);
+		guard(spinlock_irqsave)(&zone->lock);
 		for (order = 0; order < NR_PAGE_ORDERS; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 			unsigned long size;
@@ -3544,12 +3543,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 * so this should not fail on zone boundaries.
 			 */
 			WARN_ON_ONCE(ret == -1);
-			if (ret > 0) {
-				spin_unlock_irqrestore(&zone->lock, flags);
+			if (ret > 0)
 				return ret;
-			}
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 
 	return false;
-- 
2.52.0




* [RESEND PATCH v2 4/8] mm: use zone lock guard in set_migratetype_isolate()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
                   ` (2 preceding siblings ...)
  2026-04-08 13:33 ` [RESEND PATCH v2 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave scoped lock guard in set_migratetype_isolate() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup. The scoped variant is used to keep dump_page() outside the
locked section to avoid a lockdep splat.
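For illustration only (not part of the patch): unlike guard(), which
holds the lock until the end of the function, scoped_guard() ties the
guard's lifetime to a single attached block, so statements after the
block run with the lock already dropped. A standalone userspace sketch
with a hypothetical pthread-based analogue:

```c
#include <pthread.h>

struct mutex_guard { pthread_mutex_t *lock; };

static void mutex_guard_release(struct mutex_guard *g)
{
	pthread_mutex_unlock(g->lock);
}

/* One-shot for loop: the guard lives exactly as long as the attached
 * block, so code after the block runs with the lock released. */
#define scoped_guard_mutex(l)						\
	for (struct mutex_guard _g					\
	     __attribute__((cleanup(mutex_guard_release))) =		\
		{ .lock = (pthread_mutex_lock(l), (l)) },		\
	     *_once = &_g; _once; _once = NULL)

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int isolated;

static int mark_isolated(int busy)
{
	scoped_guard_mutex(&lock) {
		if (busy)
			return -1;	/* unlock runs on early return */
		isolated++;		/* lock held here */
	}
	/* Lock is released here: a safe place for slow diagnostics,
	 * analogous to keeping dump_page() outside the locked section. */
	return 0;
}
```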

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_isolation.c | 60 ++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 34 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 9d606052dd80..7a9d631945a3 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -167,48 +167,40 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 {
 	struct zone *zone = page_zone(page);
 	struct page *unmovable;
-	unsigned long flags;
 	unsigned long check_unmovable_start, check_unmovable_end;
 
 	if (PageUnaccepted(page))
 		accept_page(page);
 
-	spin_lock_irqsave(&zone->lock, flags);
-
-	/*
-	 * We assume the caller intended to SET migrate type to isolate.
-	 * If it is already set, then someone else must have raced and
-	 * set it before us.
-	 */
-	if (is_migrate_isolate_page(page)) {
-		spin_unlock_irqrestore(&zone->lock, flags);
-		return -EBUSY;
-	}
-
-	/*
-	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
-	 * We just check MOVABLE pages.
-	 *
-	 * Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
-	 * to avoid redundant checks.
-	 */
-	check_unmovable_start = max(page_to_pfn(page), start_pfn);
-	check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
-				  end_pfn);
-
-	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
-			mode);
-	if (!unmovable) {
-		if (!pageblock_isolate_and_move_free_pages(zone, page)) {
-			spin_unlock_irqrestore(&zone->lock, flags);
+	scoped_guard(spinlock_irqsave, &zone->lock) {
+		/*
+		 * We assume the caller intended to SET migrate type to
+		 * isolate. If it is already set, then someone else must have
+		 * raced and set it before us.
+		 */
+		if (is_migrate_isolate_page(page))
 			return -EBUSY;
+
+		/*
+		 * FIXME: Now, memory hotplug doesn't call shrink_slab() by
+		 * itself. We just check MOVABLE pages.
+		 *
+		 * Pass the intersection of [start_pfn, end_pfn) and the page's
+		 * pageblock to avoid redundant checks.
+		 */
+		check_unmovable_start = max(page_to_pfn(page), start_pfn);
+		check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
+					  end_pfn);
+
+		unmovable = has_unmovable_pages(check_unmovable_start,
+				check_unmovable_end, mode);
+		if (!unmovable) {
+			if (!pageblock_isolate_and_move_free_pages(zone, page))
+				return -EBUSY;
+			zone->nr_isolate_pageblock++;
+			return 0;
 		}
-		zone->nr_isolate_pageblock++;
-		spin_unlock_irqrestore(&zone->lock, flags);
-		return 0;
 	}
-
-	spin_unlock_irqrestore(&zone->lock, flags);
 	if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
 		/*
 		 * printk() with zone->lock held will likely trigger a
-- 
2.52.0




* [RESEND PATCH v2 5/8] mm: use zone lock guard in take_page_off_buddy()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
                   ` (3 preceding siblings ...)
  2026-04-08 13:33 ` [RESEND PATCH v2 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave zone lock guard in take_page_off_buddy() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.

This also allows returning directly from the loop, removing the 'ret'
variable.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b9a423cbd24..4d074707c850 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7509,11 +7509,9 @@ bool take_page_off_buddy(struct page *page)
 {
 	struct zone *zone = page_zone(page);
 	unsigned long pfn = page_to_pfn(page);
-	unsigned long flags;
 	unsigned int order;
-	bool ret = false;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 	for (order = 0; order < NR_PAGE_ORDERS; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
 		int page_order = buddy_order(page_head);
@@ -7528,14 +7526,12 @@ bool take_page_off_buddy(struct page *page)
 			break_down_buddy_pages(zone, page_head, page, 0,
 						page_order, migratetype);
 			SetPageHWPoisonTakenOff(page);
-			ret = true;
-			break;
+			return true;
 		}
 		if (page_count(page_head) > 0)
 			break;
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
-	return ret;
+	return false;
 }
 
 /*
-- 
2.52.0




* [RESEND PATCH v2 6/8] mm: use zone lock guard in put_page_back_buddy()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
                   ` (4 preceding siblings ...)
  2026-04-08 13:33 ` [RESEND PATCH v2 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave zone lock guard in put_page_back_buddy() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4d074707c850..096f5fef0eb5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7540,23 +7540,19 @@ bool take_page_off_buddy(struct page *page)
 bool put_page_back_buddy(struct page *page)
 {
 	struct zone *zone = page_zone(page);
-	unsigned long flags;
-	bool ret = false;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 	if (put_page_testzero(page)) {
 		unsigned long pfn = page_to_pfn(page);
 		int migratetype = get_pfnblock_migratetype(page, pfn);
 
 		ClearPageHWPoisonTakenOff(page);
 		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
-		if (TestClearPageHWPoison(page)) {
-			ret = true;
-		}
+		if (TestClearPageHWPoison(page))
+			return true;
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
 
-	return ret;
+	return false;
 }
 #endif
 
-- 
2.52.0




* [RESEND PATCH v2 7/8] mm: use zone lock guard in free_pcppages_bulk()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
                   ` (5 preceding siblings ...)
  2026-04-08 13:33 ` [RESEND PATCH v2 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 13:33 ` [RESEND PATCH v2 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
  2026-04-08 16:42 ` [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Michal Hocko
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave zone lock guard in free_pcppages_bulk() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 096f5fef0eb5..6a7c548a7406 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1458,7 +1458,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 					struct per_cpu_pages *pcp,
 					int pindex)
 {
-	unsigned long flags;
 	unsigned int order;
 	struct page *page;
 
@@ -1471,7 +1470,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	/* Ensure requested pindex is drained first. */
 	pindex = pindex - 1;
 
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1503,8 +1502,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
-
-	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
 /* Split a multi-block free page into its individual pageblocks. */
-- 
2.52.0




* [RESEND PATCH v2 8/8] mm: use zone lock guard in __offline_isolated_pages()
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
                   ` (6 preceding siblings ...)
  2026-04-08 13:33 ` [RESEND PATCH v2 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
@ 2026-04-08 13:33 ` Dmitry Ilvokhin
  2026-04-08 16:42 ` [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Michal Hocko
  8 siblings, 0 replies; 11+ messages in thread
From: Dmitry Ilvokhin @ 2026-04-08 13:33 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use spinlock_irqsave zone lock guard in __offline_isolated_pages() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6a7c548a7406..bda0282bcb03 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7396,7 +7396,7 @@ void zone_pcp_reset(struct zone *zone)
 unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		unsigned long end_pfn)
 {
-	unsigned long already_offline = 0, flags;
+	unsigned long already_offline = 0;
 	unsigned long pfn = start_pfn;
 	struct page *page;
 	struct zone *zone;
@@ -7404,7 +7404,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
-	spin_lock_irqsave(&zone->lock, flags);
+	guard(spinlock_irqsave)(&zone->lock);
 	while (pfn < end_pfn) {
 		page = pfn_to_page(pfn);
 		/*
@@ -7434,7 +7434,6 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
 
 	return end_pfn - start_pfn - already_offline;
 }
-- 
2.52.0




* Re: [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock
  2026-04-08 13:33 [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
                   ` (7 preceding siblings ...)
  2026-04-08 13:33 ` [RESEND PATCH v2 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
@ 2026-04-08 16:42 ` Michal Hocko
  2026-04-08 20:07   ` Andrew Morton
  8 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2026-04-08 16:42 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Steven Rostedt

On Wed 08-04-26 13:33:15, Dmitry Ilvokhin wrote:
> Resending v2 to get feedback from folks who work with this code, as
> Andrew suggested.
> 
> This series uses spinlock guards for the zone lock across several mm
> functions to replace explicit lock/unlock patterns with automatic
> scope-based cleanup.
> 
> This simplifies the control flow by removing 'flags' variables, goto
> labels, and redundant unlock calls.
> 
> Patches are ordered by decreasing value. The first six patches simplify
> the control flow by removing gotos, multiple unlock paths, or 'ret'
> variables. The last two are simpler lock/unlock pair conversions that
> only remove 'flags' and can be dropped if considered unnecessary churn.
> 
> Based on mm-stable.
> 
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>
> 
> v1 -> v2:
> 
> - Andrew Morton raised concerns about binary size increase in v1.
>   Peter Zijlstra has since fixed the underlying issue in the guards
>   infrastructure in tip [1]. Note: the fix is not yet in mm-stable, so
>   it needs to be applied first to reproduce these results. With that
>   fix, bloat-o-meter on x86 defconfig shows a net decrease of 49 bytes
>   (-0.12%) for page_alloc.o.
> - Rebased on mm-stable, since the patch this series depended on was
>   dropped from mm-new.
> - Converted guard(zone_lock_irqsave)(zone) to
>   guard(spinlock_irqsave)(&zone->lock).
> - Dropped redundant braces in unreserve_highatomic_pageblock()
>   (Steven Rostedt)
> 
> v2: https://lore.kernel.org/all/cover.1774627568.git.d@ilvokhin.com/
> v1: https://lore.kernel.org/all/cover.1772811429.git.d@ilvokhin.com/
> 
> [1]: https://lore.kernel.org/all/20260309164516.GE606826@noisy.programming.kicks-ass.net/
> 
> Dmitry Ilvokhin (8):
>   mm: use zone lock guard in reserve_highatomic_pageblock()
>   mm: use zone lock guard in unset_migratetype_isolate()
>   mm: use zone lock guard in unreserve_highatomic_pageblock()
>   mm: use zone lock guard in set_migratetype_isolate()
>   mm: use zone lock guard in take_page_off_buddy()
>   mm: use zone lock guard in put_page_back_buddy()
>   mm: use zone lock guard in free_pcppages_bulk()
>   mm: use zone lock guard in __offline_isolated_pages()
> 
>  mm/page_alloc.c     | 53 ++++++++++++-----------------------
>  mm/page_isolation.c | 67 +++++++++++++++++++--------------------------
>  2 files changed, 45 insertions(+), 75 deletions(-)

I like the resulting code. For the whole series.
Acked-by: Michal Hocko <mhocko@suse.com>

-- 
Michal Hocko
SUSE Labs



* Re: [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock
  2026-04-08 16:42 ` [RESEND PATCH v2 0/8] mm: use spinlock guards for zone lock Michal Hocko
@ 2026-04-08 20:07   ` Andrew Morton
  0 siblings, 0 replies; 11+ messages in thread
From: Andrew Morton @ 2026-04-08 20:07 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Dmitry Ilvokhin, Vlastimil Babka, Suren Baghdasaryan,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Steven Rostedt

On Wed, 8 Apr 2026 18:42:16 +0200 Michal Hocko <mhocko@suse.com> wrote:

> > Dmitry Ilvokhin (8):
> >   mm: use zone lock guard in reserve_highatomic_pageblock()
> >   mm: use zone lock guard in unset_migratetype_isolate()
> >   mm: use zone lock guard in unreserve_highatomic_pageblock()
> >   mm: use zone lock guard in set_migratetype_isolate()
> >   mm: use zone lock guard in take_page_off_buddy()
> >   mm: use zone lock guard in put_page_back_buddy()
> >   mm: use zone lock guard in free_pcppages_bulk()
> >   mm: use zone lock guard in __offline_isolated_pages()
> > 
> >  mm/page_alloc.c     | 53 ++++++++++++-----------------------
> >  mm/page_isolation.c | 67 +++++++++++++++++++--------------------------
> >  2 files changed, 45 insertions(+), 75 deletions(-)
> 
> I like the resulting code. For the whole series.
> Acked-by: Michal Hocko <mhocko@suse.com>

Thanks.

As I mentioned previously, the outcome here hinges on how the
developers who work on this code feel about using guard().  Most of
them are in hiding at present but that's OK - it's post -rc1 fun.


