public inbox for linux-mm@kvack.org
* [PATCH 0/8] mm: introduce zone lock guards
@ 2026-03-06 16:05 Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
                   ` (8 more replies)
  0 siblings, 9 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

This series defines DEFINE_LOCK_GUARD_1 for zone_lock_irqsave and uses
it across several mm functions to replace explicit lock/unlock patterns
with automatic scope-based cleanup.

This simplifies the control flow by removing 'flags' variables, goto
labels, and redundant unlock calls.

Patches are ordered by decreasing value. The first six patches simplify
the control flow by removing gotos, multiple unlock paths, or 'ret'
variables. The last two are simpler lock/unlock pair conversions that
only remove 'flags' and can be dropped if considered unnecessary churn.

Based on mm-new.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>

Dmitry Ilvokhin (8):
  mm: use zone lock guard in reserve_highatomic_pageblock()
  mm: use zone lock guard in unset_migratetype_isolate()
  mm: use zone lock guard in unreserve_highatomic_pageblock()
  mm: use zone lock guard in set_migratetype_isolate()
  mm: use zone lock guard in take_page_off_buddy()
  mm: use zone lock guard in put_page_back_buddy()
  mm: use zone lock guard in free_pcppages_bulk()
  mm: use zone lock guard in __offline_isolated_pages()

 include/linux/mmzone_lock.h |  9 +++++
 mm/page_alloc.c             | 50 +++++++++------------------
 mm/page_isolation.c         | 67 ++++++++++++++++---------------------
 3 files changed, 53 insertions(+), 73 deletions(-)

-- 
2.47.3



^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 17:53   ` Andrew Morton
  2026-03-06 16:05 ` [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use the newly introduced zone_lock_irqsave lock guard in
reserve_highatomic_pageblock() to replace the explicit lock/unlock and
goto out_unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 include/linux/mmzone_lock.h |  9 +++++++++
 mm/page_alloc.c             | 13 +++++--------
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/include/linux/mmzone_lock.h b/include/linux/mmzone_lock.h
index 6bd8b026029f..fe399a4505ba 100644
--- a/include/linux/mmzone_lock.h
+++ b/include/linux/mmzone_lock.h
@@ -97,4 +97,13 @@ static inline void zone_unlock_irq(struct zone *zone)
 	spin_unlock_irq(&zone->_lock);
 }
 
+DEFINE_LOCK_GUARD_1(zone_lock_irqsave, struct zone,
+		zone_lock_irqsave(_T->lock, _T->flags),
+		zone_unlock_irqrestore(_T->lock, _T->flags),
+		unsigned long flags)
+DECLARE_LOCK_GUARD_1_ATTRS(zone_lock_irqsave,
+		__acquires(_T), __releases(*(struct zone **)_T))
+#define class_zone_lock_irqsave_constructor(_T) \
+	WITH_LOCK_GUARD_1_ATTRS(zone_lock_irqsave, _T)
+
 #endif /* _LINUX_MMZONE_LOCK_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 75ee81445640..260fb003822a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3407,7 +3407,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 					 struct zone *zone)
 {
 	int mt;
-	unsigned long max_managed, flags;
+	unsigned long max_managed;
 
 	/*
 	 * The number reserved as: minimum is 1 pageblock, maximum is
@@ -3421,29 +3421,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 
 	/* Recheck the nr_reserved_highatomic limit under the lock */
 	if (zone->nr_reserved_highatomic >= max_managed)
-		goto out_unlock;
+		return;
 
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (!migratetype_is_mergeable(mt))
-		goto out_unlock;
+		return;
 
 	if (order < pageblock_order) {
 		if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
-			goto out_unlock;
+			return;
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 	} else {
 		change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
 		zone->nr_reserved_highatomic += 1 << order;
 	}
-
-out_unlock:
-	zone_unlock_irqrestore(zone, flags);
 }
 
 /*
-- 
2.47.3




* [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave lock guard in unset_migratetype_isolate() to
replace the explicit lock/unlock and goto pattern with automatic
scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_isolation.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index e8414e9a718a..dc1e18124228 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -224,15 +224,14 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 static void unset_migratetype_isolate(struct page *page)
 {
 	struct zone *zone;
-	unsigned long flags;
 	bool isolated_page = false;
 	unsigned int order;
 	struct page *buddy;
 
 	zone = page_zone(page);
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	if (!is_migrate_isolate_page(page))
-		goto out;
+		return;
 
 	/*
 	 * Because freepage with more than pageblock_order on isolated
@@ -280,8 +279,6 @@ static void unset_migratetype_isolate(struct page *page)
 		__putback_isolated_page(page, order, get_pageblock_migratetype(page));
 	}
 	zone->nr_isolate_pageblock--;
-out:
-	zone_unlock_irqrestore(zone, flags);
 }
 
 static inline struct page *
-- 
2.47.3




* [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:10   ` Steven Rostedt
  2026-03-06 16:05 ` [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave lock guard in unreserve_highatomic_pageblock()
to replace the explicit lock/unlock pattern with automatic scope-based
cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 260fb003822a..2857daf6ebfd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3456,7 +3456,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 						bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
-	unsigned long flags;
 	struct zoneref *z;
 	struct zone *zone;
 	struct page *page;
@@ -3473,7 +3472,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 					pageblock_nr_pages)
 			continue;
 
-		zone_lock_irqsave(zone, flags);
+		guard(zone_lock_irqsave)(zone);
 		for (order = 0; order < NR_PAGE_ORDERS; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 			unsigned long size;
@@ -3521,11 +3520,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			WARN_ON_ONCE(ret == -1);
 			if (ret > 0) {
-				zone_unlock_irqrestore(zone, flags);
 				return ret;
 			}
 		}
-		zone_unlock_irqrestore(zone, flags);
 	}
 
 	return false;
-- 
2.47.3




* [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (2 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave scoped lock guard in set_migratetype_isolate() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup. The scoped variant is used to keep dump_page() outside the
locked section to avoid a lockdep splat.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_isolation.c | 60 ++++++++++++++++++++-------------------------
 1 file changed, 26 insertions(+), 34 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index dc1e18124228..e7f006e8870c 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -168,48 +168,40 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
 {
 	struct zone *zone = page_zone(page);
 	struct page *unmovable;
-	unsigned long flags;
 	unsigned long check_unmovable_start, check_unmovable_end;
 
 	if (PageUnaccepted(page))
 		accept_page(page);
 
-	zone_lock_irqsave(zone, flags);
-
-	/*
-	 * We assume the caller intended to SET migrate type to isolate.
-	 * If it is already set, then someone else must have raced and
-	 * set it before us.
-	 */
-	if (is_migrate_isolate_page(page)) {
-		zone_unlock_irqrestore(zone, flags);
-		return -EBUSY;
-	}
-
-	/*
-	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
-	 * We just check MOVABLE pages.
-	 *
-	 * Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
-	 * to avoid redundant checks.
-	 */
-	check_unmovable_start = max(page_to_pfn(page), start_pfn);
-	check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
-				  end_pfn);
-
-	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
-			mode);
-	if (!unmovable) {
-		if (!pageblock_isolate_and_move_free_pages(zone, page)) {
-			zone_unlock_irqrestore(zone, flags);
+	scoped_guard(zone_lock_irqsave, zone) {
+		/*
+		 * We assume the caller intended to SET migrate type to
+		 * isolate. If it is already set, then someone else must have
+		 * raced and set it before us.
+		 */
+		if (is_migrate_isolate_page(page))
 			return -EBUSY;
+
+		/*
+		 * FIXME: Now, memory hotplug doesn't call shrink_slab() by
+		 * itself. We just check MOVABLE pages.
+		 *
+		 * Pass the intersection of [start_pfn, end_pfn) and the page's
+		 * pageblock to avoid redundant checks.
+		 */
+		check_unmovable_start = max(page_to_pfn(page), start_pfn);
+		check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
+					  end_pfn);
+
+		unmovable = has_unmovable_pages(check_unmovable_start,
+				check_unmovable_end, mode);
+		if (!unmovable) {
+			if (!pageblock_isolate_and_move_free_pages(zone, page))
+				return -EBUSY;
+			zone->nr_isolate_pageblock++;
+			return 0;
 		}
-		zone->nr_isolate_pageblock++;
-		zone_unlock_irqrestore(zone, flags);
-		return 0;
 	}
-
-	zone_unlock_irqrestore(zone, flags);
 	if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
 		/*
 		 * printk() with zone lock held will likely trigger a
-- 
2.47.3




* [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (3 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave lock guard in take_page_off_buddy() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.

This also allows returning directly from the loop, removing the 'ret'
variable.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2857daf6ebfd..92fa922911d5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7493,11 +7493,9 @@ bool take_page_off_buddy(struct page *page)
 {
 	struct zone *zone = page_zone(page);
 	unsigned long pfn = page_to_pfn(page);
-	unsigned long flags;
 	unsigned int order;
-	bool ret = false;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	for (order = 0; order < NR_PAGE_ORDERS; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
 		int page_order = buddy_order(page_head);
@@ -7512,14 +7510,12 @@ bool take_page_off_buddy(struct page *page)
 			break_down_buddy_pages(zone, page_head, page, 0,
 						page_order, migratetype);
 			SetPageHWPoisonTakenOff(page);
-			ret = true;
-			break;
+			return true;
 		}
 		if (page_count(page_head) > 0)
 			break;
 	}
-	zone_unlock_irqrestore(zone, flags);
-	return ret;
+	return false;
 }
 
 /*
-- 
2.47.3




* [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (4 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave lock guard in put_page_back_buddy() to replace the
explicit lock/unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 92fa922911d5..28b06baa4075 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7524,23 +7524,19 @@ bool take_page_off_buddy(struct page *page)
 bool put_page_back_buddy(struct page *page)
 {
 	struct zone *zone = page_zone(page);
-	unsigned long flags;
-	bool ret = false;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	if (put_page_testzero(page)) {
 		unsigned long pfn = page_to_pfn(page);
 		int migratetype = get_pfnblock_migratetype(page, pfn);
 
 		ClearPageHWPoisonTakenOff(page);
 		__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
-		if (TestClearPageHWPoison(page)) {
-			ret = true;
-		}
+		if (TestClearPageHWPoison(page))
+			return true;
 	}
-	zone_unlock_irqrestore(zone, flags);
 
-	return ret;
+	return false;
 }
 #endif
 
-- 
2.47.3




* [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (5 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:05 ` [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
  2026-03-06 16:15 ` [PATCH 0/8] mm: introduce zone lock guards Steven Rostedt
  8 siblings, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave lock guard in free_pcppages_bulk() to replace the
explicit lock/unlock pattern with automatic scope-based cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 28b06baa4075..2759e02340fa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1455,7 +1455,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 					struct per_cpu_pages *pcp,
 					int pindex)
 {
-	unsigned long flags;
 	unsigned int order;
 	struct page *page;
 
@@ -1468,7 +1467,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	/* Ensure requested pindex is drained first. */
 	pindex = pindex - 1;
 
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1500,8 +1499,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
-
-	zone_unlock_irqrestore(zone, flags);
 }
 
 /* Split a multi-block free page into its individual pageblocks. */
-- 
2.47.3




* [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages()
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (6 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
@ 2026-03-06 16:05 ` Dmitry Ilvokhin
  2026-03-06 16:15 ` [PATCH 0/8] mm: introduce zone lock guards Steven Rostedt
  8 siblings, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-06 16:05 UTC (permalink / raw)
  To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan
  Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
	Steven Rostedt

Use zone_lock_irqsave lock guard in __offline_isolated_pages() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.

Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
 mm/page_alloc.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2759e02340fa..6f7420e4431f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7380,7 +7380,7 @@ void zone_pcp_reset(struct zone *zone)
 unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		unsigned long end_pfn)
 {
-	unsigned long already_offline = 0, flags;
+	unsigned long already_offline = 0;
 	unsigned long pfn = start_pfn;
 	struct page *page;
 	struct zone *zone;
@@ -7388,7 +7388,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
-	zone_lock_irqsave(zone, flags);
+	guard(zone_lock_irqsave)(zone);
 	while (pfn < end_pfn) {
 		page = pfn_to_page(pfn);
 		/*
@@ -7418,7 +7418,6 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
 		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
 		pfn += (1 << order);
 	}
-	zone_unlock_irqrestore(zone, flags);
 
 	return end_pfn - start_pfn - already_offline;
 }
-- 
2.47.3




* Re: [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
  2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 16:10   ` Steven Rostedt
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Rostedt @ 2026-03-06 16:10 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Fri,  6 Mar 2026 16:05:37 +0000
Dmitry Ilvokhin <d@ilvokhin.com> wrote:

>  			 */
>  			WARN_ON_ONCE(ret == -1);
>  			if (ret > 0) {
> -				zone_unlock_irqrestore(zone, flags);
>  				return ret;
>  			}

You can lose the braces here too:

			if (ret > 0)
				return ret;

-- Steve

>  		}
> -		zone_unlock_irqrestore(zone, flags);
>  	}
>  



* Re: [PATCH 0/8] mm: introduce zone lock guards
  2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
                   ` (7 preceding siblings ...)
  2026-03-06 16:05 ` [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
@ 2026-03-06 16:15 ` Steven Rostedt
  8 siblings, 0 replies; 24+ messages in thread
From: Steven Rostedt @ 2026-03-06 16:15 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Fri,  6 Mar 2026 16:05:34 +0000
Dmitry Ilvokhin <d@ilvokhin.com> wrote:

> This series defines DEFINE_LOCK_GUARD_1 for zone_lock_irqsave and uses
> it across several mm functions to replace explicit lock/unlock patterns
> with automatic scope-based cleanup.
> 
> This simplifies the control flow by removing 'flags' variables, goto
> labels, and redundant unlock calls.
> 
> Patches are ordered by decreasing value. The first six patches simplify
> the control flow by removing gotos, multiple unlock paths, or 'ret'
> variables. The last two are simpler lock/unlock pair conversions that
> only remove 'flags' and can be dropped if considered unnecessary churn.
> 
> Based on mm-new.
> 
> Suggested-by: Steven Rostedt <rostedt@goodmis.org>

Thanks, the code looks much cleaner.

Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>

-- Steve



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
@ 2026-03-06 17:53   ` Andrew Morton
  2026-03-06 18:00     ` Steven Rostedt
  2026-03-26 18:04     ` Dmitry Ilvokhin
  0 siblings, 2 replies; 24+ messages in thread
From: Andrew Morton @ 2026-03-06 17:53 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Steven Rostedt

On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:

> Use the newly introduced zone_lock_irqsave lock guard in
> reserve_highatomic_pageblock() to replace the explicit lock/unlock and
> goto out_unlock pattern with automatic scope-based cleanup.
> 
> ...
>
> -	zone_lock_irqsave(zone, flags);
> +	guard(zone_lock_irqsave)(zone);

guard() is cute, but this patch adds a little overhead - defconfig
page_alloc.o text increases by 32 bytes, presumably all in
reserve_highatomic_pageblock().  More instructions, larger cache
footprint.

So we're adding a little overhead to every user's Linux machine for all
time.  In return for which the developers get a little convenience and
maintainability.

Is it worth it?



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 17:53   ` Andrew Morton
@ 2026-03-06 18:00     ` Steven Rostedt
  2026-03-06 18:24       ` Vlastimil Babka
  2026-03-26 18:04     ` Dmitry Ilvokhin
  1 sibling, 1 reply; 24+ messages in thread
From: Steven Rostedt @ 2026-03-06 18:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team,
	Peter Zijlstra


[ Adding Peter ]

On Fri, 6 Mar 2026 09:53:36 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
> 
> > Use the newly introduced zone_lock_irqsave lock guard in
> > reserve_highatomic_pageblock() to replace the explicit lock/unlock and
> > goto out_unlock pattern with automatic scope-based cleanup.
> > 
> > ...
> >
> > -	zone_lock_irqsave(zone, flags);
> > +	guard(zone_lock_irqsave)(zone);  
> 
> guard() is cute, but this patch adds a little overhead - defconfig
> page_alloc.o text increases by 32 bytes, presumably all in
> reserve_highatomic_pageblock().  More instructions, larger cache
> footprint.
> 
> So we're adding a little overhead to every user's Linux machine for all
> time.  In return for which the developers get a little convenience and
> maintainability.

I think maintainability is of importance. Is there any measurable slowdown?
Or are we only worried about the text size increase?

> 
> Is it worth it?

This is being done all over the kernel. Perhaps we should look at ways to
make the generic infrastructure more performant?

-- Steve



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:00     ` Steven Rostedt
@ 2026-03-06 18:24       ` Vlastimil Babka
  2026-03-06 18:33         ` Andrew Morton
  2026-03-07 13:16         ` Peter Zijlstra
  0 siblings, 2 replies; 24+ messages in thread
From: Vlastimil Babka @ 2026-03-06 18:24 UTC (permalink / raw)
  To: Steven Rostedt, Andrew Morton
  Cc: Dmitry Ilvokhin, David Hildenbrand, Lorenzo Stoakes,
	Liam R. Howlett, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Peter Zijlstra

On 3/6/26 19:00, Steven Rostedt wrote:
> 
> [ Adding Peter ]
> 
> On Fri, 6 Mar 2026 09:53:36 -0800
> Andrew Morton <akpm@linux-foundation.org> wrote:
> 
>> On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
>> 
>> > Use the newly introduced zone_lock_irqsave lock guard in
>> > reserve_highatomic_pageblock() to replace the explicit lock/unlock and
>> > goto out_unlock pattern with automatic scope-based cleanup.
>> > 
>> > ...
>> >
>> > -	zone_lock_irqsave(zone, flags);
>> > +	guard(zone_lock_irqsave)(zone);  
>> 
>> guard() is cute, but this patch adds a little overhead - defconfig
>> page_alloc.o text increases by 32 bytes, presumably all in
>> reserve_highatomic_pageblock().  More instructions, larger cache
>> footprint.

I get this:

Function                                     old     new   delta
get_page_from_freelist                      6389    6452     +63

>> So we're adding a little overhead to every user's Linux machine for all
>> time.  In return for which the developers get a little convenience and
>> maintainability.
> 
> I think maintainability is of importance. Is there any measurable slowdown?
> Or are we only worried about the text size increase?
> 
>> 
>> Is it worth it?
> 
> This is being done all over the kernel. Perhaps we should look at ways to
> make the generic infrastructure more performant?

Yeah I don't think the guard construct in this case should be doing anything
here that wouldn't allow the compiler to compile to exactly the same result
as before? Either there's some problem with the infra, or we're just victims
of compiler heuristics. In both cases imho worth looking into rather than
rejecting the construct.

> -- Steve




* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:24       ` Vlastimil Babka
@ 2026-03-06 18:33         ` Andrew Morton
  2026-03-06 18:46           ` Steven Rostedt
  2026-03-07 13:16         ` Peter Zijlstra
  1 sibling, 1 reply; 24+ messages in thread
From: Andrew Morton @ 2026-03-06 18:33 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Steven Rostedt, Dmitry Ilvokhin, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team,
	Peter Zijlstra

On Fri, 6 Mar 2026 19:24:56 +0100 Vlastimil Babka <vbabka@kernel.org> wrote:

> >> 
> >> Is it worth it?
> > 
> > This is being done all over the kernel. Perhaps we should look at ways to
> > make the generic infrastructure more performant?
> 
> Yeah I don't think the guard construct in this case should be doing anything
> here that wouldn't allow the compiler to compile to the exactly same result
> as before? Either there's some problem with the infra, or we're just victim
> of compiler heuristics.

Sure, it'd be good to figure this out.

> In both cases imho worth looking into rather than
> rejecting the construct.

I'm not enjoying the idea of penalizing billions of machines all of the
time in order to make life a little easier for the developers.  Seems
like a poor tradeoff.



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:33         ` Andrew Morton
@ 2026-03-06 18:46           ` Steven Rostedt
  0 siblings, 0 replies; 24+ messages in thread
From: Steven Rostedt @ 2026-03-06 18:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Dmitry Ilvokhin, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team,
	Peter Zijlstra

On Fri, 6 Mar 2026 10:33:07 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> I'm not enjoying the idea of penalizing billions of machines all of the
> time in order to make life a little easier for the developers.  Seems
> like a poor tradeoff.

But if there's a bug because the code is less maintainable, that too will
affect billions of machines! To me, that balances the tradeoff.

-- Steve



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 18:24       ` Vlastimil Babka
  2026-03-06 18:33         ` Andrew Morton
@ 2026-03-07 13:16         ` Peter Zijlstra
  2026-03-07 14:09           ` Dmitry Ilvokhin
  1 sibling, 1 reply; 24+ messages in thread
From: Peter Zijlstra @ 2026-03-07 13:16 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Steven Rostedt, Andrew Morton, Dmitry Ilvokhin, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Fri, Mar 06, 2026 at 07:24:56PM +0100, Vlastimil Babka wrote:

> Yeah I don't think the guard construct in this case should be doing anything
> here that wouldn't allow the compiler to compile to exactly the same result
> as before? Either there's some problem with the infra, or we're just victims
> of compiler heuristics. In both cases imho worth looking into rather than
> rejecting the construct.

I'd love to look into it, but I can't seem to apply these patches to
anything.

By virtue of not actually having the patches, I had to resort to b4, and
I think the incantation is something like:

  b4 shazam cover.1772811429.git.d@ilvokhin.com

but it doesn't want to apply to anything I have at hand. Specifically, I
tried Linus' tree and tip, which is most of what I have at hand.



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-07 13:16         ` Peter Zijlstra
@ 2026-03-07 14:09           ` Dmitry Ilvokhin
  2026-03-09 16:45             ` Peter Zijlstra
  0 siblings, 1 reply; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-07 14:09 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Vlastimil Babka, Steven Rostedt, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Sat, Mar 07, 2026 at 02:16:41PM +0100, Peter Zijlstra wrote:
> On Fri, Mar 06, 2026 at 07:24:56PM +0100, Vlastimil Babka wrote:
> 
> > Yeah I don't think the guard construct in this case should be doing anything
> > here that wouldn't allow the compiler to compile to exactly the same result
> > as before? Either there's some problem with the infra, or we're just victims
> > of compiler heuristics. In both cases imho worth looking into rather than
> > rejecting the construct.
> 
> I'd love to look into it, but I can't seem to apply these patches to
> anything.
> 
> By virtue of not actually having the patches, I had to resort to b4, and
> I think the incantation is something like:
> 
>   b4 shazam cover.1772811429.git.d@ilvokhin.com
> 
> but it doesn't want to apply to anything I have at hand. Specifically, I
> tried Linus' tree and tip, which is most of what I have at hand.

Thanks for taking a look, Peter.

This series is based on mm-new and depends on my earlier patchset:

https://lore.kernel.org/all/cover.1772206930.git.d@ilvokhin.com/

Those patches are currently only in Andrew's mm-new tree, so this series
won't apply cleanly on Linus' tree or tip.

It should apply on top of mm-new from:

https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-07 14:09           ` Dmitry Ilvokhin
@ 2026-03-09 16:45             ` Peter Zijlstra
  2026-03-10 12:57               ` Dmitry Ilvokhin
  2026-03-12 23:40               ` Dan Williams
  0 siblings, 2 replies; 24+ messages in thread
From: Peter Zijlstra @ 2026-03-09 16:45 UTC (permalink / raw)
  To: Dmitry Ilvokhin, dan.j.williams
  Cc: Vlastimil Babka, Steven Rostedt, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Sat, Mar 07, 2026 at 02:09:41PM +0000, Dmitry Ilvokhin wrote:
> On Sat, Mar 07, 2026 at 02:16:41PM +0100, Peter Zijlstra wrote:
> > On Fri, Mar 06, 2026 at 07:24:56PM +0100, Vlastimil Babka wrote:
> > 
> > > Yeah I don't think the guard construct in this case should be doing anything
> > > here that wouldn't allow the compiler to compile to exactly the same result
> > > as before? Either there's some problem with the infra, or we're just victims
> > > of compiler heuristics. In both cases imho worth looking into rather than
> > > rejecting the construct.
> > 
> > I'd love to look into it, but I can't seem to apply these patches to
> > anything.
> > 
> > By virtue of not actually having the patches, I had to resort to b4, and
> > I think the incantation is something like:
> > 
> >   b4 shazam cover.1772811429.git.d@ilvokhin.com
> > 
> > but it doesn't want to apply to anything I have at hand. Specifically, I
> > tried Linus' tree and tip, which is most of what I have at hand.
> 
> Thanks for taking a look, Peter.
> 
> This series is based on mm-new and depends on my earlier patchset:
> 
> https://lore.kernel.org/all/cover.1772206930.git.d@ilvokhin.com/
> 
> Those patches are currently only in Andrew's mm-new tree, so this series
> won't apply cleanly on Linus' tree or tip.
> 
> It should apply on top of mm-new from:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

OK, so the big problem is __GUARD_IS_ERR(), and that came up before, but
while Linus told me how to fix it, he didn't actually like it very much:

  https://lore.kernel.org/all/20250513085001.GC25891@noisy.programming.kicks-ass.net/

However it does help with this:

$ ./scripts/bloat-o-meter defconfig-build/mm/page_alloc-pre-gcc-16.o defconfig-build/mm/page_alloc-post-gcc-16.o | grep -v __UNIQUE
add/remove: 24/24 grow/shrink: 3/2 up/down: 296/-224 (72)
Function                                     old     new   delta
get_page_from_freelist                      6158    6198     +40
free_pcppages_bulk                           678     714     +36
unreserve_highatomic_pageblock               708     736     +28
make_alloc_exact                             280     264     -16
alloc_pages_bulk_noprof                     1415    1399     -16
Total: Before=45299, After=45371, chg +0.16%

$ ./scripts/bloat-o-meter defconfig-build/mm/page_alloc-pre-gcc-16.o defconfig-build/mm/page_alloc.o | grep -v __UNIQUE
add/remove: 24/24 grow/shrink: 3/15 up/down: 277/-363 (-86)
Function                                     old     new   delta
unreserve_highatomic_pageblock               708     757     +49
free_pcppages_bulk                           678     707     +29
get_page_from_freelist                      6158    6165      +7
try_to_claim_block                          1729    1726      -3
setup_per_zone_wmarks                        656     653      -3
free_pages_prepare                           924     921      -3
calculate_totalreserve_pages                 282     279      -3
alloc_frozen_pages_nolock_noprof             622     619      -3
__free_pages_prepare                         924     921      -3
__free_pages_ok                             1197    1194      -3
__free_one_page                             1330    1327      -3
__free_frozen_pages                         1303    1300      -3
__rmqueue_pcplist                           2786    2777      -9
free_unref_folios                           1905    1894     -11
setup_per_zone_lowmem_reserve                388     374     -14
make_alloc_exact                             280     264     -16
__alloc_frozen_pages_noprof                 5411    5368     -43
nr_free_zone_pages                           189     138     -51
Total: Before=45299, After=45213, chg -0.19%



However, looking at things again, I think we can get rid of that
unconditional __GUARD_IS_ERR(), something like the below, Dan?

This then gives:

$ ./scripts/bloat-o-meter defconfig-build/mm/page_alloc-pre-gcc-16.o defconfig-build/mm/page_alloc.o | grep -v __UNIQUE
add/remove: 24/24 grow/shrink: 1/16 up/down: 213/-486 (-273)
Function                                     old     new   delta
free_pcppages_bulk                           678     699     +21
try_to_claim_block                          1729    1723      -6
setup_per_zone_wmarks                        656     650      -6
free_pages_prepare                           924     918      -6
calculate_totalreserve_pages                 282     276      -6
alloc_frozen_pages_nolock_noprof             622     616      -6
__free_pages_prepare                         924     918      -6
__free_pages_ok                             1197    1191      -6
__free_one_page                             1330    1324      -6
__free_frozen_pages                         1303    1297      -6
free_pages_exact                             199     183     -16
setup_per_zone_lowmem_reserve                388     371     -17
free_unref_folios                           1905    1888     -17
__rmqueue_pcplist                           2786    2768     -18
nr_free_zone_pages                           189     138     -51
__alloc_frozen_pages_noprof                 5411    5359     -52
get_page_from_freelist                      6158    6089     -69
Total: Before=45299, After=45026, chg -0.60%


Anyway, if you all care about the size of things -- those tracepoints
consume *WAAY* more bytes than any of this.


---
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -286,15 +286,18 @@ static __always_inline _type class_##_na
 	__no_context_analysis						\
 { _type t = _init; return t; }
 
-#define EXTEND_CLASS(_name, ext, _init, _init_args...)			\
-typedef lock_##_name##_t lock_##_name##ext##_t;			\
+#define EXTEND_CLASS_COND(_name, ext, _cond, _init, _init_args...)	\
+typedef lock_##_name##_t lock_##_name##ext##_t;				\
 typedef class_##_name##_t class_##_name##ext##_t;			\
-static __always_inline void class_##_name##ext##_destructor(class_##_name##_t *p) \
-{ class_##_name##_destructor(p); }					\
+static __always_inline void class_##_name##ext##_destructor(class_##_name##_t *_T) \
+{ if (_cond) return; class_##_name##_destructor(_T); }			\
 static __always_inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
 	__no_context_analysis \
 { class_##_name##_t t = _init; return t; }
 
+#define EXTEND_CLASS(_name, ext, _init, _init_args...)			\
+	EXTEND_CLASS_COND(_name, ext, 0, _init, _init_args)
+
 #define CLASS(_name, var)						\
 	class_##_name##_t var __cleanup(class_##_name##_destructor) =	\
 		class_##_name##_constructor
@@ -394,12 +397,12 @@ static __maybe_unused const bool class_#
 	__DEFINE_GUARD_LOCK_PTR(_name, _T)
 
 #define DEFINE_GUARD(_name, _type, _lock, _unlock) \
-	DEFINE_CLASS(_name, _type, if (!__GUARD_IS_ERR(_T)) { _unlock; }, ({ _lock; _T; }), _type _T); \
+	DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \
 	DEFINE_CLASS_IS_GUARD(_name)
 
 #define DEFINE_GUARD_COND_4(_name, _ext, _lock, _cond) \
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
-	EXTEND_CLASS(_name, _ext, \
+	EXTEND_CLASS_COND(_name, _ext, __GUARD_IS_ERR(*_T), \
 		     ({ void *_t = _T; int _RET = (_lock); if (_T && !(_cond)) _t = ERR_PTR(_RET); _t; }), \
 		     class_##_name##_t _T) \
 	static __always_inline void * class_##_name##_ext##_lock_ptr(class_##_name##_t *_T) \
@@ -488,7 +491,7 @@ typedef struct {							\
 static __always_inline void class_##_name##_destructor(class_##_name##_t *_T) \
 	__no_context_analysis						\
 {									\
-	if (!__GUARD_IS_ERR(_T->lock)) { _unlock; }			\
+	if (_T->lock) { _unlock; }					\
 }									\
 									\
 __DEFINE_GUARD_LOCK_PTR(_name, &_T->lock)
@@ -565,7 +568,7 @@ __DEFINE_LOCK_GUARD_0(_name, _lock)
 
 #define DEFINE_LOCK_GUARD_1_COND_4(_name, _ext, _lock, _cond)		\
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true);		\
-	EXTEND_CLASS(_name, _ext,					\
+	EXTEND_CLASS_COND(_name, _ext, __GUARD_IS_ERR(_T->lock),	\
 		     ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
 		        int _RET = (_lock);                             \
 		        if (_T->lock && !(_cond)) _T->lock = ERR_PTR(_RET);\



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-09 16:45             ` Peter Zijlstra
@ 2026-03-10 12:57               ` Dmitry Ilvokhin
  2026-03-12 23:40               ` Dan Williams
  1 sibling, 0 replies; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-10 12:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: dan.j.williams, Vlastimil Babka, Steven Rostedt, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Mon, Mar 09, 2026 at 05:45:16PM +0100, Peter Zijlstra wrote:
> On Sat, Mar 07, 2026 at 02:09:41PM +0000, Dmitry Ilvokhin wrote:
> > On Sat, Mar 07, 2026 at 02:16:41PM +0100, Peter Zijlstra wrote:
> > > On Fri, Mar 06, 2026 at 07:24:56PM +0100, Vlastimil Babka wrote:
> > > 
> > > > Yeah I don't think the guard construct in this case should be doing anything
> > > > here that wouldn't allow the compiler to compile to exactly the same result
> > > > as before? Either there's some problem with the infra, or we're just victims
> > > > of compiler heuristics. In both cases imho worth looking into rather than
> > > > rejecting the construct.
> > > 
> > > I'd love to look into it, but I can't seem to apply these patches to
> > > anything.
> > > 
> > > By virtue of not actually having the patches, I had to resort to b4, and
> > > I think the incantation is something like:
> > > 
> > >   b4 shazam cover.1772811429.git.d@ilvokhin.com
> > > 
> > > but it doesn't want to apply to anything I have at hand. Specifically, I
> > > tried Linus' tree and tip, which is most of what I have at hand.
> > 
> > Thanks for taking a look, Peter.
> > 
> > This series is based on mm-new and depends on my earlier patchset:
> > 
> > https://lore.kernel.org/all/cover.1772206930.git.d@ilvokhin.com/
> > 
> > Those patches are currently only in Andrew's mm-new tree, so this series
> > won't apply cleanly on Linus' tree or tip.
> > 
> > It should apply on top of mm-new from:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
> 
> OK, so the big problem is __GUARD_IS_ERR(), and that came up before, but
> while Linus told me how to fix it, he didn't actually like it very much:
> 
>   https://lore.kernel.org/all/20250513085001.GC25891@noisy.programming.kicks-ass.net/

Thanks for taking a look and digging into this.

[...]

> Anyway, if you all care about the size of things -- those tracepoints
> consume *WAAY* more bytes than any of this.

That's a fair point, but as I understand Andrew's main concern, the
guard() usage becomes part of the code unconditionally, with no way to
disable it, whereas tracepoints can be compiled out. Any overhead
introduced by guards is therefore carried by all kernel builds.

Given that, improvements to the guard infrastructure itself seem worth
exploring regardless of whether this particular patchset ends up going
in. If the overhead can be reduced or eliminated in the common case, it
should make the trade-off much easier.

Thanks again for investigating this and suggesting a possible approach.



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-09 16:45             ` Peter Zijlstra
  2026-03-10 12:57               ` Dmitry Ilvokhin
@ 2026-03-12 23:40               ` Dan Williams
  2026-03-13  8:36                 ` Peter Zijlstra
  1 sibling, 1 reply; 24+ messages in thread
From: Dan Williams @ 2026-03-12 23:40 UTC (permalink / raw)
  To: Peter Zijlstra, Dmitry Ilvokhin, dan.j.williams
  Cc: Vlastimil Babka, Steven Rostedt, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

Peter Zijlstra wrote:
[..]
> However, looking at things again, I think we can get rid of that
> unconditional __GUARD_IS_ERR(), something like the below, Dan?

I think it makes sense: do not make everyone pay the cost of
__GUARD_IS_ERR(), at least until a better __GUARD_IS_ERR() comes along.

I gave the below a run through the CXL subsystem tests which uses
conditional guards quite a bit. Worked fine, and looks good to me.

So feel free to add a "tested by" from me. Not putting the actual tag
here so that b4 does not slurp a tag for the wrong patchset.

> ---
> --- a/include/linux/cleanup.h
> +++ b/include/linux/cleanup.h
[..]



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-12 23:40               ` Dan Williams
@ 2026-03-13  8:36                 ` Peter Zijlstra
  0 siblings, 0 replies; 24+ messages in thread
From: Peter Zijlstra @ 2026-03-13  8:36 UTC (permalink / raw)
  To: Dan Williams
  Cc: Dmitry Ilvokhin, Vlastimil Babka, Steven Rostedt, Andrew Morton,
	David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Brendan Jackman,
	Johannes Weiner, Zi Yan, linux-mm, linux-kernel, kernel-team

On Thu, Mar 12, 2026 at 04:40:45PM -0700, Dan Williams wrote:
> Peter Zijlstra wrote:
> [..]
> > However, looking at things again, I think we can get rid of that
> > unconditional __GUARD_IS_ERR(), something like the below, Dan?
> 
> I think it makes sense, do not make everyone pay the cost of
> __GUARD_IS_ERR() at least until a better __GUARD_IS_ERR() comes along.
> 
> I gave the below a run through the CXL subsystem tests which uses
> conditional guards quite a bit. Worked fine, and looks good to me.
> 
> So feel free to add a "tested by" from me. Not putting the actual tag
> here so that b4 does not slurp a tag for the wrong patchset.

Excellent, thanks for having a look. Let me go write it up.



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-06 17:53   ` Andrew Morton
  2026-03-06 18:00     ` Steven Rostedt
@ 2026-03-26 18:04     ` Dmitry Ilvokhin
  2026-03-26 18:51       ` Andrew Morton
  1 sibling, 1 reply; 24+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-26 18:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Steven Rostedt

On Fri, Mar 06, 2026 at 09:53:36AM -0800, Andrew Morton wrote:
> On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
> 
> > Use the newly introduced zone_lock_irqsave lock guard in
> > reserve_highatomic_pageblock() to replace the explicit lock/unlock and
> > goto out_unlock pattern with automatic scope-based cleanup.
> > 
> > ...
> >
> > -	zone_lock_irqsave(zone, flags);
> > +	guard(zone_lock_irqsave)(zone);
> 
> guard() is cute, but this patch adds a little overhead - defconfig
> page_alloc.o text increases by 32 bytes, presumably all in
> reserve_highatomic_pageblock().  More instructions, larger cache
> footprint.
> 
> So we're adding a little overhead to every user's Linux machine for all
> time.  In return for which the developers get a little convenience and
> maintainability.
> 
> Is it worth it?

Hi Andrew,

Before respinning this series, I wanted to check if it's worth pursuing.

At the time you noted the text size increase and questioned whether the
trade-off makes sense. Since then, the guard infrastructure was fixed by
Peter, so the code generation situation has improved.

The main benefit of the series is still simplifying control flow in
these functions (removing multiple unlock paths, gotos, etc.).
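[Editorial note: a minimal userspace sketch of the kind of simplification
meant here. The function bodies and lock helpers below are hypothetical
stand-ins, not the real zone lock API or reserve_highatomic_pageblock().]

```c
/*
 * Editorial sketch (not the real kernel code): the before/after shape
 * this series proposes, modeled in userspace with hypothetical lock
 * helpers.
 */
static int lock_calls, unlock_calls;

static void zone_lock(void)   { lock_calls++; }
static void zone_unlock(void) { unlock_calls++; }

/* Before: every early exit must jump to the shared unlock label. */
static int reserve_before(int available)
{
	int ret = 0;

	zone_lock();
	if (!available)
		goto out_unlock;
	ret = 1;
out_unlock:
	zone_unlock();
	return ret;
}

/* After: a cleanup-based guard unlocks on every return path, so the
 * label, the 'ret' variable, and the explicit unlock all disappear. */
static void guard_exit(int *unused) { (void)unused; zone_unlock(); }

static int reserve_after(int available)
{
	int g __attribute__((cleanup(guard_exit))) = (zone_lock(), 0);

	(void)g;
	if (!available)
		return 0;
	return 1;
}
```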

Would you be open to this direction if the overhead is negligible, or
would you prefer to avoid this kind of transformation regardless?

I can also limit the series to only the more complex cases if that
helps.



* Re: [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
  2026-03-26 18:04     ` Dmitry Ilvokhin
@ 2026-03-26 18:51       ` Andrew Morton
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Morton @ 2026-03-26 18:51 UTC (permalink / raw)
  To: Dmitry Ilvokhin
  Cc: David Hildenbrand, Lorenzo Stoakes, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
	kernel-team, Steven Rostedt

On Thu, 26 Mar 2026 18:04:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:

> On Fri, Mar 06, 2026 at 09:53:36AM -0800, Andrew Morton wrote:
> > On Fri,  6 Mar 2026 16:05:35 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
> > 
> > > Use the newly introduced zone_lock_irqsave lock guard in
> > > reserve_highatomic_pageblock() to replace the explicit lock/unlock and
> > > goto out_unlock pattern with automatic scope-based cleanup.
> > > 
> > > ...
> > >
> > > -	zone_lock_irqsave(zone, flags);
> > > +	guard(zone_lock_irqsave)(zone);
> > 
> > guard() is cute, but this patch adds a little overhead - defconfig
> > page_alloc.o text increases by 32 bytes, presumably all in
> > reserve_highatomic_pageblock().  More instructions, larger cache
> > footprint.
> > 
> > So we're adding a little overhead to every user's Linux machine for all
> > time.  In return for which the developers get a little convenience and
> > maintainability.
> > 
> > Is it worth it?
> 
> Hi Andrew,
> 
> Before respinning this series, I wanted to check if it's worth pursuing.

Probably.  Much depends on the views of the people who regularly work
on this code.  Do they like guard(), or do they prefer the current
explicit open-coded locking?

> At the time you noted the text size increase and questioned whether the
> trade-off makes sense. Since then, the guard infrastructure was fixed by
> Peter, so the code generation situation has improved.

Great.

> The main benefit of the series is still simplifying control flow in
> these functions (removing multiple unlock paths, gotos, etc.).
> 
> Would you be open to this direction if the overhead is negligible, or
> would you prefer to avoid this kind of transformation regardless?
> 
> I can also limit the series to only the more complex cases if that
> helps.

Gee.  I think it would be helpful to prepare a respin which reflects
your current thinking, and see what others think.

Please understand that I'm resisting adding new material during this
cycle
(https://lkml.kernel.org/r/20260323202941.08ddf2b0411501cae801ab4c@linux-foundation.org)
so you'd best be targeting 7.1-rc1 at the earliest.

But sending out a new version during this cycle for people to consider
would be a good step.




end of thread, other threads:[~2026-03-26 18:51 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-06 16:05 [PATCH 0/8] mm: introduce zone lock guards Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 1/8] mm: use zone lock guard in reserve_highatomic_pageblock() Dmitry Ilvokhin
2026-03-06 17:53   ` Andrew Morton
2026-03-06 18:00     ` Steven Rostedt
2026-03-06 18:24       ` Vlastimil Babka
2026-03-06 18:33         ` Andrew Morton
2026-03-06 18:46           ` Steven Rostedt
2026-03-07 13:16         ` Peter Zijlstra
2026-03-07 14:09           ` Dmitry Ilvokhin
2026-03-09 16:45             ` Peter Zijlstra
2026-03-10 12:57               ` Dmitry Ilvokhin
2026-03-12 23:40               ` Dan Williams
2026-03-13  8:36                 ` Peter Zijlstra
2026-03-26 18:04     ` Dmitry Ilvokhin
2026-03-26 18:51       ` Andrew Morton
2026-03-06 16:05 ` [PATCH 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock() Dmitry Ilvokhin
2026-03-06 16:10   ` Steven Rostedt
2026-03-06 16:05 ` [PATCH 4/8] mm: use zone lock guard in set_migratetype_isolate() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 5/8] mm: use zone lock guard in take_page_off_buddy() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 6/8] mm: use zone lock guard in put_page_back_buddy() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 7/8] mm: use zone lock guard in free_pcppages_bulk() Dmitry Ilvokhin
2026-03-06 16:05 ` [PATCH 8/8] mm: use zone lock guard in __offline_isolated_pages() Dmitry Ilvokhin
2026-03-06 16:15 ` [PATCH 0/8] mm: introduce zone lock guards Steven Rostedt

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox