* [PATCH v2 1/8] mm: use zone lock guard in reserve_highatomic_pageblock()
2026-03-27 16:14 [PATCH v2 0/8] mm: use spinlock guards for zone lock Dmitry Ilvokhin
@ 2026-03-27 16:14 ` Dmitry Ilvokhin
2026-03-27 16:14 ` [PATCH v2 2/8] mm: use zone lock guard in unset_migratetype_isolate() Dmitry Ilvokhin
` (7 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use the spinlock_irqsave zone lock guard in
reserve_highatomic_pageblock() to replace the explicit lock/unlock and
goto out_unlock pattern with automatic scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_alloc.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f11f38ba2e12..c7b9b82b5956 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3403,7 +3403,7 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
struct zone *zone)
{
int mt;
- unsigned long max_managed, flags;
+ unsigned long max_managed;
/*
* The number reserved as: minimum is 1 pageblock, maximum is
@@ -3417,29 +3417,26 @@ static void reserve_highatomic_pageblock(struct page *page, int order,
if (zone->nr_reserved_highatomic >= max_managed)
return;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
/* Recheck the nr_reserved_highatomic limit under the lock */
if (zone->nr_reserved_highatomic >= max_managed)
- goto out_unlock;
+ return;
/* Yoink! */
mt = get_pageblock_migratetype(page);
/* Only reserve normal pageblocks (i.e., they can merge with others) */
if (!migratetype_is_mergeable(mt))
- goto out_unlock;
+ return;
if (order < pageblock_order) {
if (move_freepages_block(zone, page, mt, MIGRATE_HIGHATOMIC) == -1)
- goto out_unlock;
+ return;
zone->nr_reserved_highatomic += pageblock_nr_pages;
} else {
change_pageblock_range(page, order, MIGRATE_HIGHATOMIC);
zone->nr_reserved_highatomic += 1 << order;
}
-
-out_unlock:
- spin_unlock_irqrestore(&zone->lock, flags);
}
/*
--
2.52.0
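For readers less familiar with the kernel's cleanup helpers: guard(spinlock_irqsave)(&zone->lock) declares a variable whose compiler-run cleanup drops the lock when it goes out of scope, so every early return unlocks automatically. A minimal user-space sketch of the same idea — hypothetical names, a pthread mutex instead of a spinlock, and not the actual <linux/cleanup.h> implementation:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Cleanup handler: runs when the annotated variable leaves scope. */
static void unlock_cleanup(pthread_mutex_t **l)
{
	pthread_mutex_unlock(*l);
}

/* Hypothetical stand-in for guard(spinlock_irqsave)(&zone->lock):
 * lock now, let __attribute__((cleanup)) unlock on any exit path. */
#define mutex_guard(l)							\
	pthread_mutex_t *_guard					\
		__attribute__((cleanup(unlock_cleanup))) = (l);	\
	pthread_mutex_lock(l)

static unsigned long nr_reserved;

/* Mirrors the reserve_highatomic_pageblock() control flow above:
 * the early return no longer needs a goto out_unlock label. */
static int reserve(unsigned long max_managed)
{
	mutex_guard(&lock);

	if (nr_reserved >= max_managed)
		return -1;	/* unlock happens here, automatically */

	nr_reserved++;
	return 0;		/* ...and here */
}
```

The kernel macro additionally saves and restores the IRQ flags that the removed 'flags' variable used to hold.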
* [PATCH v2 2/8] mm: use zone lock guard in unset_migratetype_isolate()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave zone lock guard in unset_migratetype_isolate() to
replace the explicit lock/unlock and goto pattern with automatic
scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_isolation.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index c48ff5c00244..9d606052dd80 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -223,15 +223,14 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
static void unset_migratetype_isolate(struct page *page)
{
struct zone *zone;
- unsigned long flags;
bool isolated_page = false;
unsigned int order;
struct page *buddy;
zone = page_zone(page);
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
if (!is_migrate_isolate_page(page))
- goto out;
+ return;
/*
* Because freepage with more than pageblock_order on isolated
@@ -279,8 +278,6 @@ static void unset_migratetype_isolate(struct page *page)
__putback_isolated_page(page, order, get_pageblock_migratetype(page));
}
zone->nr_isolate_pageblock--;
-out:
- spin_unlock_irqrestore(&zone->lock, flags);
}
static inline struct page *
--
2.52.0
* [PATCH v2 3/8] mm: use zone lock guard in unreserve_highatomic_pageblock()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave zone lock guard in unreserve_highatomic_pageblock()
to replace the explicit lock/unlock pattern with automatic scope-based
cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_alloc.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c7b9b82b5956..f06d8b5ffc88 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3452,7 +3452,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
bool force)
{
struct zonelist *zonelist = ac->zonelist;
- unsigned long flags;
struct zoneref *z;
struct zone *zone;
struct page *page;
@@ -3469,7 +3468,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
pageblock_nr_pages)
continue;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
for (order = 0; order < NR_PAGE_ORDERS; order++) {
struct free_area *area = &(zone->free_area[order]);
unsigned long size;
@@ -3516,12 +3515,9 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
* so this should not fail on zone boundaries.
*/
WARN_ON_ONCE(ret == -1);
- if (ret > 0) {
- spin_unlock_irqrestore(&zone->lock, flags);
+ if (ret > 0)
return ret;
- }
}
- spin_unlock_irqrestore(&zone->lock, flags);
}
return false;
--
2.52.0
* [PATCH v2 4/8] mm: use zone lock guard in set_migratetype_isolate()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave scoped lock guard in set_migratetype_isolate() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup. The scoped variant is used to keep dump_page() outside the
locked section to avoid a lockdep splat.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_isolation.c | 60 ++++++++++++++++++++-------------------------
1 file changed, 26 insertions(+), 34 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 9d606052dd80..7a9d631945a3 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -167,48 +167,40 @@ static int set_migratetype_isolate(struct page *page, enum pb_isolate_mode mode,
{
struct zone *zone = page_zone(page);
struct page *unmovable;
- unsigned long flags;
unsigned long check_unmovable_start, check_unmovable_end;
if (PageUnaccepted(page))
accept_page(page);
- spin_lock_irqsave(&zone->lock, flags);
-
- /*
- * We assume the caller intended to SET migrate type to isolate.
- * If it is already set, then someone else must have raced and
- * set it before us.
- */
- if (is_migrate_isolate_page(page)) {
- spin_unlock_irqrestore(&zone->lock, flags);
- return -EBUSY;
- }
-
- /*
- * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
- * We just check MOVABLE pages.
- *
- * Pass the intersection of [start_pfn, end_pfn) and the page's pageblock
- * to avoid redundant checks.
- */
- check_unmovable_start = max(page_to_pfn(page), start_pfn);
- check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
- end_pfn);
-
- unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
- mode);
- if (!unmovable) {
- if (!pageblock_isolate_and_move_free_pages(zone, page)) {
- spin_unlock_irqrestore(&zone->lock, flags);
+ scoped_guard(spinlock_irqsave, &zone->lock) {
+ /*
+ * We assume the caller intended to SET migrate type to
+ * isolate. If it is already set, then someone else must have
+ * raced and set it before us.
+ */
+ if (is_migrate_isolate_page(page))
return -EBUSY;
+
+ /*
+ * FIXME: Now, memory hotplug doesn't call shrink_slab() by
+ * itself. We just check MOVABLE pages.
+ *
+ * Pass the intersection of [start_pfn, end_pfn) and the page's
+ * pageblock to avoid redundant checks.
+ */
+ check_unmovable_start = max(page_to_pfn(page), start_pfn);
+ check_unmovable_end = min(pageblock_end_pfn(page_to_pfn(page)),
+ end_pfn);
+
+ unmovable = has_unmovable_pages(check_unmovable_start,
+ check_unmovable_end, mode);
+ if (!unmovable) {
+ if (!pageblock_isolate_and_move_free_pages(zone, page))
+ return -EBUSY;
+ zone->nr_isolate_pageblock++;
+ return 0;
}
- zone->nr_isolate_pageblock++;
- spin_unlock_irqrestore(&zone->lock, flags);
- return 0;
}
-
- spin_unlock_irqrestore(&zone->lock, flags);
if (mode == PB_ISOLATE_MODE_MEM_OFFLINE) {
/*
* printk() with zone->lock held will likely trigger a
--
2.52.0
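The difference between guard() and the scoped_guard() used in this patch is that the latter confines the locked region to an explicit block, so code after the block — here the dump_page() path — runs with the lock already dropped. A rough user-space analogue, with hypothetical names, a pthread mutex rather than a spinlock, and a simplified version of the kernel's for-loop based macro from <linux/cleanup.h>:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void unlock_cleanup(pthread_mutex_t **l)
{
	pthread_mutex_unlock(*l);
}

static pthread_mutex_t *lock_and_return(pthread_mutex_t *l)
{
	pthread_mutex_lock(l);
	return l;
}

/* Hypothetical scoped_guard() analogue: a for-loop that runs its body
 * exactly once with the lock held, then unlocks via the cleanup
 * attribute when the loop variable leaves scope. */
#define scoped_mutex_guard(l)						\
	for (pthread_mutex_t *_g					\
		__attribute__((cleanup(unlock_cleanup))) =		\
			lock_and_return(l), *_once = (l);		\
	     _once; _once = NULL)

static int already_isolated;

/* Mirrors set_migratetype_isolate(): the failure result is noted inside
 * the scoped block, but any expensive reporting runs after it, unlocked. */
static int set_isolate(void)
{
	int ret = 0;

	scoped_mutex_guard(&lock) {
		if (already_isolated)
			ret = -1;	/* -EBUSY in the real code */
		else
			already_isolated = 1;
	}
	/* lock dropped here; safe place for dump_page()-style diagnostics */
	return ret;
}
```

In the kernel version a return inside the scoped block also unlocks correctly, which is what lets the patch return -EBUSY directly from within the guarded scope.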
* [PATCH v2 5/8] mm: use zone lock guard in take_page_off_buddy()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave zone lock guard in take_page_off_buddy() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.
This also allows returning directly from the loop, removing the 'ret'
variable.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_alloc.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f06d8b5ffc88..a124e4eebda4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7488,11 +7488,9 @@ bool take_page_off_buddy(struct page *page)
{
struct zone *zone = page_zone(page);
unsigned long pfn = page_to_pfn(page);
- unsigned long flags;
unsigned int order;
- bool ret = false;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
for (order = 0; order < NR_PAGE_ORDERS; order++) {
struct page *page_head = page - (pfn & ((1 << order) - 1));
int page_order = buddy_order(page_head);
@@ -7507,14 +7505,12 @@ bool take_page_off_buddy(struct page *page)
break_down_buddy_pages(zone, page_head, page, 0,
page_order, migratetype);
SetPageHWPoisonTakenOff(page);
- ret = true;
- break;
+ return true;
}
if (page_count(page_head) > 0)
break;
}
- spin_unlock_irqrestore(&zone->lock, flags);
- return ret;
+ return false;
}
/*
--
2.52.0
* [PATCH v2 6/8] mm: use zone lock guard in put_page_back_buddy()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave zone lock guard in put_page_back_buddy() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_alloc.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a124e4eebda4..dd13aa197456 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7519,23 +7519,19 @@ bool take_page_off_buddy(struct page *page)
bool put_page_back_buddy(struct page *page)
{
struct zone *zone = page_zone(page);
- unsigned long flags;
- bool ret = false;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
if (put_page_testzero(page)) {
unsigned long pfn = page_to_pfn(page);
int migratetype = get_pfnblock_migratetype(page, pfn);
ClearPageHWPoisonTakenOff(page);
__free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
- if (TestClearPageHWPoison(page)) {
- ret = true;
- }
+ if (TestClearPageHWPoison(page))
+ return true;
}
- spin_unlock_irqrestore(&zone->lock, flags);
- return ret;
+ return false;
}
#endif
--
2.52.0
* [PATCH v2 7/8] mm: use zone lock guard in free_pcppages_bulk()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave zone lock guard in free_pcppages_bulk() to replace
the explicit lock/unlock pattern with automatic scope-based cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_alloc.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index dd13aa197456..b00707433898 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1451,7 +1451,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
struct per_cpu_pages *pcp,
int pindex)
{
- unsigned long flags;
unsigned int order;
struct page *page;
@@ -1464,7 +1463,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
/* Ensure requested pindex is drained first. */
pindex = pindex - 1;
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
while (count > 0) {
struct list_head *list;
@@ -1496,8 +1495,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
trace_mm_page_pcpu_drain(page, order, mt);
} while (count > 0 && !list_empty(list));
}
-
- spin_unlock_irqrestore(&zone->lock, flags);
}
/* Split a multi-block free page into its individual pageblocks. */
--
2.52.0
* [PATCH v2 8/8] mm: use zone lock guard in __offline_isolated_pages()
From: Dmitry Ilvokhin @ 2026-03-27 16:14 UTC (permalink / raw)
To: Andrew Morton, Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan
Cc: linux-mm, linux-kernel, kernel-team, Dmitry Ilvokhin,
Steven Rostedt
Use spinlock_irqsave zone lock guard in __offline_isolated_pages() to
replace the explicit lock/unlock pattern with automatic scope-based
cleanup.
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Dmitry Ilvokhin <d@ilvokhin.com>
---
mm/page_alloc.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b00707433898..6a679995b9df 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7375,7 +7375,7 @@ void zone_pcp_reset(struct zone *zone)
unsigned long __offline_isolated_pages(unsigned long start_pfn,
unsigned long end_pfn)
{
- unsigned long already_offline = 0, flags;
+ unsigned long already_offline = 0;
unsigned long pfn = start_pfn;
struct page *page;
struct zone *zone;
@@ -7383,7 +7383,7 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
offline_mem_sections(pfn, end_pfn);
zone = page_zone(pfn_to_page(pfn));
- spin_lock_irqsave(&zone->lock, flags);
+ guard(spinlock_irqsave)(&zone->lock);
while (pfn < end_pfn) {
page = pfn_to_page(pfn);
/*
@@ -7413,7 +7413,6 @@ unsigned long __offline_isolated_pages(unsigned long start_pfn,
del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
pfn += (1 << order);
}
- spin_unlock_irqrestore(&zone->lock, flags);
return end_pfn - start_pfn - already_offline;
}
--
2.52.0
* Re: [PATCH v2 0/8] mm: use spinlock guards for zone lock
From: Andrew Morton @ 2026-03-27 20:11 UTC (permalink / raw)
To: Dmitry Ilvokhin
Cc: Vlastimil Babka, Suren Baghdasaryan, Michal Hocko,
Brendan Jackman, Johannes Weiner, Zi Yan, linux-mm, linux-kernel,
kernel-team, Steven Rostedt
On Fri, 27 Mar 2026 16:14:40 +0000 Dmitry Ilvokhin <d@ilvokhin.com> wrote:
> This series uses spinlock guard for zone lock across several mm
> functions to replace explicit lock/unlock patterns with automatic
> scope-based cleanup.
>
> This simplifies the control flow by removing 'flags' variables, goto
> labels, and redundant unlock calls.
>
> Patches are ordered by decreasing value. The first six patches simplify
> the control flow by removing gotos, multiple unlock paths, or 'ret'
> variables. The last two are simpler lock/unlock pair conversions that
> only remove 'flags' and can be dropped if considered unnecessary churn.
Thanks, you've been busy.
I'm not wanting to move new, non-fix, non-speedup things into mm.git
until after -rc1 so there's your target. But now is a good time to be
sending out material for people to look at. Let's not have a gigantic
flood of new stuff the day after -rc1!
I think progress here is totally dependent on whether those who
regularly work on this code want guard() in there. A
preference/familiarity/style choice, mainly. At present the adoption
of guard() in mm/*.c is very small.