public inbox for linux-mm@kvack.org
* [PATCH 1/1] mm/vmscan: prevent MGLRU reclaim from pinning address space
@ 2026-03-22  7:08 Suren Baghdasaryan
  2026-03-23 13:43 ` Lorenzo Stoakes (Oracle)
  2026-03-23 13:43 ` Lorenzo Stoakes (Oracle)
  0 siblings, 2 replies; 9+ messages in thread
From: Suren Baghdasaryan @ 2026-03-22  7:08 UTC (permalink / raw)
  To: akpm
  Cc: hannes, david, mhocko, zhengqi.arch, yuzhao, shakeel.butt, willy,
	Liam.Howlett, ljs, axelrasmussen, yuanchu, weixugc, linux-mm,
	linux-kernel, Suren Baghdasaryan

When shrinking a lruvec, MGLRU pins the address space before walking
it. This is excessive: all the walk needs is a stable mm_struct, so
that it can take and release mmap_read_lock, and a stable mm->mm_mt
tree to iterate over. Pinning the address space delays the release of
a dying process's memory. It also prevents the mm reapers (both the
in-kernel oom-reaper and userspace process_mrelease()) from doing
their job during an MGLRU scan, because they check
task_will_free_mem(), which yields a negative result due to the
elevated mm->mm_users count.

Replace the unnecessary address space pinning with mm_struct pinning
by replacing the mmget/mmput calls with mmgrab/mmdrop. mm_mt is
embedded in mm_struct itself, so it won't be freed as long as the
mm_struct is stable, and it can't change during the walk because
mmap_read_lock is held.

Fixes: bd74fdaea146 ("mm: multi-gen LRU: support page table walks")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/vmscan.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 33287ba4a500..68e8e90e38f5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2863,8 +2863,9 @@ static struct mm_struct *get_next_mm(struct lru_gen_mm_walk *walk)
 		return NULL;
 
 	clear_bit(key, &mm->lru_gen.bitmap);
+	mmgrab(mm);
 
-	return mmget_not_zero(mm) ? mm : NULL;
+	return mm;
 }
 
 void lru_gen_add_mm(struct mm_struct *mm)
@@ -3064,7 +3065,7 @@ static bool iterate_mm_list(struct lru_gen_mm_walk *walk, struct mm_struct **ite
 		reset_bloom_filter(mm_state, walk->seq + 1);
 
 	if (*iter)
-		mmput_async(*iter);
+		mmdrop(*iter);
 
 	*iter = mm;
 

base-commit: 8c65073d94c8b7cc3170de31af38edc9f5d96f0e
-- 
2.53.0.1018.g2bb0e51243-goog





Thread overview: 9+ messages
2026-03-22  7:08 [PATCH 1/1] mm/vmscan: prevent MGLRU reclaim from pinning address space Suren Baghdasaryan
2026-03-23 13:43 ` Lorenzo Stoakes (Oracle)
2026-03-23 16:19   ` Suren Baghdasaryan
2026-03-23 17:06     ` Lorenzo Stoakes (Oracle)
2026-03-23 17:24       ` Suren Baghdasaryan
2026-03-23 13:43 ` Lorenzo Stoakes (Oracle)
2026-03-23 16:26   ` Suren Baghdasaryan
2026-03-23 17:02     ` Lorenzo Stoakes (Oracle)
2026-03-23 17:43       ` Suren Baghdasaryan
