* Re: [PATCH v1 1/3] mm: process_mrelease: expedite clean file folio reclaim via mmu_gather
From: Alexander Gordeev @ 2026-05-05 14:53 UTC
To: Minchan Kim
Cc: akpm, hca, linux-s390, david, mhocko, brauner, linux-mm,
linux-kernel, surenb, timmurray, Minchan Kim
On Tue, Apr 21, 2026 at 04:02:37PM -0700, Minchan Kim wrote:
Hi Minchan,
> Currently, process_mrelease() unmaps pages but file-backed pages are
> not evicted and stay in the pagecache, relying on standard memory reclaim
> (kswapd or direct reclaim) to eventually free them. This delays the
> immediate recovery of system memory under Android's LMKD scenarios,
> leading to redundant background app kills.
>
> This patch implements an expedited eviction mechanism for clean pagecache
> folios in the mmu_gather code, similar to how swapcache folios are handled.
> It drops them from the pagecache (i.e., evicts them) if they are completely
> unmapped during reaping.
>
> Within this single unified loop, anonymous pages are released via
> free_swap_cache(), and file-backed folios are symmetrically released via
> free_file_cache().
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
> arch/s390/include/asm/tlb.h |  2 +-
> include/linux/swap.h        |  5 ++---
> mm/mmu_gather.c             |  7 ++++---
> mm/swap.c                   | 42 +++++++++++++++++++++++++++++++++++++
> mm/swap_state.c             | 26 -----------------------
> 5 files changed, 49 insertions(+), 33 deletions(-)
>
> diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
> index 619fd41e710e..2736dbb571a8 100644
> --- a/arch/s390/include/asm/tlb.h
> +++ b/arch/s390/include/asm/tlb.h
> @@ -62,7 +62,7 @@ static inline bool __tlb_remove_folio_pages(struct mmu_gather *tlb,
> VM_WARN_ON_ONCE(delay_rmap);
> VM_WARN_ON_ONCE(page_folio(page) != page_folio(page + nr_pages - 1));
>
> - free_pages_and_swap_cache(encoded_pages, ARRAY_SIZE(encoded_pages));
> + free_pages_and_caches(tlb->mm, encoded_pages, ARRAY_SIZE(encoded_pages));
> return false;
> }
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 62fc7499b408..bdb784966343 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -414,7 +414,9 @@ extern int sysctl_min_unmapped_ratio;
> extern int sysctl_min_slab_ratio;
> #endif
>
> +struct mm_struct;
> void check_move_unevictable_folios(struct folio_batch *fbatch);
> +void free_pages_and_caches(struct mm_struct *mm, struct encoded_page **pages, int nr);
>
> extern void __meminit kswapd_run(int nid);
> extern void __meminit kswapd_stop(int nid);
> @@ -433,7 +435,6 @@ static inline unsigned long total_swapcache_pages(void)
>
> void free_swap_cache(struct folio *folio);
> void free_folio_and_swap_cache(struct folio *folio);
> -void free_pages_and_swap_cache(struct encoded_page **, int);
> /* linux/mm/swapfile.c */
> extern atomic_long_t nr_swap_pages;
> extern long total_swap_pages;
> @@ -510,8 +511,6 @@ static inline void put_swap_device(struct swap_info_struct *si)
> do { (val)->freeswap = (val)->totalswap = 0; } while (0)
> #define free_folio_and_swap_cache(folio) \
> folio_put(folio)
> -#define free_pages_and_swap_cache(pages, nr) \
> - release_pages((pages), (nr));
>
> static inline void free_swap_cache(struct folio *folio)
> {
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index fe5b6a031717..3c6c315d3c48 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -100,7 +100,8 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
> */
> #define MAX_NR_FOLIOS_PER_FREE 512
>
> -static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
> +static void __tlb_batch_free_encoded_pages(struct mm_struct *mm,
> + struct mmu_gather_batch *batch)
> {
> struct encoded_page **pages = batch->encoded_pages;
> unsigned int nr, nr_pages;
> @@ -135,7 +136,7 @@ static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
> }
> }
>
> - free_pages_and_swap_cache(pages, nr);
> + free_pages_and_caches(mm, pages, nr);
> pages += nr;
> batch->nr -= nr;
>
> @@ -148,7 +149,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> struct mmu_gather_batch *batch;
>
> for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
> - __tlb_batch_free_encoded_pages(batch);
> + __tlb_batch_free_encoded_pages(tlb->mm, batch);
> tlb->active = &tlb->local;
> }
>
> diff --git a/mm/swap.c b/mm/swap.c
> index bb19ccbece46..e44bc8cefceb 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -1043,6 +1043,48 @@ void release_pages(release_pages_arg arg, int nr)
> }
> EXPORT_SYMBOL(release_pages);
>
> +static inline void free_file_cache(struct folio *folio)
> +{
> + if (folio_trylock(folio)) {
> + mapping_evict_folio(folio_mapping(folio), folio);
> + folio_unlock(folio);
> + }
> +}
> +
> +/*
> + * Passed an array of pages, drop them all from swapcache and then release
> + * them. They are removed from the LRU and freed if this is their last use.
> + *
> + * If MMF_UNSTABLE is set on @mm, this function will also proactively
> + * evict clean file-backed folios that are no longer mapped.
> + */
> +void free_pages_and_caches(struct mm_struct *mm, struct encoded_page **pages, int nr)
> +{
> + bool try_evict_file_folios = mm_flags_test(MMF_UNSTABLE, mm);
> + struct folio_batch folios;
> + unsigned int refs[PAGEVEC_SIZE];
> +
> + folio_batch_init(&folios);
> + for (int i = 0; i < nr; i++) {
> + struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
> +
> + if (folio_test_anon(folio))
> + free_swap_cache(folio);
> + else if (unlikely(try_evict_file_folios))
> + free_file_cache(folio);
This condition is absent in free_pages_and_swap_cache().
What would happen with non-anon and non-evict folio?
> +
> + refs[folios.nr] = 1;
> + if (unlikely(encoded_page_flags(pages[i]) &
> + ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> + refs[folios.nr] = encoded_nr_pages(pages[++i]);
> +
> + if (folio_batch_add(&folios, folio) == 0)
> + folios_put_refs(&folios, refs);
> + }
> + if (folios.nr)
> + folios_put_refs(&folios, refs);
> +}
> +
> /*
> * The folios which we're about to release may be in the deferred lru-addition
> * queues. That would prevent them from really being freed right now. That's
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 6d0eef7470be..7576bf36d920 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -400,32 +400,6 @@ void free_folio_and_swap_cache(struct folio *folio)
> folio_put(folio);
> }
>
> -/*
> - * Passed an array of pages, drop them all from swapcache and then release
> - * them. They are removed from the LRU and freed if this is their last use.
> - */
> -void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
> -{
> - struct folio_batch folios;
> - unsigned int refs[PAGEVEC_SIZE];
> -
> - folio_batch_init(&folios);
> - for (int i = 0; i < nr; i++) {
> - struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
> -
> - free_swap_cache(folio);
> - refs[folios.nr] = 1;
> - if (unlikely(encoded_page_flags(pages[i]) &
> - ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> - refs[folios.nr] = encoded_nr_pages(pages[++i]);
> -
> - if (folio_batch_add(&folios, folio) == 0)
> - folios_put_refs(&folios, refs);
> - }
> - if (folios.nr)
> - folios_put_refs(&folios, refs);
> -}
> -
> static inline bool swap_use_vma_readahead(void)
> {
> return READ_ONCE(enable_vma_readahead) && !atomic_read(&nr_rotate_swap);
> --
> 2.54.0.rc1.555.g9c883467ad-goog
>
>
* Re: [PATCH v1 1/3] mm: process_mrelease: expedite clean file folio reclaim via mmu_gather
From: Minchan Kim @ 2026-05-08 21:56 UTC
To: Alexander Gordeev
Cc: akpm, hca, linux-s390, david, mhocko, brauner, linux-mm,
linux-kernel, surenb, timmurray
Hi Alexander,
Sorry for the late reply.
On Tue, May 05, 2026 at 04:53:18PM +0200, Alexander Gordeev wrote:
> On Tue, Apr 21, 2026 at 04:02:37PM -0700, Minchan Kim wrote:
>
> Hi Minchan,
>
> > [...]
> > +void free_pages_and_caches(struct mm_struct *mm, struct encoded_page **pages, int nr)
> > +{
> > + bool try_evict_file_folios = mm_flags_test(MMF_UNSTABLE, mm);
> > + struct folio_batch folios;
> > + unsigned int refs[PAGEVEC_SIZE];
> > +
> > + folio_batch_init(&folios);
> > + for (int i = 0; i < nr; i++) {
> > + struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
> > +
> > + if (folio_test_anon(folio))
> > + free_swap_cache(folio);
> > + else if (unlikely(try_evict_file_folios))
> > + free_file_cache(folio);
>
> This condition is absent in free_pages_and_swap_cache().
> What would happen with non-anon and non-evict folio?
Are you asking about mlocked file pages?
During unmapping, munlock_vma_folio() inside __folio_remove_rmap() clears
the PG_mlocked flag and moves the folio back to the evictable LRU list.
By the time the folios reach free_pages_and_caches(), if the folio is
exclusive, it will be successfully evicted. However, if the folio is shared,
mapping_evict_folio() detects it via the refcount check and skips the
eviction.
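For illustration, the eviction boils down to roughly this check (a
simplified sketch of the mapping_evict_folio() logic, not the exact
upstream code; the real function also releases private data via
filemap_release_folio() before removing the folio):

	/*
	 * Sketch: a clean folio can only be dropped when nothing else
	 * holds it. Every mapped page and any attached private data
	 * pin an extra reference, so a shared folio fails the
	 * refcount test below and is left alone.
	 */
	static bool can_evict(struct address_space *mapping, struct folio *folio)
	{
		if (!mapping)	/* folio was already truncated */
			return false;
		if (folio_test_dirty(folio) || folio_test_writeback(folio))
			return false;
		return folio_ref_count(folio) <=
			folio_nr_pages(folio) + folio_has_private(folio) + 1;
	}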
However, I realized we miss shmem folios in the swap cache due to the new
folio_test_anon() check we introduced. I will update the check to something
like this:
	if (folio_test_swapcache(folio))
		free_swap_cache(folio);
	else if (unlikely(try_evict_file_folios))
		free_file_cache(folio);
Let me know if I missed something from your point.
Thank you.
* Re: [PATCH v1 1/3] mm: process_mrelease: expedite clean file folio reclaim via mmu_gather
From: Alexander Gordeev @ 2026-05-11 16:13 UTC
To: Minchan Kim
Cc: akpm, hca, linux-s390, david, mhocko, brauner, linux-mm,
linux-kernel, surenb, timmurray
On Fri, May 08, 2026 at 02:56:35PM -0700, Minchan Kim wrote:
Hi Minchan,
> > > +void free_pages_and_caches(struct mm_struct *mm, struct encoded_page **pages, int nr)
> > > +{
> > > + bool try_evict_file_folios = mm_flags_test(MMF_UNSTABLE, mm);
> > > + struct folio_batch folios;
> > > + unsigned int refs[PAGEVEC_SIZE];
> > > +
> > > + folio_batch_init(&folios);
> > > + for (int i = 0; i < nr; i++) {
> > > + struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
> > > +
> > > + if (folio_test_anon(folio))
> > > + free_swap_cache(folio);
> > > + else if (unlikely(try_evict_file_folios))
> > > + free_file_cache(folio);
> >
> > This condition is absent in free_pages_and_swap_cache().
> > What would happen with non-anon and non-evict folio?
>
> Are you asking about mlocked file pages?
>
> During unmapping, munlock_vma_folio() inside __folio_remove_rmap() clears
> the PG_mlocked flag and moves the folio back to the evictable LRU list.
>
> By the time the folios reach free_pages_and_caches(), if the folio is
> exclusive, it will be successfully evicted. However, if the folio is shared,
> mapping_evict_folio() detects it via the refcount check and skips the
> eviction.
>
> However, I realized we miss shmem folios in the swap cache due to the new
> folio_test_anon() check we introduced. I will update the check to something
> like this:
>
> if (folio_test_swapcache(folio))
> free_swap_cache(folio);
This condition looks redundant, since free_swap_cache() checks it too.
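From memory, free_swap_cache() is roughly the following (modulo the
exact checks in the current tree):

	void free_swap_cache(struct folio *folio)
	{
		if (folio_test_swapcache(folio) && !folio_mapped(folio) &&
		    folio_trylock(folio)) {
			folio_free_swap(folio);
			folio_unlock(folio);
		}
	}

so testing folio_test_swapcache() at the call site only saves a
function call on the fast path.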
> else if (unlikely(try_evict_file_folios))
> free_file_cache(folio);
>
> Let me know if I missed something from your point.
Thanks!
> Thank you.
* Re: [PATCH v1 1/3] mm: process_mrelease: expedite clean file folio reclaim via mmu_gather
From: Minchan Kim @ 2026-05-11 21:48 UTC
To: Alexander Gordeev
Cc: akpm, hca, linux-s390, david, mhocko, brauner, linux-mm,
linux-kernel, surenb, timmurray
On Mon, May 11, 2026 at 06:13:09PM +0200, Alexander Gordeev wrote:
> On Fri, May 08, 2026 at 02:56:35PM -0700, Minchan Kim wrote:
>
> Hi Minchan,
>
> > [...]
> > However, I realized we miss shmem folios in the swap cache due to the new
> > folio_test_anon() check we introduced. I will update the check to something
> > like this:
> >
> > if (folio_test_swapcache(folio))
> > free_swap_cache(folio);
>
> This condition looks redundant, since free_swap_cache() checks it too.
What I meant is that free_pages_and_swap_cache() calls free_swap_cache()
unconditionally for all those folios, but my change in
free_pages_and_caches() calls it only for anon folios, which will miss
the shmem cases.
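To spell out the planned v2 check (a sketch, with the reasoning as
comments):

	/*
	 * shmem folios are file-backed, so folio_test_anon() is false
	 * for them even while they sit in the swap cache. Keying on
	 * folio_test_swapcache() covers anon and shmem alike, while
	 * everything else falls through to the pagecache eviction.
	 */
	if (folio_test_swapcache(folio))
		free_swap_cache(folio);
	else if (unlikely(try_evict_file_folios))
		free_file_cache(folio);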