From: Pedro Falcato <pfalcato@suse.de>
To: Zhang Peng <zippermonkey@icloud.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Qi Zheng <zhengqi.arch@bytedance.com>,
Shakeel Butt <shakeel.butt@linux.dev>,
Axel Rasmussen <axelrasmussen@google.com>,
Yuanchu Xie <yuanchu@google.com>, Wei Xu <weixugc@google.com>,
Michal Hocko <mhocko@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Kairui Song <kasong@tencent.com>,
Zhang Peng <bruzzhang@tencent.com>
Subject: Re: [PATCH v2 5/5] mm/vmscan: flush TLB for every 31 folios evictions
Date: Thu, 26 Mar 2026 12:40:50 +0000
Message-ID: <tq5bx3gz2mzqyf3mylay22bcdo2sxt5urvpbblqej4v4sbk3q6@hwjqs4y3ghxf>
In-Reply-To: <20260326-batch-tlb-flush-v2-5-403e523325c4@icloud.com>
On Thu, Mar 26, 2026 at 04:36:21PM +0800, Zhang Peng wrote:
> From: Zhang Peng <bruzzhang@tencent.com>
>
> Currently we flush the TLB for every dirty folio, which is a
> bottleneck on systems with many cores, as it causes heavy IPI traffic.
>
> Instead, batch the folios and flush once for every 31 folios (one
> folio_batch). Folios are held in the folio_batch with their locks
> released; when the batch is full, the following steps are performed:
>
> - For each folio: lock it again and recheck that it is still evictable
>   (not under writeback, not on the LRU, not mapped, not DMA-pinned)
> - If it is no longer evictable, put it back on the LRU
> - Flush the TLB once for the whole batch
> - Page out the folios
>
> Note that we can't hold a frozen folio in the folio_batch for long, as
> that would cause filemap/swapcache lookups to livelock. Fortunately,
> pageout usually doesn't take long: sync IO is fast, and non-sync IO is
> issued with the folio marked as under writeback.
>
> Suggested-by: Kairui Song <kasong@tencent.com>
> Signed-off-by: Zhang Peng <bruzzhang@tencent.com>
> ---
> mm/vmscan.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
> 1 file changed, 62 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 63cc88c875e8..27de8034f582 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1217,6 +1217,47 @@ static void pageout_one(struct folio *folio, struct list_head *ret_folios,
>  			folio_test_unevictable(folio), folio);
>  }
>  
> +static void pageout_batch(struct folio_batch *fbatch,
> +			  struct list_head *ret_folios,
> +			  struct folio_batch *free_folios,
> +			  struct scan_control *sc, struct reclaim_stat *stat,
> +			  struct swap_iocb **plug, struct list_head *folio_list)
> +{
> +	int i, count = folio_batch_count(fbatch);
> +	struct folio *folio;
> +
> +	folio_batch_reinit(fbatch);
> +	for (i = 0; i < count; ++i) {
> +		folio = fbatch->folios[i];
> +		if (!folio_trylock(folio)) {
> +			list_add(&folio->lru, ret_folios);
> +			continue;
> +		}
> +
> +		if (folio_test_writeback(folio) || folio_test_lru(folio) ||
If PG_lru is set here, we're in a world of trouble, as we're actively
using folio->lru. I don't think it's possible for it to be set:
isolating the folios clears PG_lru, and the refcount bump means the
folio cannot be reused or reinserted back onto the LRU. So perhaps:

	VM_WARN_ON_FOLIO(folio_test_lru(folio), folio);
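i.e., something like this (rough, untested sketch):

	if (!folio_trylock(folio)) {
		list_add(&folio->lru, ret_folios);
		continue;
	}

	/*
	 * Isolation cleared PG_lru and we hold a reference, so this
	 * should never fire.
	 */
	VM_WARN_ON_FOLIO(folio_test_lru(folio), folio);

	if (folio_test_writeback(folio) || folio_mapped(folio) ||
	    folio_maybe_dma_pinned(folio)) {
		folio_unlock(folio);
		list_add(&folio->lru, ret_folios);
		continue;
	}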
> +		    folio_mapped(folio) || folio_maybe_dma_pinned(folio)) {
> +			folio_unlock(folio);
> +			list_add(&folio->lru, ret_folios);
> +			continue;
> +		}
> +
> +		folio_batch_add(fbatch, folio);
> +	}
> +
> +	i = 0;
> +	count = folio_batch_count(fbatch);
> +	if (!count)
> +		return;
> +	/* One TLB flush for the batch */
> +	try_to_unmap_flush_dirty();
> +	for (i = 0; i < count; ++i) {
> +		folio = fbatch->folios[i];
> +		pageout_one(folio, ret_folios, free_folios, sc, stat, plug,
> +			    folio_list);
Would be lovely if we could pass the batch down to the swap layer.
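Hypothetically, something like the below, so the swap layer could keep
a single plug going for the whole batch. To be clear, no such interface
exists today, just thinking out loud:

	/* hypothetical, not an existing interface */
	void swap_writeout_batch(struct folio_batch *fbatch,
				 struct swap_iocb **plug);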
> +	}
> +	folio_batch_reinit(fbatch);
The way you keep reinitializing fbatch is a bit confusing. Probably
worth a comment or two (or kernel-doc for pageout_batch() documenting
that the folio batch is reset, etc).
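Something along these lines, perhaps (wording is just a sketch):

	/**
	 * pageout_batch - write out a batch of dirty folios
	 * @fbatch: candidate folios; reinitialized on entry, and empty
	 *          again by the time this function returns
	 *
	 * Folios were unlocked while sitting in @fbatch, so each one is
	 * relocked and rechecked here; folios that are no longer
	 * evictable are put back on @ret_folios instead of being
	 * written out.
	 */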
> +}
> +
>  static bool folio_try_unmap(struct folio *folio, struct reclaim_stat *stat,
>  			    unsigned int nr_pages)
>  {
> @@ -1264,6 +1305,8 @@ static void shrink_folio_list(struct list_head *folio_list,
>  					  struct mem_cgroup *memcg)
>  {
>  	struct folio_batch free_folios;
> +	struct folio_batch flush_folios;
> +
>  	LIST_HEAD(ret_folios);
>  	LIST_HEAD(demote_folios);
>  	unsigned int nr_demoted = 0;
> @@ -1272,6 +1315,8 @@ static void shrink_folio_list(struct list_head *folio_list,
>  	struct swap_iocb *plug = NULL;
>  
>  	folio_batch_init(&free_folios);
> +	folio_batch_init(&flush_folios);
> +
>  	memset(stat, 0, sizeof(*stat));
>  	cond_resched();
>  	do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
> @@ -1565,15 +1610,21 @@ static void shrink_folio_list(struct list_head *folio_list,
>  				goto keep_locked;
>  			if (!sc->may_writepage)
>  				goto keep_locked;
> -
>  			/*
> -			 * Folio is dirty. Flush the TLB if a writable entry
> -			 * potentially exists to avoid CPU writes after I/O
> -			 * starts and then write it out here.
> +			 * For anon, we should only see the swap cache and
> +			 * this list pinning the folio. For a file folio,
> +			 * the filemap and this list pin it. The folio is
> +			 * unlocked while held in the batch, so
> +			 * pageout_batch() relocks each folio and rechecks
> +			 * its state. If the folio is under writeback, on
> +			 * the LRU, mapped, or DMA-pinned, it will not be
> +			 * written out and is put back on the LRU list.
>  			 */
> -			try_to_unmap_flush_dirty();
> -			pageout_one(folio, &ret_folios, &free_folios, sc, stat,
> -				    &plug, folio_list);
> +			folio_unlock(folio);
Why is the folio unlocked here? I don't see the need to make the
lock/unlock round trip twice. Is there something I'm missing?
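i.e., couldn't we keep the folio locked while it sits in the batch and
drop the trylock/recheck dance in pageout_batch()? A rough, untested
sketch, assuming it's acceptable to hold up to a batch worth of folio
locks at once:

	/* in shrink_folio_list(): stash the folio, keep it locked */
	folio_batch_add(&flush_folios, folio);

	/* in pageout_batch(): folios arrive locked and already checked */
	try_to_unmap_flush_dirty();
	for (i = 0; i < count; ++i) {
		folio = fbatch->folios[i];
		pageout_one(folio, ret_folios, free_folios, sc, stat,
			    plug, folio_list);
	}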
--
Pedro