public inbox for linux-mm@kvack.org
From: Usama Arif <usama.arif@linux.dev>
To: Zhang Peng via B4 Relay <devnull+zippermonkey.icloud.com@kernel.org>
Cc: Usama Arif <usama.arif@linux.dev>,
	Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@kernel.org>,
	Lorenzo Stoakes <ljs@kernel.org>,
	"Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Vlastimil Babka <vbabka@kernel.org>,
	Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Qi Zheng <zhengqi.arch@bytedance.com>,
	Shakeel Butt <shakeel.butt@linux.dev>,
	Axel Rasmussen <axelrasmussen@google.com>,
	Yuanchu Xie <yuanchu@google.com>, Wei Xu <weixugc@google.com>,
	Michal Hocko <mhocko@kernel.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Kairui Song <kasong@tencent.com>,
	Zhang Peng <bruzzhang@tencent.com>
Subject: Re: [PATCH 2/2] mm, vmscan: flush TLB for every 31 folios evictions
Date: Mon,  9 Mar 2026 05:29:38 -0700	[thread overview]
Message-ID: <20260309122939.723610-1-usama.arif@linux.dev> (raw)
In-Reply-To: <20260309-batch-tlb-flush-v1-2-eb8fed7d1a9e@icloud.com>

On Mon, 09 Mar 2026 16:17:42 +0800 Zhang Peng via B4 Relay <devnull+zippermonkey.icloud.com@kernel.org> wrote:

> From: bruzzhang <bruzzhang@tencent.com>
> 
> Currently we flush TLB for every dirty folio, which is a bottleneck for
> systems with many cores as this causes heavy IPI usage.
> 
> So instead, batch the folios, and flush once for every 31 folios (one
> folio_batch). These folios will be held in a folio_batch releasing their
> lock, then when folio_batch is full, do following steps:
> 
> - For each folio: lock - check still evictable - unlock
>   - If no longer evictable, return the folio to the caller.
> - Flush TLB once for the batch
> - Pageout the folios (refcount freeze happens in the pageout path)
> 
> Note we can't hold a frozen folio in folio_batch for long as it will
> cause filemap/swapcache lookup to livelock. Fortunately pageout usually
> won't take too long; sync IO is fast, and non-sync IO will be issued
> with the folio marked writeback.
> 
> Suggested-by: Kairui Song <kasong@tencent.com>
> Signed-off-by: bruzzhang <bruzzhang@tencent.com>
> ---
>  mm/vmscan.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 61 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a336f7fc7dae..69cdd3252ff8 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1240,6 +1240,48 @@ static void pageout_one(struct folio *folio, struct list_head *ret_folios,
>  	VM_BUG_ON_FOLIO(folio_test_lru(folio) ||
>  			folio_test_unevictable(folio), folio);
>  }
> +
> +static void pageout_batch(struct folio_batch *fbatch,
> +			  struct list_head *ret_folios,
> +			  struct folio_batch *free_folios,
> +			  struct scan_control *sc, struct reclaim_stat *stat,
> +			  struct swap_iocb **plug, struct list_head *folio_list)
> +{
> +	int i = 0, count = folio_batch_count(fbatch);
> +	struct folio *folio;
> +
> +	folio_batch_reinit(fbatch);
> +	do {
> +		folio = fbatch->folios[i];
> +		if (!folio_trylock(folio)) {
> +			list_add(&folio->lru, ret_folios);
> +			continue;
> +		}
> +
> +		if (folio_test_writeback(folio) || folio_test_lru(folio) ||
> +		    folio_mapped(folio))
> +			goto next;
> +		folio_batch_add(fbatch, folio);
> +		continue;
> +next:
> +		folio_unlock(folio);
> +		list_add(&folio->lru, ret_folios);
> +	} while (++i != count);

Hello!

Instead of do {} while (++i != count), a standard for loop would be better
for readability.

> +
> +	i = 0;
> +	count = folio_batch_count(fbatch);
> +	if (!count)
> +		return;
> +	/* One TLB flush for the batch */
> +	try_to_unmap_flush_dirty();
> +	do {
> +		folio = fbatch->folios[i];
> +		pageout_one(folio, ret_folios, free_folios, sc, stat, plug,
> +			    folio_list);
> +	} while (++i != count);
> +	folio_batch_reinit(fbatch);
> +}
> +
>  /*
>   * Reclaimed folios are counted in stat->nr_reclaimed.
>   */
> @@ -1249,6 +1291,8 @@ static void shrink_folio_list(struct list_head *folio_list,
>  		struct mem_cgroup *memcg)
>  {
>  	struct folio_batch free_folios;
> +	struct folio_batch flush_folios;
> +
>  	LIST_HEAD(ret_folios);
>  	LIST_HEAD(demote_folios);
>  	unsigned int nr_demoted = 0;
> @@ -1257,6 +1301,8 @@ static void shrink_folio_list(struct list_head *folio_list,
>  	struct swap_iocb *plug = NULL;
>  
>  	folio_batch_init(&free_folios);
> +	folio_batch_init(&flush_folios);
> +
>  	memset(stat, 0, sizeof(*stat));
>  	cond_resched();
>  	do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
> @@ -1578,15 +1624,19 @@ static void shrink_folio_list(struct list_head *folio_list,
>  				goto keep_locked;
>  			if (!sc->may_writepage)
>  				goto keep_locked;
> -
>  			/*
> -			 * Folio is dirty. Flush the TLB if a writable entry
> -			 * potentially exists to avoid CPU writes after I/O
> -			 * starts and then write it out here.
> +			 * For anon, we should only see swap cache (anon) and
> +			 * the list pinning the page. For file page, the filemap
> +			 * and the list pins it. Combined with the page_ref_freeze
> +			 * in pageout_batch ensure nothing else touches the page
> +			 * during lock unlocked.
>  			 */

page_ref_freeze happens inside pageout_one() -> pageout() -> __remove_mapping(),
which runs after the folio is re-locked and after the TLB flush.  During
the unlocked window, the refcount is not frozen. Right?

With this patch, the folio is unlocked before try_to_unmap_flush_dirty() runs
in pageout_batch(). During that window, stale writable TLB entries on other
CPUs could still allow writes to the folio after it has been selected for
pageout. My understanding is that the original code intentionally flushed the
TLB while the folio was locked precisely to prevent this. Could data
corruption result if a write through a stale TLB entry races with the
pageout I/O?


> -			try_to_unmap_flush_dirty();
> -			pageout_one(folio, &ret_folios, &free_folios, sc, stat,
> -				&plug, folio_list);
> +			folio_unlock(folio);
> +			if (!folio_batch_add(&flush_folios, folio))
> +				pageout_batch(&flush_folios,
> +							&ret_folios, &free_folios,
> +							sc, stat, &plug,
> +							folio_list);
>  			goto next;
>  		}
>  
> @@ -1614,6 +1664,10 @@ static void shrink_folio_list(struct list_head *folio_list,
>  next:
>  		continue;
>  	}
> +	if (folio_batch_count(&flush_folios)) {
> +		pageout_batch(&flush_folios, &ret_folios, &free_folios, sc,
> +			      stat, &plug, folio_list);
> +	}
>  	/* 'folio_list' is always empty here */
>  
>  	/* Migrate folios selected for demotion */
> 
> -- 
> 2.43.7
> 
> 
> 



Thread overview: 6+ messages
2026-03-09  8:17 [PATCH 0/2] mm: batch TLB flushing for dirty folios in vmscan Zhang Peng via B4 Relay
2026-03-09  8:17 ` [PATCH 1/2] mm/vmscan: refactor shrink_folio_list for readability and maintainability Zhang Peng via B4 Relay
2026-03-09  8:17 ` [PATCH 2/2] mm, vmscan: flush TLB for every 31 folios evictions Zhang Peng via B4 Relay
2026-03-09 12:29   ` Usama Arif [this message]
2026-03-09 13:19     ` Kairui Song
2026-03-09 14:56     ` Zhang Peng
