From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: kasong@tencent.com, linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>, Zi Yan <ziy@nvidia.com>,
Barry Song <baohua@kernel.org>, Hugh Dickins <hughd@google.com>,
Chris Li <chrisl@kernel.org>,
Kemeng Shi <shikemeng@huaweicloud.com>,
Nhat Pham <nphamcs@gmail.com>, Baoquan He <bhe@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Youngjun Park <youngjun.park@lge.com>,
Chengming Zhou <chengming.zhou@linux.dev>,
Roman Gushchin <roman.gushchin@linux.dev>,
Shakeel Butt <shakeel.butt@linux.dev>,
Muchun Song <muchun.song@linux.dev>,
Qi Zheng <zhengqi.arch@bytedance.com>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
Yosry Ahmed <yosry@kernel.org>, Lorenzo Stoakes <ljs@kernel.org>,
Dev Jain <dev.jain@arm.com>, Lance Yang <lance.yang@linux.dev>,
Michal Hocko <mhocko@suse.com>, Michal Hocko <mhocko@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Axel Rasmussen <axelrasmussen@google.com>
Subject: Re: [PATCH v3 04/12] mm, swap: add support for stable large allocation in swap cache directly
Date: Tue, 12 May 2026 17:48:46 +0800 [thread overview]
Message-ID: <19f31906-d8fb-489b-8e2a-c4414c99f338@linux.alibaba.com> (raw)
In-Reply-To: <20260421-swap-table-p4-v3-4-2f23759a76bc@tencent.com>
On 4/21/26 2:16 PM, Kairui Song via B4 Relay wrote:
> From: Kairui Song <kasong@tencent.com>
>
> To make it possible to allocate large folios directly in the swap cache,
> provide a new infrastructure helper to handle the swap cache status
> check, allocation, and order fallback in the swap cache layer.
>
> The new helper replaces the existing swap_cache_alloc_folio. Based on
> this, all the separate swap folio allocations previously done by anon /
> shmem are converted to use this helper directly, unifying folio
> allocation for anon, shmem, and readahead.
>
> This slightly consolidates how allocation is synchronized, making it
> more stable and less prone to errors. The slot-count and cache-conflict
> check is now always performed with the cluster lock held before
> allocation, and repeated under the same lock right before cache
> insertion. Compared to the previous anon and shmem mTHP allocation
> implementation, this double check produces a stable result and avoids
> the false-negative conflict checks that the lockless path can return,
> so large allocations no longer have to be unwound because the range
> turned out to be occupied. It also aborts early for already-freed
> slots, which helps ordinary swapin and especially readahead. The cost
> is only a marginal increase in cluster-lock contention, and the lock is
> very lightly contended and stays local in the first place. Hence,
> callers of swap_cache_alloc_folio() no longer need to check the swap
> slot count or swap cache status themselves.
>
> Now whoever first successfully allocates a folio in the swap cache is
> the one who charges it and performs the swap-in. The race window for
> swapin is also reduced since the loop is much more compact.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
> mm/swap.h | 3 +-
> mm/swap_state.c | 222 +++++++++++++++++++++++++++++++++++++++++---------------
> mm/zswap.c | 2 +-
> 3 files changed, 165 insertions(+), 62 deletions(-)
>
> diff --git a/mm/swap.h b/mm/swap.h
> index ad8b17a93758..6774af10a943 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -280,7 +280,8 @@ bool swap_cache_has_folio(swp_entry_t entry);
> struct folio *swap_cache_get_folio(swp_entry_t entry);
> void *swap_cache_get_shadow(swp_entry_t entry);
> void swap_cache_del_folio(struct folio *folio);
> -struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_flags,
> +struct folio *swap_cache_alloc_folio(swp_entry_t target_entry, gfp_t gfp_mask,
> + unsigned long orders, struct vm_fault *vmf,
> struct mempolicy *mpol, pgoff_t ilx);
> /* Below helpers require the caller to lock and pass in the swap cluster. */
> void __swap_cache_add_folio(struct swap_cluster_info *ci,
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 3da285a891b2..f5c77f348bbd 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -139,10 +139,10 @@ void *swap_cache_get_shadow(swp_entry_t entry)
>
> /**
> * __swap_cache_add_check - Check if a range is suitable for adding a folio.
> - * @ci: The locked swap cluster.
> - * @ci_off: Range start offset.
> - * @nr: Number of slots to check.
> - * @shadow: Returns the shadow value if one exists in the range.
> + * @ci: The locked swap cluster
> + * @targ_entry: The target swap entry to check, will be rounded down by @nr
> + * @nr: Number of slots to check, must be a power of 2
> + * @shadowp: Returns the shadow value if one exists in the range.
> *
> * Check if all slots covered by given range have a swap count >= 1.
> * Retrieves the shadow if there is one.
> @@ -150,22 +150,38 @@ void *swap_cache_get_shadow(swp_entry_t entry)
> * Context: Caller must lock the cluster.
> */
> static int __swap_cache_add_check(struct swap_cluster_info *ci,
> - unsigned int ci_off, unsigned int nr,
> - void **shadow)
> + swp_entry_t targ_entry,
> + unsigned long nr, void **shadowp)
> {
> - unsigned int ci_end = ci_off + nr;
> + unsigned int ci_off, ci_end;
> unsigned long old_tb;
>
> + /*
> + * If the target slot is not swapped out, return
> + * -EEXIST or -ENOENT. If the batch is not suitable, could be a
> + * race with concurrent free or cache add, return -EBUSY.
> + */
> if (unlikely(!ci->table))
> return -ENOENT;
> + ci_off = swp_cluster_offset(targ_entry);
> + old_tb = __swap_table_get(ci, ci_off);
> + if (swp_tb_is_folio(old_tb))
> + return -EEXIST;
> + if (!__swp_tb_get_count(old_tb))
> + return -ENOENT;
> + if (swp_tb_is_shadow(old_tb) && shadowp)
> + *shadowp = swp_tb_to_shadow(old_tb);
> +
> + if (nr == 1)
> + return 0;
> +
> + ci_off = round_down(ci_off, nr);
> + ci_end = ci_off + nr;
> do {
> old_tb = __swap_table_get(ci, ci_off);
> - if (unlikely(swp_tb_is_folio(old_tb)))
> - return -EEXIST;
> - if (unlikely(!__swp_tb_get_count(old_tb)))
> - return -ENOENT;
> - if (swp_tb_is_shadow(old_tb))
> - *shadow = swp_tb_to_shadow(old_tb);
> + if (unlikely(swp_tb_is_folio(old_tb) ||
> + !__swp_tb_get_count(old_tb)))
> + return -EBUSY;
> } while (++ci_off < ci_end);
>
> return 0;
> @@ -244,7 +260,7 @@ static int swap_cache_add_folio(struct folio *folio, swp_entry_t entry,
> si = __swap_entry_to_info(entry);
> ci = swap_cluster_lock(si, swp_offset(entry));
> ci_off = swp_cluster_offset(entry);
> - err = __swap_cache_add_check(ci, ci_off, nr_pages, &shadow);
> + err = __swap_cache_add_check(ci, entry, nr_pages, &shadow);
> if (err) {
> swap_cluster_unlock(ci);
> return err;
> @@ -399,6 +415,137 @@ void __swap_cache_replace_folio(struct swap_cluster_info *ci,
> }
> }
>
> +/*
> + * Try to allocate a folio of given order in the swap cache.
> + *
> + * This helper resolves the potential races of swap allocation
> + * and prepares a folio to be used for swap IO. May return following
> + * value:
> + *
> + * -ENOMEM / -EBUSY: Order is too large or in conflict with sub slot,
> + * caller should shrink the order and retry
> + * -ENOENT / -EEXIST: Target swap entry is unavailable or cached, the caller
> + * should abort or try to use the cached folio instead
> + */
> +static struct folio *__swap_cache_alloc(struct swap_cluster_info *ci,
> + swp_entry_t targ_entry, gfp_t gfp,
> + unsigned int order, struct vm_fault *vmf,
> + struct mempolicy *mpol, pgoff_t ilx)
> +{
> + int err;
> + swp_entry_t entry;
> + struct folio *folio;
> + void *shadow = NULL;
> + unsigned long address, nr_pages = 1 << order;
> + struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
> +
> + entry.val = round_down(targ_entry.val, nr_pages);
> +
> + /* Check if the slot and range are available, skip allocation if not */
> + spin_lock(&ci->lock);
> + err = __swap_cache_add_check(ci, targ_entry, nr_pages, NULL);
> + spin_unlock(&ci->lock);
> + if (unlikely(err))
> + return ERR_PTR(err);
> +
> + /*
> + * Limit THP gfp. The limitation is a no-op for typical
> + * GFP_HIGHUSER_MOVABLE but matters for shmem.
> + */
> + if (order)
> + gfp = thp_limit_gfp_mask(vma_thp_gfp_mask(vma), gfp);
> +
> + if (mpol || !vmf) {
> + folio = folio_alloc_mpol(gfp, order, mpol, ilx, numa_node_id());
> + } else {
> + address = round_down(vmf->address, PAGE_SIZE << order);
> + folio = vma_alloc_folio(gfp, order, vmf->vma, address);
> + }
> + if (unlikely(!folio))
> + return ERR_PTR(-ENOMEM);
> +
> + /* Double check the range is still not in conflict */
> + spin_lock(&ci->lock);
> + err = __swap_cache_add_check(ci, targ_entry, nr_pages, &shadow);
> + if (unlikely(err)) {
> + spin_unlock(&ci->lock);
> + folio_put(folio);
> + return ERR_PTR(err);
> + }
> +
> + __folio_set_locked(folio);
> + __folio_set_swapbacked(folio);
> + __swap_cache_do_add_folio(ci, folio, entry);
> + spin_unlock(&ci->lock);
> +
> + if (mem_cgroup_swapin_charge_folio(folio, vmf ? vmf->vma->vm_mm : NULL,
> + gfp, entry)) {
> + spin_lock(&ci->lock);
> + __swap_cache_do_del_folio(ci, folio, entry, shadow);
> + spin_unlock(&ci->lock);
> + folio_unlock(folio);
> + /* nr_pages refs from swap cache, 1 from allocation */
> + folio_put_refs(folio, nr_pages + 1);
> + count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
> + return ERR_PTR(-ENOMEM);
> + }
> +
> + /* For memsw accounting, swap is uncharged when folio is added to swap cache */
> + memcg1_swapin(entry, 1 << order);
> + if (shadow)
> + workingset_refault(folio, shadow);
> +
> + node_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
> + lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr_pages);
> +
> + /* Caller will initiate read into locked new_folio */
> + folio_add_lru(folio);
> + return folio;
> +}
> +
> +/**
> + * swap_cache_alloc_folio - Allocate folio for swapped out slot in swap cache.
> + * @targ_entry: swap entry indicating the target slot
> + * @gfp: memory allocation flags
> + * @orders: allocation orders
> + * @vmf: fault information
> + * @mpol: NUMA memory allocation policy to be applied
> + * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
> + *
> + * Allocate a folio in the swap cache for one swap slot, typically before
> + * doing IO (e.g. swap in or zswap writeback). The swap slot indicated by
> + * @targ_entry must have a non-zero swap count (swapped out).
> + *
> + * Context: Caller must protect the swap device with reference count or locks.
> + * Return: Returns the folio if allocation succeeded and folio is added to
> + * swap cache. Returns error code if allocation failed due to race.
> + */
> +struct folio *swap_cache_alloc_folio(swp_entry_t targ_entry, gfp_t gfp,
> + unsigned long orders, struct vm_fault *vmf,
> + struct mempolicy *mpol, pgoff_t ilx)
> +{
> + int order, err;
> + struct folio *ret;
> + struct swap_cluster_info *ci;
> +
> + /* Always allow order 0 so swap won't fail under pressure. */
> + order = orders ? highest_order(orders |= BIT(0)) : 0;
This seems a bit odd here. In THP/mTHP operations, it's usually the
callers' responsibility to determine the allowable orders. So I think we
should not implicitly set order 0 here. Instead, we should let callers
explicitly set it. What do you think?
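
For example, something like the following (untested) change on the
shmem side is roughly what I have in mind:
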
diff --git a/mm/shmem.c b/mm/shmem.c
index f0da10054620..fb05daeab59a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2023,7 +2023,8 @@ static struct folio *shmem_swap_alloc_folio(struct inode *inode,
 	pgoff_t ilx;
 	struct folio *folio;
 	struct mempolicy *mpol;
-	unsigned long orders = BIT(order);
+	/* Always allow order 0 so swap won't fail under pressure. */
+	unsigned long orders = BIT(order) | BIT(0);
 	struct shmem_inode_info *info = SHMEM_I(inode);
 
 	if ((vmf && unlikely(userfaultfd_armed(vmf->vma))) ||
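
The anon swapin path could then do the same when it builds its order
mask, so the order-0 fallback stays an explicit caller decision rather
than something hidden inside the helper. Roughly what I mean (just a
sketch; allowed_swapin_orders() is a made-up placeholder here, and the
real code would derive the mask from the VMA / THP policy):

	unsigned long orders;
	struct folio *folio;

	/* Hypothetical helper: whatever mTHP orders this caller allows. */
	orders = allowed_swapin_orders(vmf);
	/* Caller explicitly allows order 0 so swapin won't fail under pressure. */
	orders |= BIT(0);
	folio = swap_cache_alloc_folio(entry, GFP_HIGHUSER_MOVABLE, orders,
				       vmf, NULL, 0);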