From: Peter Zijlstra <peterz@infradead.org>
To: Nhat Pham <nphamcs@gmail.com>
Cc: kasong@tencent.com, Liam.Howlett@oracle.com,
akpm@linux-foundation.org, apopple@nvidia.com,
axelrasmussen@google.com, baohua@kernel.org,
baolin.wang@linux.alibaba.com, bhe@redhat.com, byungchul@sk.com,
cgroups@vger.kernel.org, chengming.zhou@linux.dev,
chrisl@kernel.org, corbet@lwn.net, david@kernel.org,
dev.jain@arm.com, gourry@gourry.net, hannes@cmpxchg.org,
hughd@google.com, jannh@google.com, joshua.hahnjy@gmail.com,
lance.yang@linux.dev, lenb@kernel.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-pm@vger.kernel.org, lorenzo.stoakes@oracle.com,
matthew.brost@intel.com, mhocko@suse.com, muchun.song@linux.dev,
npache@redhat.com, pavel@kernel.org, peterx@redhat.com,
pfalcato@suse.de, rafael@kernel.org, rakie.kim@sk.com,
roman.gushchin@linux.dev, rppt@kernel.org, ryan.roberts@arm.com,
shakeel.butt@linux.dev, shikemeng@huaweicloud.com,
surenb@google.com, tglx@kernel.org, vbabka@suse.cz,
weixugc@google.com, ying.huang@linux.alibaba.com,
yosry.ahmed@linux.dev, yuanchu@google.com,
zhengqi.arch@bytedance.com, ziy@nvidia.com, kernel-team@meta.com,
riel@surriel.com
Subject: Re: [PATCH v4 09/21] mm: swap: allocate a virtual swap slot for each swapped out page
Date: Thu, 19 Mar 2026 08:56:21 +0100
Message-ID: <20260319075621.GR3738010@noisy.programming.kicks-ass.net>
In-Reply-To: <20260318222953.441758-10-nphamcs@gmail.com>
On Wed, Mar 18, 2026 at 03:29:40PM -0700, Nhat Pham wrote:
> diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
> index 62cd7b35a29c9..85cb45022e796 100644
> --- a/include/linux/cpuhotplug.h
> +++ b/include/linux/cpuhotplug.h
> @@ -86,6 +86,7 @@ enum cpuhp_state {
> CPUHP_FS_BUFF_DEAD,
> CPUHP_PRINTK_DEAD,
> CPUHP_MM_MEMCQ_DEAD,
> + CPUHP_MM_VSWAP_DEAD,
> CPUHP_PERCPU_CNT_DEAD,
> CPUHP_RADIX_DEAD,
> CPUHP_PAGE_ALLOC,
> +static int vswap_cpu_dead(unsigned int cpu)
> +{
> + struct vswap_cluster *cluster;
> + int order;
> +
> + rcu_read_lock();
nit:
guard(rcu)();
> + for (order = 0; order < SWAP_NR_ORDERS; order++) {
> + cluster = per_cpu(percpu_vswap_cluster.clusters[order], cpu);
> + if (cluster) {
> + per_cpu(percpu_vswap_cluster.clusters[order], cpu) = NULL;
> + spin_lock(&cluster->lock);
This breaks on PREEMPT_RT: this runs with IRQs disabled, and spinlock_t
is a sleeping lock there. This must be a raw_spinlock_t.
> + cluster->cached = false;
> + if (refcount_dec_and_test(&cluster->refcnt))
> + vswap_cluster_free(cluster);
And this too, see vswap_cluster_free() below.
> + spin_unlock(&cluster->lock);
> + }
> + }
> + rcu_read_unlock();
> +
> + return 0;
> +}
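
Concretely, folding both suggestions in, the callback could look
something like the below. Untested sketch: it assumes cluster->lock is
converted to raw_spinlock_t so it stays a spinning lock on PREEMPT_RT
(CPUHP_*_DEAD callbacks run with IRQs disabled), and it still calls
vswap_cluster_free(), which has its own problems, see below.

	static int vswap_cpu_dead(unsigned int cpu)
	{
		struct vswap_cluster *cluster;
		int order;

		/* guard(rcu)() from linux/cleanup.h replaces the manual
		 * rcu_read_lock()/rcu_read_unlock() pair and covers the
		 * early return. */
		guard(rcu)();
		for (order = 0; order < SWAP_NR_ORDERS; order++) {
			cluster = per_cpu(percpu_vswap_cluster.clusters[order], cpu);
			if (!cluster)
				continue;

			per_cpu(percpu_vswap_cluster.clusters[order], cpu) = NULL;
			raw_spin_lock(&cluster->lock);
			cluster->cached = false;
			if (refcount_dec_and_test(&cluster->refcnt))
				vswap_cluster_free(cluster);
			raw_spin_unlock(&cluster->lock);
		}
		return 0;
	}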
> +static void vswap_cluster_free(struct vswap_cluster *cluster)
> +{
> + VM_WARN_ON(cluster->count || cluster->cached);
> + VM_WARN_ON(!spin_is_locked(&cluster->lock));
This is terrible, please use:
lockdep_assert_held(&cluster->lock);
> + xa_lock(&vswap_cluster_map);
This is again broken; xa_lock() cannot be taken from a DEAD callback
with IRQs disabled, on PREEMPT_RT it is a sleeping lock.
> + list_del_init(&cluster->list);
> + __xa_erase(&vswap_cluster_map, cluster->id);
Strictly speaking this can end up in xas_alloc(), which is, again, not
allowed in a DEAD callback.
> + xa_unlock(&vswap_cluster_map);
> + rcu_head_init(&cluster->rcu);
> + kvfree_rcu(cluster, rcu);
> +}
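
One way out, as a rough sketch only, not a drop-in fix: keep the
lockdep assertion in the locked path and defer the xarray removal and
the free to process context via a work item, so nothing that can sleep
on PREEMPT_RT or allocate ever runs from the DEAD callback. The
free_work member and the workfn name are hypothetical and would need a
queue-once guarantee from the refcount.

	/* Hypothetical: free_work is a struct work_struct member of
	 * struct vswap_cluster, initialized at cluster allocation. */
	static void vswap_cluster_free_workfn(struct work_struct *work)
	{
		struct vswap_cluster *cluster =
			container_of(work, struct vswap_cluster, free_work);

		/* Process context: taking xa_lock and potentially hitting
		 * xas_alloc() is fine here. */
		xa_lock(&vswap_cluster_map);
		list_del_init(&cluster->list);
		__xa_erase(&vswap_cluster_map, cluster->id);
		xa_unlock(&vswap_cluster_map);

		kvfree_rcu(cluster, rcu);
	}

	static void vswap_cluster_free(struct vswap_cluster *cluster)
	{
		VM_WARN_ON(cluster->count || cluster->cached);
		lockdep_assert_held(&cluster->lock);

		/* Safe from IRQs-disabled context; the last refcount put
		 * guarantees this is queued at most once. */
		schedule_work(&cluster->free_work);
	}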