From: Usama Arif
To: Andrew Morton, david@kernel.org, chrisl@kernel.org, kasong@tencent.com, ljs@kernel.org, ziy@nvidia.com
Cc: bhe@redhat.com, willy@infradead.org, youngjun.park@lge.com, hannes@cmpxchg.org, riel@surriel.com, shakeel.butt@linux.dev, alex@ghiti.fr, kas@kernel.org, baohua@kernel.org, dev.jain@arm.com, baolin.wang@linux.alibaba.com, npache@redhat.com, Liam.Howlett@oracle.com, ryan.roberts@arm.com, Vlastimil Babka, lance.yang@linux.dev, linux-kernel@vger.kernel.org, nphamcs@gmail.com, shikemeng@huaweicloud.com, kernel-team@meta.com, Usama Arif
Subject: [PATCH 02/13] mm: extract ensure_on_mmlist() helper
Date: Mon, 27 Apr 2026 03:01:51 -0700
Message-ID: <20260427100553.2754667-3-usama.arif@linux.dev>
In-Reply-To: <20260427100553.2754667-1-usama.arif@linux.dev>
References: <20260427100553.2754667-1-usama.arif@linux.dev>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a swap entry is installed in a page table, the mm must be added to
init_mm.mmlist so that swapoff can find and unuse its swap entries. The
double-checked locking pattern that performs this insertion is currently
open-coded in try_to_unmap_one() and copy_nonpresent_pte(). Move it into
an ensure_on_mmlist() helper in mm/internal.h and convert both callers,
so that it can be reused by upcoming PMD-level swap entry code paths
that also need to register the mm with swapoff.
copy_nonpresent_pte() previously inserted into &src_mm->mmlist rather
than &init_mm.mmlist, but the insertion point is irrelevant: mmlist is a
circular list and swapoff walks all of it starting from init_mm.mmlist,
so only membership matters, not position.

Signed-off-by: Usama Arif
---
 mm/internal.h | 13 +++++++++++++
 mm/memory.c   |  9 +--------
 mm/rmap.c     |  7 +------
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 5a2ddcf68e0b..7de489689f54 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1952,4 +1952,17 @@ static inline int get_sysctl_max_map_count(void)
 bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags,
 		   unsigned long npages);
 
+/*
+ * Ensure @mm is on the init_mm.mmlist so swapoff can find it.
+ */
+static inline void ensure_on_mmlist(struct mm_struct *mm)
+{
+	if (list_empty(&mm->mmlist)) {
+		spin_lock(&mmlist_lock);
+		if (list_empty(&mm->mmlist))
+			list_add(&mm->mmlist, &init_mm.mmlist);
+		spin_unlock(&mmlist_lock);
+	}
+}
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/memory.c b/mm/memory.c
index ea6568571131..33d7cc274e23 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -937,14 +937,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		if (swap_dup_entry_direct(entry) < 0)
 			return -EIO;
 
-		/* make sure dst_mm is on swapoff's mmlist. */
-		if (unlikely(list_empty(&dst_mm->mmlist))) {
-			spin_lock(&mmlist_lock);
-			if (list_empty(&dst_mm->mmlist))
-				list_add(&dst_mm->mmlist,
-					 &src_mm->mmlist);
-			spin_unlock(&mmlist_lock);
-		}
+		ensure_on_mmlist(dst_mm);
 		/* Mark the swap entry as shared. */
 		if (pte_swp_exclusive(orig_pte)) {
 			pte = pte_swp_clear_exclusive(orig_pte);
diff --git a/mm/rmap.c b/mm/rmap.c
index 78b7fb5f367c..057e18cb80b0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2302,12 +2302,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			set_pte_at(mm, address, pvmw.pte, pteval);
 			goto walk_abort;
 		}
-		if (list_empty(&mm->mmlist)) {
-			spin_lock(&mmlist_lock);
-			if (list_empty(&mm->mmlist))
-				list_add(&mm->mmlist, &init_mm.mmlist);
-			spin_unlock(&mmlist_lock);
-		}
+		ensure_on_mmlist(mm);
 		dec_mm_counter(mm, MM_ANONPAGES);
 		inc_mm_counter(mm, MM_SWAPENTS);
 		swp_pte = swp_entry_to_pte(entry);
-- 
2.52.0