Date: Wed, 13 May 2026 10:21:13 -0700
Subject: Re: [PATCH 02/13] mm: extract ensure_on_mmlist() helper
To: "David Hildenbrand (Arm)", Andrew Morton, chrisl@kernel.org,
	kasong@tencent.com, ljs@kernel.org, ziy@nvidia.com
Cc: bhe@redhat.com, willy@infradead.org, youngjun.park@lge.com,
	hannes@cmpxchg.org, riel@surriel.com, shakeel.butt@linux.dev,
	alex@ghiti.fr, kas@kernel.org, baohua@kernel.org, dev.jain@arm.com,
	baolin.wang@linux.alibaba.com, npache@redhat.com,
	Liam.Howlett@oracle.com, ryan.roberts@arm.com, Vlastimil Babka,
	lance.yang@linux.dev, linux-kernel@vger.kernel.org,
	nphamcs@gmail.com, shikemeng@huaweicloud.com, kernel-team@meta.com
References: <20260427100553.2754667-1-usama.arif@linux.dev>
	<20260427100553.2754667-3-usama.arif@linux.dev>
	<993c88a1-1644-44e1-94c6-09bac9f02978@kernel.org>
From: Usama Arif
In-Reply-To: <993c88a1-1644-44e1-94c6-09bac9f02978@kernel.org>

On 13/05/2026 14:32, David Hildenbrand (Arm) wrote:
> On 4/27/26 12:01, Usama Arif wrote:
>> When a swap entry is installed in a page table, the mm must be added
>> to init_mm.mmlist so that swapoff can find and unuse its swap entries.
>> This double-checked locking pattern is currently open-coded in
>> try_to_unmap_one() and copy_nonpresent_pte().
>>
>> Move it into ensure_on_mmlist() in mm/internal.h and convert both
>> callers, so it can be reused by upcoming PMD-level swap entry code
>> paths that also need to register the mm with swapoff.
>>
>> copy_nonpresent_pte() previously inserted into &src_mm->mmlist rather
>> than &init_mm.mmlist, but the insertion point is irrelevant: mmlist
>> is a circular list and swapoff walks it entirely from init_mm.mmlist,
>> so only membership matters, not position.
>>
>> Signed-off-by: Usama Arif
>> ---
>>  mm/internal.h | 13 +++++++++++++
>>  mm/memory.c   |  9 +--------
>>  mm/rmap.c     |  7 +------
>>  3 files changed, 15 insertions(+), 14 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 5a2ddcf68e0b..7de489689f54 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1952,4 +1952,17 @@ static inline int get_sysctl_max_map_count(void)
>>  bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags,
>>  		   unsigned long npages);
>>
>> +/*
>> + * Ensure @mm is on the init_mm.mmlist so swapoff can find it.
>> + */
>> +static inline void ensure_on_mmlist(struct mm_struct *mm)
>> +{
>> +	if (list_empty(&mm->mmlist)) {
>> +		spin_lock(&mmlist_lock);
>> +		if (list_empty(&mm->mmlist))
>> +			list_add(&mm->mmlist, &init_mm.mmlist);
>> +		spin_unlock(&mmlist_lock);
>> +	}
>> +}
>
> Instead of talking about the low level detail ("add to mmlist"), maybe we
> could just talk about the high-level goal: make sure that the MM can hold
> swap entries.
>
> 	mm_prepare_for_swap()
>
> or sth like that?

Thanks for the review! Ah, so basically rename the function to
mm_prepare_for_swap(). I felt like that makes the function sound more
important than it is? But it is a better name than ensure_on_mmlist().
Maybe mm_prepare_for_swapoff()? As the mmlist is only used for swapoff.