From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <72d09e59-47f9-4faf-8cd9-0d168895a4c0@arm.com>
Date: Tue, 12 May 2026 11:37:23 +0530
From: Dev Jain
Subject: Re: [PATCH v3 6/9] mm/swapfile: Add batched version of folio_dup_swap
To: "David Hildenbrand (Arm)", akpm@linux-foundation.org, ljs@kernel.org,
 hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: riel@surriel.com, liam@infradead.org, vbabka@kernel.org, harry@kernel.org,
 jannh@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org,
 axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
 rppt@kernel.org, surenb@google.com, mhocko@suse.com,
 baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com,
 bhe@redhat.com, youngjun.park@lge.com, pfalcato@suse.de,
 ryan.roberts@arm.com, anshuman.khandual@arm.com
References: <20260506094504.2588857-1-dev.jain@arm.com>
 <20260506094504.2588857-7-dev.jain@arm.com>
 <871d9a91-2d14-4dbb-934d-b52c12dae7cd@kernel.org>
In-Reply-To: <871d9a91-2d14-4dbb-934d-b52c12dae7cd@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11/05/26 1:15 pm, David Hildenbrand (Arm) wrote:
> On 5/6/26 11:45, Dev Jain wrote:
>> Add folio_dup_swap_pages to handle a batch of consecutive pages. Note
>> that folio_dup_swap can already handle two special cases of this:
>> nr_pages == 1 and nr_pages == folio_nr_pages(folio). Generalize this
>> to any nr_pages.
>>
>> Currently we have the not-so-nice logic of passing subpage == NULL when
>> we mean to exercise the logic on the entire folio, and subpage != NULL
>> when we want to exercise the logic only on that subpage. Remove this
>> indirection: the caller invokes folio_dup_swap_pages() if it wants to
>> operate on a range of pages in the folio (i.e. nr_pages may be anything
>> between 1 and folio_nr_pages()), and invokes folio_dup_swap() if it
>> wants to operate on the entire folio.
>>
>> Signed-off-by: Dev Jain
>> ---
>>  mm/rmap.c     |  2 +-
>>  mm/shmem.c    |  2 +-
>>  mm/swap.h     | 12 ++++++++++--
>>  mm/swapfile.c | 20 ++++++++++++--------
>>  4 files changed, 24 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 25813e3605991..352ba77d90f67 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -2314,7 +2314,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>  			goto finish_unmap;
>>  		}
>>
>> -		if (folio_dup_swap(folio, subpage) < 0) {
>> +		if (folio_dup_swap_pages(folio, subpage, 1) < 0) {
>
> Can you throw in a patch to rename subpage -> page first?

Sure.

>
>>  			set_pte_at(mm, address, pvmw.pte, pteval);
>>  			goto walk_abort;
>>  		}
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index bab3529af23c5..5e4f521399847 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1698,7 +1698,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
>>  		spin_unlock(&shmem_swaplist_lock);
>>  	}
>>
>> -	folio_dup_swap(folio, NULL);
>> +	folio_dup_swap(folio);
>>  	shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
>>
>>  	BUG_ON(folio_mapped(folio));
>> diff --git a/mm/swap.h b/mm/swap.h
>> index a77016f2423b9..3c25f914e908b 100644
>> --- a/mm/swap.h
>> +++ b/mm/swap.h
>> @@ -206,7 +206,9 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
>>   * folio_put_swap(): does the opposite thing of folio_dup_swap().
>>   */
>>  int folio_alloc_swap(struct folio *folio);
>> -int folio_dup_swap(struct folio *folio, struct page *subpage);
>> +int folio_dup_swap(struct folio *folio);
>> +int folio_dup_swap_pages(struct folio *folio, struct page *page,
>> +		unsigned long nr_pages);
>>  void folio_put_swap(struct folio *folio, struct page *subpage);
>>
>>  /* For internal use */
>> @@ -390,7 +392,13 @@ static inline int folio_alloc_swap(struct folio *folio)
>>  	return -EINVAL;
>>  }
>>
>> -static inline int folio_dup_swap(struct folio *folio, struct page *page)
>> +static inline int folio_dup_swap(struct folio *folio)
>> +{
>> +	return -EINVAL;
>> +}
>> +
>
> Much cleaner.
>
> Can we have a common folio_dup_swap() function instead that does what
> the variant below does? (call folio_dup_swap_pages())

I am guessing you are saying to simply remove the folio_dup_swap stub,
since we have a stub for folio_dup_swap_pages anyway; I'll do that.
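Roughly like the below in mm/swap.h (untested sketch; the exact placement
relative to the existing CONFIG_SWAP guards is my assumption, I'll confirm
when respinning):

#ifdef CONFIG_SWAP
int folio_dup_swap_pages(struct folio *folio, struct page *page,
		unsigned long nr_pages);
#else
/* Only folio_dup_swap_pages() keeps a !CONFIG_SWAP stub. */
static inline int folio_dup_swap_pages(struct folio *folio, struct page *page,
		unsigned long nr_pages)
{
	return -EINVAL;
}
#endif

/* Common wrapper covering the whole folio; shared by both configs. */
static inline int folio_dup_swap(struct folio *folio)
{
	return folio_dup_swap_pages(folio, folio_page(folio, 0),
				    folio_nr_pages(folio));
}

That way the wrapper is defined once and the out-of-line definition in
mm/swapfile.c goes away.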
>
>> +static inline int folio_dup_swap_pages(struct folio *folio, struct page *page,
>> +		unsigned long nr_pages)
>>  {
>>  	return -EINVAL;
>>  }
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index c7e173b93e11d..28daf92839e77 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -1740,9 +1740,10 @@ int folio_alloc_swap(struct folio *folio)
>>  }
>>
>>  /**
>> - * folio_dup_swap() - Increase swap count of swap entries of a folio.
>> + * folio_dup_swap_pages() - Increase swap count of swap entries of a folio.
>>   * @folio: folio with swap entries bounded.
>> - * @subpage: if not NULL, only increase the swap count of this subpage.
>> + * @page: the first page in the folio to increase the swap count for.
>> + * @nr_pages: the number of pages in the folio to increase the swap count for.
>>   *
>>   * Typically called when the folio is unmapped and have its swap entry to
>>   * take its place: Swap entries allocated to a folio has count == 0 and pinned
>> @@ -1756,23 +1757,26 @@ int folio_alloc_swap(struct folio *folio)
>>   * swap_put_entries_direct on its swap entry before this helper returns, or
>>   * the swap count may underflow.
>>   */
>> -int folio_dup_swap(struct folio *folio, struct page *subpage)
>> +int folio_dup_swap_pages(struct folio *folio, struct page *page,
>> +		unsigned long nr_pages)
>>  {
>>  	swp_entry_t entry = folio->swap;
>> -	unsigned long nr_pages = folio_nr_pages(folio);
>>
>>  	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
>>  	VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
>>
>> -	if (subpage) {
>> -		entry.val += folio_page_idx(folio, subpage);
>> -		nr_pages = 1;
>> -	}
>> +	entry.val += folio_page_idx(folio, page);
>>
>>  	return swap_dup_entries_cluster(swap_entry_to_info(entry),
>>  			swp_offset(entry), nr_pages);
>>  }
>>
>> +int folio_dup_swap(struct folio *folio)
>> +{
>> +	return folio_dup_swap_pages(folio, folio_page(folio, 0),
>> +			folio_nr_pages(folio));
>> +}
>
> Can you add simplistic kerneldoc for folio_dup_swap() as well, and
> mostly just link to folio_dup_swap_pages()?

Okay.
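Something minimal along these lines (wording is a first guess, happy to
adjust):

/**
 * folio_dup_swap() - Increase swap count of all swap entries of a folio.
 * @folio: folio with swap entries bounded.
 *
 * Convenience wrapper around folio_dup_swap_pages() that operates on
 * every page of the folio; see folio_dup_swap_pages() for details and
 * the locking requirements.
 */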