From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dev Jain
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com,
	chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com, kasong@tencent.com,
	qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	riel@surriel.com, harry@kernel.org, jannh@google.com, pfalcato@suse.de,
	baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
	nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH v2 6/9] mm/swapfile: Add batched version of folio_dup_swap
Date: Fri, 10 Apr 2026 16:02:01 +0530
Message-Id: <20260410103204.120409-7-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>
References: <20260410103204.120409-1-dev.jain@arm.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add folio_dup_swap_pages() to handle a batch of consecutive pages. Note
that folio_dup_swap() can already handle a subset of this: nr_pages == 1
and nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.

Currently we have the not-so-nice logic of passing subpage == NULL if we
mean to exercise the logic on the entire folio, and subpage != NULL if
we want to exercise the logic on only that subpage.
Remove this indirection: the caller invokes folio_dup_swap_pages() if it
wants to operate on a range of pages in the folio (i.e. nr_pages may be
anything between 1 and folio_nr_pages()), and invokes folio_dup_swap()
if it wants to operate on the entire folio.

Signed-off-by: Dev Jain
---
 mm/rmap.c     |  2 +-
 mm/shmem.c    |  2 +-
 mm/swap.h     | 12 ++++++++++--
 mm/swapfile.c | 20 ++++++++++++--------
 4 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7a150edd96819..6412103fcd6cb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2311,7 +2311,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto finish_unmap;
 		}
 
-		if (folio_dup_swap(folio, subpage) < 0) {
+		if (folio_dup_swap_pages(folio, subpage, 1) < 0) {
 			set_pte_at(mm, address, pvmw.pte, pteval);
 			goto walk_abort;
 		}
diff --git a/mm/shmem.c b/mm/shmem.c
index 5aa43657886c3..3f9523c97b9ed 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1695,7 +1695,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
 		spin_unlock(&shmem_swaplist_lock);
 	}
 
-	folio_dup_swap(folio, NULL);
+	folio_dup_swap(folio);
 	shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
 
 	BUG_ON(folio_mapped(folio));
diff --git a/mm/swap.h b/mm/swap.h
index a77016f2423b9..3c25f914e908b 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -206,7 +206,9 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
  * folio_put_swap(): does the opposite thing of folio_dup_swap().
  */
 int folio_alloc_swap(struct folio *folio);
-int folio_dup_swap(struct folio *folio, struct page *subpage);
+int folio_dup_swap(struct folio *folio);
+int folio_dup_swap_pages(struct folio *folio, struct page *page,
+		unsigned long nr_pages);
 void folio_put_swap(struct folio *folio, struct page *subpage);
 
 /* For internal use */
@@ -390,7 +392,13 @@ static inline int folio_alloc_swap(struct folio *folio)
 	return -EINVAL;
 }
 
-static inline int folio_dup_swap(struct folio *folio, struct page *page)
+static inline int folio_dup_swap(struct folio *folio)
+{
+	return -EINVAL;
+}
+
+static inline int folio_dup_swap_pages(struct folio *folio, struct page *page,
+		unsigned long nr_pages)
 {
 	return -EINVAL;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index ff315b752afd3..22be05a0bb200 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1740,9 +1740,10 @@ int folio_alloc_swap(struct folio *folio)
 }
 
 /**
- * folio_dup_swap() - Increase swap count of swap entries of a folio.
+ * folio_dup_swap_pages() - Increase swap count of swap entries of a folio.
  * @folio: folio with swap entries bounded.
- * @subpage: if not NULL, only increase the swap count of this subpage.
+ * @page: the first page in the folio to increase the swap count for.
+ * @nr_pages: the number of pages in the folio to increase the swap count for.
  *
  * Typically called when the folio is unmapped and have its swap entry to
  * take its place: Swap entries allocated to a folio has count == 0 and pinned
@@ -1756,23 +1757,26 @@ int folio_alloc_swap(struct folio *folio)
  * swap_put_entries_direct on its swap entry before this helper returns, or
  * the swap count may underflow.
  */
-int folio_dup_swap(struct folio *folio, struct page *subpage)
+int folio_dup_swap_pages(struct folio *folio, struct page *page,
+		unsigned long nr_pages)
 {
 	swp_entry_t entry = folio->swap;
-	unsigned long nr_pages = folio_nr_pages(folio);
 
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
 
-	if (subpage) {
-		entry.val += folio_page_idx(folio, subpage);
-		nr_pages = 1;
-	}
+	entry.val += folio_page_idx(folio, page);
 
 	return swap_dup_entries_cluster(swap_entry_to_info(entry),
 					swp_offset(entry), nr_pages);
 }
 
+int folio_dup_swap(struct folio *folio)
+{
+	return folio_dup_swap_pages(folio, folio_page(folio, 0),
+			folio_nr_pages(folio));
+}
+
 /**
  * folio_put_swap() - Decrease swap count of swap entries of a folio.
  * @folio: folio with swap entries bounded, must be in swap cache and locked.
-- 
2.34.1
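
[Editor's illustration, not part of the patch.] The calling convention the
patch settles on — a batched helper taking (folio, first page, nr_pages),
plus a whole-folio wrapper built on top of it — can be sketched in userspace.
This is a toy model under stated assumptions: `toy_folio`, `swap_count`, and
all names below are simplified stand-ins invented for illustration, not the
real mm/ structures; only the wrapper-over-batched-helper shape mirrors the
patch.

```c
#include <assert.h>

/* Toy swap table: one reference count per swap slot. */
static unsigned char swap_count[64];

/* Minimal stand-in for a folio whose pages are backed by
 * nr_pages consecutive swap slots starting at swap_off. */
struct toy_folio {
	unsigned long swap_off;   /* first swap slot of the folio */
	unsigned long nr_pages;   /* analogous to folio_nr_pages() */
};

/* Batched helper, analogous to folio_dup_swap_pages(): bump the
 * count of nr_pages consecutive slots, starting at the slot that
 * backs page index page_idx within the folio. */
static int toy_folio_dup_swap_pages(struct toy_folio *folio,
				    unsigned long page_idx,
				    unsigned long nr_pages)
{
	unsigned long off = folio->swap_off + page_idx;

	for (unsigned long i = 0; i < nr_pages; i++)
		swap_count[off + i]++;
	return 0;
}

/* Whole-folio wrapper, analogous to the new folio_dup_swap():
 * just the batched helper applied to every page of the folio. */
static int toy_folio_dup_swap(struct toy_folio *folio)
{
	return toy_folio_dup_swap_pages(folio, 0, folio->nr_pages);
}
```

A single-page caller (like the try_to_unmap_one() hunk) passes nr_pages == 1
for one subpage; a whole-folio caller (like the shmem_writeout() hunk) uses
the wrapper, so the NULL-subpage special case disappears.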