From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
	by smtp.subspace.kernel.org (Postfix) with ESMTP id 02BFD1FF1C7
	for ; Tue, 10 Mar 2026 07:31:43 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=217.140.110.172
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1773127904; cv=none;
	b=LTeluI3zT32PmsKTNdxya+Mz9PAdMUhx2m3us4XCl+lKJZzUQ4XfMGCbCIJtNZ8OjRsyIqdYXGe6jiqh6sZAPgBjOZ04Kepd4UmUBeq53jKtILJFUFX4eHFFtcEyUoXz3GqJxC0veOW55p1fG0qJniTT95gc0x8nlnWr3CB75k4=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org;
	s=arc-20240116; t=1773127904; c=relaxed/simple;
	bh=0nqDKK6XW0P552LFJYysWThjlcoNIxNGwsQv3yeHGL4=;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version;
	b=kqt8G6+Y60ZbM2dPBhTjxrythVH2njQbkfDmwWxTB/YEQmsDaFby14+8MB3WNCvqUIH2iQS2O3goXXuCKGJ9Rm4FplM7FhUZxFLgTANXtCGj01vglAd8lh13u1DOTpg1xvLbUdTrP7bl6gGYHotVYIyaSzMunw5sincCtJthQQ0=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dmarc=pass (p=none dis=none) header.from=arm.com;
	spf=pass smtp.mailfrom=arm.com; arc=none smtp.client-ip=217.140.110.172
Authentication-Results: smtp.subspace.kernel.org;
	dmarc=pass (p=none dis=none) header.from=arm.com
Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=arm.com
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 646211BA8;
	Tue, 10 Mar 2026 00:31:36 -0700 (PDT)
Received: from a080796.blr.arm.com (a080796.arm.com [10.164.21.51])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id BC8C73F73B;
	Tue, 10 Mar 2026 00:31:33 -0700 (PDT)
From: Dev Jain
To: akpm@linux-foundation.org, axelrasmussen@google.com, yuanchu@google.com,
	david@kernel.org, hughd@google.com, chrisl@kernel.org,
	kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
	vbabka@kernel.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	riel@surriel.com, harry.yoo@oracle.com, jannh@google.com,
	pfalcato@suse.de, baolin.wang@linux.alibaba.com,
	shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
	baohua@kernel.org, youngjun.park@lge.com, ziy@nvidia.com,
	kas@kernel.org, willy@infradead.org, yuzhao@google.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	ryan.roberts@arm.com, anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH 6/9] mm/swapfile: Make folio_dup_swap batchable
Date: Tue, 10 Mar 2026 13:00:10 +0530
Message-Id: <20260310073013.4069309-7-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>
References: <20260310073013.4069309-1-dev.jain@arm.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Teach folio_dup_swap to handle a batch of consecutive pages. Note that
folio_dup_swap can already handle two special cases of this:
nr_pages == 1 and nr_pages == folio_nr_pages(folio). Generalize this to
any nr_pages.

Currently we have the not-so-nice convention of passing
subpage == NULL to operate on the entire folio, and subpage != NULL to
operate on only that subpage. Remove this indirection: always pass a
non-NULL subpage, along with the number of pages required.
Signed-off-by: Dev Jain
---
 mm/rmap.c     |  2 +-
 mm/shmem.c    |  2 +-
 mm/swap.h     |  5 +++--
 mm/swapfile.c | 12 +++++-------
 4 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index dd638429c963e..f6d5b187cf09b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2282,7 +2282,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto discard;
 		}
 
-		if (folio_dup_swap(folio, subpage) < 0) {
+		if (folio_dup_swap(folio, subpage, 1) < 0) {
 			set_pte_at(mm, address, pvmw.pte, pteval);
 			goto walk_abort;
 		}
diff --git a/mm/shmem.c b/mm/shmem.c
index 5e7dcf5bc5d3c..86ee34c9b40b3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1695,7 +1695,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
 			spin_unlock(&shmem_swaplist_lock);
 		}
 
-		folio_dup_swap(folio, NULL);
+		folio_dup_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
 		shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
 
 		BUG_ON(folio_mapped(folio));
diff --git a/mm/swap.h b/mm/swap.h
index a77016f2423b9..d9cb58ebbddd1 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -206,7 +206,7 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
  * folio_put_swap(): does the opposite thing of folio_dup_swap().
  */
 int folio_alloc_swap(struct folio *folio);
-int folio_dup_swap(struct folio *folio, struct page *subpage);
+int folio_dup_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
 void folio_put_swap(struct folio *folio, struct page *subpage);
 
 /* For internal use */
@@ -390,7 +390,8 @@ static inline int folio_alloc_swap(struct folio *folio)
 	return -EINVAL;
 }
 
-static inline int folio_dup_swap(struct folio *folio, struct page *page)
+static inline int folio_dup_swap(struct folio *folio, struct page *page,
+				 unsigned int nr_pages)
 {
 	return -EINVAL;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 915bc93964dbd..eaf61ae6c3817 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1738,7 +1738,8 @@ int folio_alloc_swap(struct folio *folio)
 /**
  * folio_dup_swap() - Increase swap count of swap entries of a folio.
  * @folio: folio with swap entries bounded.
- * @subpage: if not NULL, only increase the swap count of this subpage.
+ * @subpage: increase the swap count of @nr_pages consecutive swap
+ * entries, starting at this subpage's entry.
  *
  * Typically called when the folio is unmapped and have its swap entry to
  * take its place: Swap entries allocated to a folio has count == 0 and pinned
@@ -1752,18 +1753,15 @@ int folio_alloc_swap(struct folio *folio)
  * swap_put_entries_direct on its swap entry before this helper returns, or
  * the swap count may underflow.
  */
-int folio_dup_swap(struct folio *folio, struct page *subpage)
+int folio_dup_swap(struct folio *folio, struct page *subpage,
+		   unsigned int nr_pages)
 {
 	swp_entry_t entry = folio->swap;
-	unsigned long nr_pages = folio_nr_pages(folio);
 
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
 
-	if (subpage) {
-		entry.val += folio_page_idx(folio, subpage);
-		nr_pages = 1;
-	}
+	entry.val += folio_page_idx(folio, subpage);
 
 	return swap_dup_entries_cluster(swap_entry_to_info(entry),
 			swp_offset(entry), nr_pages);
-- 
2.34.1