From: Dev Jain
To: akpm@linux-foundation.org, david@kernel.org, ljs@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: Dev Jain, riel@surriel.com, liam@infradead.org, vbabka@kernel.org, harry@kernel.org, jannh@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, rppt@kernel.org, surenb@google.com, mhocko@suse.com, baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com, pfalcato@suse.de, ryan.roberts@arm.com, anshuman.khandual@arm.com
Subject: [PATCH v3 6/9] mm/swapfile: Add batched version of folio_dup_swap
Date: Wed, 6 May 2026 15:15:01 +0530
Message-Id: <20260506094504.2588857-7-dev.jain@arm.com>
In-Reply-To: <20260506094504.2588857-1-dev.jain@arm.com>
References: <20260506094504.2588857-1-dev.jain@arm.com>

Add folio_dup_swap_pages() to handle a batch of consecutive pages. Note
that folio_dup_swap() can already handle a subset of this: nr_pages == 1
and nr_pages == folio_nr_pages(folio). Generalize this to any nr_pages.

Currently we have a not-so-nice convention of passing subpage == NULL
when we mean to exercise the logic on the entire folio, and
subpage != NULL when we want to exercise the logic on only that subpage.
Remove this indirection: the caller invokes folio_dup_swap_pages() if it
wants to operate on a range of pages in the folio (i.e. nr_pages may be
anywhere between 1 and folio_nr_pages()), and invokes folio_dup_swap()
if it wants to operate on the entire folio.

Signed-off-by: Dev Jain
---
 mm/rmap.c     |  2 +-
 mm/shmem.c    |  2 +-
 mm/swap.h     | 12 ++++++++++--
 mm/swapfile.c | 20 ++++++++++++--------
 4 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 25813e3605991..352ba77d90f67 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2314,7 +2314,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			goto finish_unmap;
 		}
 
-		if (folio_dup_swap(folio, subpage) < 0) {
+		if (folio_dup_swap_pages(folio, subpage, 1) < 0) {
 			set_pte_at(mm, address, pvmw.pte, pteval);
 			goto walk_abort;
 		}
diff --git a/mm/shmem.c b/mm/shmem.c
index bab3529af23c5..5e4f521399847 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1698,7 +1698,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
 		spin_unlock(&shmem_swaplist_lock);
 	}
 
-	folio_dup_swap(folio, NULL);
+	folio_dup_swap(folio);
 	shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
 
 	BUG_ON(folio_mapped(folio));
diff --git a/mm/swap.h b/mm/swap.h
index a77016f2423b9..3c25f914e908b 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -206,7 +206,9 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
  * folio_put_swap(): does the opposite thing of folio_dup_swap().
  */
 int folio_alloc_swap(struct folio *folio);
-int folio_dup_swap(struct folio *folio, struct page *subpage);
+int folio_dup_swap(struct folio *folio);
+int folio_dup_swap_pages(struct folio *folio, struct page *page,
+		unsigned long nr_pages);
 void folio_put_swap(struct folio *folio, struct page *subpage);
 
 /* For internal use */
@@ -390,7 +392,13 @@ static inline int folio_alloc_swap(struct folio *folio)
 	return -EINVAL;
 }
 
-static inline int folio_dup_swap(struct folio *folio, struct page *page)
+static inline int folio_dup_swap(struct folio *folio)
+{
+	return -EINVAL;
+}
+
+static inline int folio_dup_swap_pages(struct folio *folio, struct page *page,
+		unsigned long nr_pages)
 {
 	return -EINVAL;
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index c7e173b93e11d..28daf92839e77 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1740,9 +1740,10 @@ int folio_alloc_swap(struct folio *folio)
 }
 
 /**
- * folio_dup_swap() - Increase swap count of swap entries of a folio.
+ * folio_dup_swap_pages() - Increase swap count of swap entries of a folio.
  * @folio: folio with swap entries bounded.
- * @subpage: if not NULL, only increase the swap count of this subpage.
+ * @page: the first page in the folio to increase the swap count for.
+ * @nr_pages: the number of pages in the folio to increase the swap count for.
  *
  * Typically called when the folio is unmapped and have its swap entry to
  * take its place: Swap entries allocated to a folio has count == 0 and pinned
@@ -1756,23 +1757,26 @@ int folio_alloc_swap(struct folio *folio)
  * swap_put_entries_direct on its swap entry before this helper returns, or
  * the swap count may underflow.
  */
-int folio_dup_swap(struct folio *folio, struct page *subpage)
+int folio_dup_swap_pages(struct folio *folio, struct page *page,
+		unsigned long nr_pages)
 {
 	swp_entry_t entry = folio->swap;
-	unsigned long nr_pages = folio_nr_pages(folio);
 
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
 
-	if (subpage) {
-		entry.val += folio_page_idx(folio, subpage);
-		nr_pages = 1;
-	}
+	entry.val += folio_page_idx(folio, page);
 
 	return swap_dup_entries_cluster(swap_entry_to_info(entry),
 			swp_offset(entry), nr_pages);
 }
 
+int folio_dup_swap(struct folio *folio)
+{
+	return folio_dup_swap_pages(folio, folio_page(folio, 0),
+			folio_nr_pages(folio));
+}
+
 /**
  * folio_put_swap() - Decrease swap count of swap entries of a folio.
  * @folio: folio with swap entries bounded, must be in swap cache and locked.
-- 
2.34.1
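
[Note: the following is a standalone userspace sketch, not kernel code. It only models the relationship the patch establishes between the two helpers: folio_dup_swap() becomes a thin wrapper that calls the batched folio_dup_swap_pages() over the whole folio. The struct and per-entry counter below are simplified stand-ins invented for illustration; the real helpers operate on swap cluster state via swap_dup_entries_cluster().]

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_NR_PAGES 4

/* Simplified stand-in for a folio whose pages have swap entries;
 * swap_count[] models the per-entry swap count. */
struct folio_sketch {
	unsigned long nr_pages;
	int swap_count[SKETCH_NR_PAGES];
};

/* Batched helper: raise the swap count of nr_pages consecutive
 * entries starting at page index "first". Mirrors the new
 * folio_dup_swap_pages() shape (first page + count). */
static int folio_dup_swap_pages_sketch(struct folio_sketch *folio,
				       unsigned long first,
				       unsigned long nr_pages)
{
	for (unsigned long i = 0; i < nr_pages; i++)
		folio->swap_count[first + i]++;
	return 0;
}

/* Whole-folio helper is now just the batched helper applied to
 * every page, as in the patch. */
static int folio_dup_swap_sketch(struct folio_sketch *folio)
{
	return folio_dup_swap_pages_sketch(folio, 0, folio->nr_pages);
}
```

Call sites then pick the variant matching their scope, as try_to_unmap_one() does with a single page (nr_pages == 1) and shmem_writeout() does with the whole folio.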