From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, axelrasmussen@google.com, yuanchu@google.com,
	david@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
	vbabka@kernel.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	riel@surriel.com, harry.yoo@oracle.com, jannh@google.com,
	pfalcato@suse.de, baolin.wang@linux.alibaba.com,
	shikemeng@huaweicloud.com, nphamcs@gmail.com, bhe@redhat.com,
	baohua@kernel.org, youngjun.park@lge.com, ziy@nvidia.com,
	kas@kernel.org, willy@infradead.org, yuzhao@google.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH 7/9] mm/swapfile: Make folio_put_swap batchable
Date: Tue, 10 Mar 2026 13:00:11 +0530
Message-Id: <20260310073013.4069309-8-dev.jain@arm.com>
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>
References: <20260310073013.4069309-1-dev.jain@arm.com>

Teach folio_put_swap() to handle a batch of consecutive pages. Note that
folio_put_swap() can already handle two special cases of this:
nr_pages == 1 and nr_pages == folio_nr_pages(folio). Generalize this to
any nr_pages.

Currently we have the not-so-nice convention of passing subpage == NULL
to exercise the logic on the entire folio, and subpage != NULL to
exercise it on only that subpage. Remove this indirection: always pass a
non-NULL subpage, together with the number of pages required.
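For illustration, the calling convention changes as follows (the calls
below mirror the conversions done in this patch):

	/* Old interface: the range was implied by subpage. */
	folio_put_swap(folio, NULL);		/* whole folio */
	folio_put_swap(folio, subpage);		/* exactly one page */

	/* New interface: the range is always explicit. */
	folio_put_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
	folio_put_swap(folio, subpage, 1);
	folio_put_swap(folio, subpage, nr_pages);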
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/memory.c   |  6 +++---
 mm/rmap.c     |  4 ++--
 mm/shmem.c    |  6 +++---
 mm/swap.h     |  5 +++--
 mm/swapfile.c | 13 +++++--------
 5 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 768646c0b3b6a..8249a9b7083ab 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5002,7 +5002,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (unlikely(folio != swapcache)) {
 		folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
 		folio_add_lru_vma(folio, vma);
-		folio_put_swap(swapcache, NULL);
+		folio_put_swap(swapcache, folio_page(swapcache, 0), folio_nr_pages(swapcache));
 	} else if (!folio_test_anon(folio)) {
 		/*
 		 * We currently only expect !anon folios that are fully
@@ -5011,12 +5011,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		VM_WARN_ON_ONCE_FOLIO(folio_nr_pages(folio) != nr_pages, folio);
 		VM_WARN_ON_ONCE_FOLIO(folio_mapped(folio), folio);
 		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
-		folio_put_swap(folio, NULL);
+		folio_put_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
 	} else {
 		VM_WARN_ON_ONCE(nr_pages != 1 && nr_pages != folio_nr_pages(folio));
 		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
					 rmap_flags);
-		folio_put_swap(folio, nr_pages == 1 ? page : NULL);
+		folio_put_swap(folio, page, nr_pages);
 	}
 
 	VM_BUG_ON(!folio_test_anon(folio) ||
diff --git a/mm/rmap.c b/mm/rmap.c
index f6d5b187cf09b..42f6b00cced01 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2293,7 +2293,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * so we'll not check/care.
 			 */
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
-				folio_put_swap(folio, subpage);
+				folio_put_swap(folio, subpage, 1);
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				goto walk_abort;
 			}
@@ -2301,7 +2301,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			/* See folio_try_share_anon_rmap(): clear PTE first. */
 			if (anon_exclusive &&
 			    folio_try_share_anon_rmap_pte(folio, subpage)) {
-				folio_put_swap(folio, subpage);
+				folio_put_swap(folio, subpage, 1);
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				goto walk_abort;
 			}
diff --git a/mm/shmem.c b/mm/shmem.c
index 86ee34c9b40b3..d9d216ea28ecb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1716,7 +1716,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
 	/* Swap entry might be erased by racing shmem_free_swap() */
 	if (!error) {
 		shmem_recalc_inode(inode, 0, -nr_pages);
-		folio_put_swap(folio, NULL);
+		folio_put_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
 	}
 
 	/*
@@ -2196,7 +2196,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 
 	nr_pages = folio_nr_pages(folio);
 	folio_wait_writeback(folio);
-	folio_put_swap(folio, NULL);
+	folio_put_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
 	swap_cache_del_folio(folio);
 	/*
 	 * Don't treat swapin error folio as alloced. Otherwise inode->i_blocks
@@ -2426,7 +2426,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 
 	if (sgp == SGP_WRITE)
 		folio_mark_accessed(folio);
-	folio_put_swap(folio, NULL);
+	folio_put_swap(folio, folio_page(folio, 0), folio_nr_pages(folio));
 	swap_cache_del_folio(folio);
 	folio_mark_dirty(folio);
 	put_swap_device(si);
diff --git a/mm/swap.h b/mm/swap.h
index d9cb58ebbddd1..73fd9faa67608 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -207,7 +207,7 @@ extern int swap_retry_table_alloc(swp_entry_t entry, gfp_t gfp);
  */
 int folio_alloc_swap(struct folio *folio);
 int folio_dup_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
-void folio_put_swap(struct folio *folio, struct page *subpage);
+void folio_put_swap(struct folio *folio, struct page *subpage, unsigned int nr_pages);
 
 /* For internal use */
 extern void __swap_cluster_free_entries(struct swap_info_struct *si,
@@ -396,7 +396,8 @@ static inline int folio_dup_swap(struct folio *folio, struct page *page,
 	return -EINVAL;
 }
 
-static inline void folio_put_swap(struct folio *folio, struct page *page)
+static inline void folio_put_swap(struct folio *folio, struct page *page,
+				  unsigned int nr_pages)
 {
 }
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index eaf61ae6c3817..c66aa6d15d479 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1770,25 +1770,22 @@ int folio_dup_swap(struct folio *folio, struct page *subpage,
 /**
  * folio_put_swap() - Decrease swap count of swap entries of a folio.
  * @folio: folio with swap entries bounded, must be in swap cache and locked.
- * @subpage: if not NULL, only decrease the swap count of this subpage.
+ * @subpage: first subpage whose swap count to decrease, @nr_pages in total.
  *
  * This won't free the swap slots even if swap count drops to zero, they are
  * still pinned by the swap cache. User may call folio_free_swap to free them.
 * Context: Caller must ensure the folio is locked and in the swap cache.
 */
-void folio_put_swap(struct folio *folio, struct page *subpage)
+void folio_put_swap(struct folio *folio, struct page *subpage,
+		    unsigned int nr_pages)
 {
 	swp_entry_t entry = folio->swap;
-	unsigned long nr_pages = folio_nr_pages(folio);
 	struct swap_info_struct *si = __swap_entry_to_info(entry);
 
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_swapcache(folio), folio);
 
-	if (subpage) {
-		entry.val += folio_page_idx(folio, subpage);
-		nr_pages = 1;
-	}
+	entry.val += folio_page_idx(folio, subpage);
 
 	swap_put_entries_cluster(si, swp_offset(entry), nr_pages, false);
 }
@@ -2334,7 +2331,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		new_pte = pte_mkuffd_wp(new_pte);
 setpte:
 	set_pte_at(vma->vm_mm, addr, pte, new_pte);
-	folio_put_swap(swapcache, folio_file_page(swapcache, swp_offset(entry)));
+	folio_put_swap(swapcache, folio_file_page(swapcache, swp_offset(entry)), 1);
 out:
 	if (pte)
 		pte_unmap_unlock(pte, ptl);
-- 
2.34.1
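
For illustration, with the batched interface the put range is derived
directly from @subpage (the folio and offsets here are hypothetical):
for a 16-page folio whose folio->swap sits at swap offset O,

	folio_put_swap(folio, folio_page(folio, 8), 4);

advances entry.val by folio_page_idx() == 8 and then calls
swap_put_entries_cluster(si, O + 8, 4, false), dropping the swap count
of offsets O+8..O+11 only, a partial batch that the old NULL/non-NULL
interface could not express.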