From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dev Jain
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com,
	chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com, kasong@tencent.com,
	qi.zheng@linux.dev, shakeel.butt@linux.dev, baohua@kernel.org,
	axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
	riel@surriel.com, harry@kernel.org, jannh@google.com, pfalcato@suse.de,
	baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
	nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH v2 8/9] mm/rmap: Add batched version of
 folio_try_share_anon_rmap_pte
Date: Fri, 10 Apr 2026 16:02:03 +0530
Message-Id: <20260410103204.120409-9-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>
References: <20260410103204.120409-1-dev.jain@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To enable batched unmapping of anonymous folios, we need to handle the
sharing of exclusive pages. Hence, a batched version of
folio_try_share_anon_rmap_pte is required.

Currently, the sole purpose of nr_pages in __folio_try_share_anon_rmap
is to do some rmap sanity checks. Add a helper to clear the
PageAnonExclusive bit on a batch of nr_pages. Note that
__folio_try_share_anon_rmap can receive nr_pages == HPAGE_PMD_NR from
the PMD path, but currently we only clear the bit on the head page.
Retain this behaviour by setting nr_pages = 1 in case the caller is
folio_try_share_anon_rmap_pmd. While at it, convert nr_pages to
unsigned long to future-proof against overflow in case P4D-huge
mappings etc. get supported down the road.

Signed-off-by: Dev Jain
---
 include/linux/mm.h   | 11 +++++++++++
 include/linux/rmap.h | 27 ++++++++++++++++++++-------
 2 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 633bbf9a184a6..2d20954da652a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -243,6 +243,17 @@ static inline unsigned long folio_page_idx(const struct folio *folio,
 	return page - &folio->page;
 }
 
+static __always_inline void folio_clear_pages_anon_exclusive(struct page *page,
+		unsigned long nr_pages)
+{
+	for (;;) {
+		ClearPageAnonExclusive(page);
+		if (--nr_pages == 0)
+			break;
+		++page;
+	}
+}
+
 static inline struct folio *lru_to_folio(struct list_head *head)
 {
 	return list_entry((head)->prev, struct folio, lru);
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 8dc0871e5f001..f3b3ee3955afc 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -706,15 +706,19 @@ static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
 }
 
 static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
-		struct page *page, int nr_pages, enum pgtable_level level)
+		struct page *page, unsigned long nr_pages, enum pgtable_level level)
 {
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 	VM_WARN_ON_FOLIO(!PageAnonExclusive(page), folio);
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
 
+	/* We only clear anon-exclusive from head page of PMD folio */
+	if (level == PGTABLE_LEVEL_PMD)
+		nr_pages = 1;
+
 	/* device private folios cannot get pinned via GUP. */
 	if (unlikely(folio_is_device_private(folio))) {
-		ClearPageAnonExclusive(page);
+		folio_clear_pages_anon_exclusive(page, nr_pages);
 		return 0;
 	}
 
@@ -766,7 +770,7 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
 	if (unlikely(folio_maybe_dma_pinned(folio)))
 		return -EBUSY;
 
-	ClearPageAnonExclusive(page);
+	folio_clear_pages_anon_exclusive(page, nr_pages);
 
 	/*
 	 * This is conceptually a smp_wmb() paired with the smp_rmb() in
@@ -778,11 +782,12 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
 }
 
 /**
- * folio_try_share_anon_rmap_pte - try marking an exclusive anonymous page
- * mapped by a PTE possibly shared to prepare
+ * folio_try_share_anon_rmap_ptes - try marking exclusive anonymous pages
+ * mapped by PTEs possibly shared to prepare
  * for KSM or temporary unmapping
  * @folio: The folio to share a mapping of
- * @page: The mapped exclusive page
+ * @page: The first mapped exclusive page of the batch in the folio
+ * @nr_pages: The number of pages to share in the folio (batch size)
  *
  * The caller needs to hold the page table lock and has to have the page table
  * entries cleared/invalidated.
@@ -797,11 +802,19 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
  *
  * Returns 0 if marking the mapped page possibly shared succeeded. Returns
  * -EBUSY otherwise.
+ *
+ * The caller needs to hold the page table lock.
  */
+static inline int folio_try_share_anon_rmap_ptes(struct folio *folio,
+		struct page *page, unsigned long nr_pages)
+{
+	return __folio_try_share_anon_rmap(folio, page, nr_pages, PGTABLE_LEVEL_PTE);
+}
+
 static inline int folio_try_share_anon_rmap_pte(struct folio *folio,
 		struct page *page)
 {
-	return __folio_try_share_anon_rmap(folio, page, 1, PGTABLE_LEVEL_PTE);
+	return folio_try_share_anon_rmap_ptes(folio, page, 1);
 }
 
 /**
-- 
2.34.1
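
P.S. for reviewers unfamiliar with the loop shape used by the new
folio_clear_pages_anon_exclusive() helper, here is a minimal userspace
model of just the iteration pattern (not kernel code; clear_flags_batch
and the flag array are hypothetical stand-ins for
ClearPageAnonExclusive() and struct page). It shows that the loop always
performs at least one clear and stops after exactly nr_pages clears, so
callers must pass nr_pages >= 1:

```c
/* Userspace model of the helper's loop: clear a per-page flag for
 * nr_pages consecutive "pages", starting at *flag.  Each iteration
 * clears one flag, then decrements the count and advances the pointer
 * only if more pages remain, mirroring the kernel helper exactly. */
void clear_flags_batch(unsigned char *flag, unsigned long nr_pages)
{
	for (;;) {
		*flag = 0;		/* models ClearPageAnonExclusive(page) */
		if (--nr_pages == 0)
			break;
		++flag;			/* models ++page */
	}
}
```

Note the do-at-least-once structure: nr_pages == 0 would underflow, but
every caller in the patch passes nr_pages >= 1 (the PMD path is forced
to 1 explicitly).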