From: Dev Jain
To: akpm@linux-foundation.org, axelrasmussen@google.com, yuanchu@google.com,
	david@kernel.org, hughd@google.com, chrisl@kernel.org, kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
	vbabka@kernel.org, rppt@kernel.org, surenb@google.com, mhocko@suse.com,
	riel@surriel.com, harry.yoo@oracle.com, jannh@google.com, pfalcato@suse.de,
	baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com, nphamcs@gmail.com,
	bhe@redhat.com, baohua@kernel.org, youngjun.park@lge.com, ziy@nvidia.com,
	kas@kernel.org, willy@infradead.org, yuzhao@google.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
	anshuman.khandual@arm.com, Dev Jain
Subject: [PATCH 4/9] mm/memory: Batch set uffd-wp markers during zapping
Date: Tue, 10 Mar 2026 13:00:08 +0530
Message-Id: <20260310073013.4069309-5-dev.jain@arm.com>
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>
References: <20260310073013.4069309-1-dev.jain@arm.com>

In preparation for the next patch, enable batch setting of uffd-wp ptes.

The code paths passing nr > 1 to zap_install_uffd_wp_if_needed() produce
that nr through either folio_pte_batch() or swap_pte_batch(), guaranteeing
that all ptes in the batch belong to the same VMA (and therefore to the
same type of VMA: anonymous or non-anonymous, wp-armed or not), and that
they are either all uffd-wp marked or all unmarked.

Note that we have to use set_pte_at() in a loop instead of set_ptes(),
since the latter cannot handle a present->non-present conversion for
nr_pages > 1.

Convert the documentation of install_uffd_wp_ptes_if_needed() to
kerneldoc format.

No functional change is intended.
Signed-off-by: Dev Jain
---
 include/linux/mm_inline.h | 37 +++++++++++++++++++++++--------------
 mm/memory.c               | 20 +-------------------
 mm/rmap.c                 |  2 +-
 3 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ad50688d89dba..d69b9abbdf2a7 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -560,21 +560,30 @@ static inline pte_marker copy_pte_marker(
 	return dstm;
 }
 
-/*
- * If this pte is wr-protected by uffd-wp in any form, arm the special pte to
- * replace a none pte. NOTE! This should only be called when *pte is already
+/**
+ * install_uffd_wp_ptes_if_needed - install uffd-wp markers on PTEs that map
+ * consecutive pages of the same large folio.
+ * @vma: The VMA the pages are mapped into.
+ * @addr: Address the first page of this batch is mapped at.
+ * @pte: Page table pointer for the first entry of this batch.
+ * @pteval: Old value of the entry pointed to by @pte.
+ * @nr: Number of consecutive entries in the batch.
+ *
+ * If the ptes were wr-protected by uffd-wp in any form, arm special ptes to
+ * replace none ptes. NOTE! This should only be called when *pte is already
  * cleared so we will never accidentally replace something valuable. Meanwhile
  * none pte also means we are not demoting the pte so tlb flushed is not needed.
  * E.g., when pte cleared the caller should have taken care of the tlb flush.
  *
- * Must be called with pgtable lock held so that no thread will see the none
- * pte, and if they see it, they'll fault and serialize at the pgtable lock.
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD
+ * and the same VMA.
  *
- * Returns true if an uffd-wp pte was installed, false otherwise.
+ * Returns true if uffd-wp ptes were installed, false otherwise.
  */
 static inline bool
-pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
-			      pte_t *pte, pte_t pteval)
+install_uffd_wp_ptes_if_needed(struct vm_area_struct *vma, unsigned long addr,
+			       pte_t *pte, pte_t pteval, unsigned int nr)
 {
 	bool arm_uffd_pte = false;
 
@@ -604,13 +613,13 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 	if (unlikely(pte_swp_uffd_wp_any(pteval)))
 		arm_uffd_pte = true;
 
-	if (unlikely(arm_uffd_pte)) {
-		set_pte_at(vma->vm_mm, addr, pte,
-			   make_pte_marker(PTE_MARKER_UFFD_WP));
-		return true;
-	}
+	if (likely(!arm_uffd_pte))
+		return false;
 
-	return false;
+	for (int i = 0; i < nr; ++i, ++pte, addr += PAGE_SIZE)
+		set_pte_at(vma->vm_mm, addr, pte, make_pte_marker(PTE_MARKER_UFFD_WP));
+
+	return true;
 }
 
 static inline bool vma_has_recency(const struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index 38062f8e11656..768646c0b3b6a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1594,29 +1594,11 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 		pte_t *pte, int nr, struct zap_details *details, pte_t pteval)
 {
-	bool was_installed = false;
-
-	if (!uffd_supports_wp_marker())
-		return false;
-
-	/* Zap on anonymous always means dropping everything */
-	if (vma_is_anonymous(vma))
-		return false;
-
 	if (zap_drop_markers(details))
 		return false;
 
-	for (;;) {
-		/* the PFN in the PTE is irrelevant. */
-		if (pte_install_uffd_wp_if_needed(vma, addr, pte, pteval))
-			was_installed = true;
-		if (--nr == 0)
-			break;
-		pte++;
-		addr += PAGE_SIZE;
-	}
+	return install_uffd_wp_ptes_if_needed(vma, addr, pte, pteval, nr);
-
-	return was_installed;
 }
 
 static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
diff --git a/mm/rmap.c b/mm/rmap.c
index a61978141ee3f..a7570cd037344 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2235,7 +2235,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			 * we may want to replace a none pte with a marker pte if
 			 * it's file-backed, so we don't lose the tracking info.
 			 */
-			pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
+			install_uffd_wp_ptes_if_needed(vma, address, pvmw.pte, pteval, 1);
 
 			/* Update high watermark before we lower rss */
 			update_hiwater_rss(mm);
-- 
2.34.1