From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, willy@infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	catalin.marinas@arm.com, will@kernel.org, Liam.Howlett@oracle.com,
	lorenzo.stoakes@oracle.com, vbabka@suse.cz, jannh@google.com,
	anshuman.khandual@arm.com, peterx@redhat.com, joey.gouly@arm.com,
	ioworker0@gmail.com, baohua@kernel.org, kevin.brodsky@arm.com,
	quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu,
	yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org,
	namit@vmware.com, hughd@google.com, yang@os.amperecomputing.com,
	ziy@nvidia.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 7/7] mm: Optimize mprotect() through PTE-batching
Date: Tue, 29 Apr 2025 10:53:36 +0530
Message-Id: <20250429052336.18912-8-dev.jain@arm.com>
In-Reply-To: <20250429052336.18912-1-dev.jain@arm.com>
References: <20250429052336.18912-1-dev.jain@arm.com>

The common pte_present case does not require the folio.
Elide the overhead of vm_normal_folio() for the small-folio case by making
an approximation: on arm64, pte_batch_hint() is conclusive; on other
arches, if the pfns mapped by the current and the next PTE are contiguous,
check whether a large folio is actually mapped, and only then take the
batch optimization. Reuse the folio from the prot_numa case when it is
available. Since modify_prot_start_ptes() gathers the access and dirty
bits, it lets us batch around pte_needs_flush() (on parisc, the
pte_needs_flush() definition includes the access bit).

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/mprotect.c | 49 +++++++++++++++++++++++++++++++++++--------------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index baff009fc981..f8382806611f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -129,7 +129,7 @@ static bool prot_numa_skip(struct vm_area_struct *vma, struct folio *folio,
 	return false;
 }
 
-static bool prot_numa_avoid_fault(struct vm_area_struct *vma,
+static struct folio *prot_numa_avoid_fault(struct vm_area_struct *vma,
 		unsigned long addr, pte_t *pte, pte_t oldpte, int target_node,
 		int max_nr, int *nr)
 {
@@ -139,25 +139,37 @@ static bool prot_numa_avoid_fault(struct vm_area_struct *vma,
 
 	/* Avoid TLB flush if possible */
 	if (pte_protnone(oldpte))
-		return true;
+		return NULL;
 
 	folio = vm_normal_folio(vma, addr, oldpte);
 	if (!folio)
-		return true;
+		return NULL;
 
 	ret = prot_numa_skip(vma, folio, target_node);
 	if (ret) {
 		if (folio_test_large(folio) && max_nr != 1)
 			*nr = folio_pte_batch(folio, addr, pte, oldpte,
 					      max_nr, flags, NULL, NULL, NULL);
-		return ret;
+		return NULL;
 	}
 
 	if (folio_use_access_time(folio))
 		folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
-	return false;
+	return folio;
 }
 
+static bool maybe_contiguous_pte_pfns(pte_t *ptep, pte_t pte)
+{
+	pte_t *next_ptep, next_pte;
+
+	if (pte_batch_hint(ptep, pte) != 1)
+		return true;
+
+	next_ptep = ptep + 1;
+	next_pte = ptep_get(next_ptep);
+
+	return unlikely(pte_pfn(next_pte) - pte_pfn(pte) == 1);
+}
 static long change_pte_range(struct mmu_gather *tlb,
 		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
@@ -188,19 +200,28 @@ static long change_pte_range(struct mmu_gather *tlb,
 		oldpte = ptep_get(pte);
 		if (pte_present(oldpte)) {
 			int max_nr = (end - addr) >> PAGE_SHIFT;
+			const fpb_t flags = FPB_IGNORE_DIRTY;
+			struct folio *folio = NULL;
 			pte_t ptent;
 
 			/*
 			 * Avoid trapping faults against the zero or KSM
 			 * pages. See similar comment in change_huge_pmd.
 			 */
-			if (prot_numa &&
-			    prot_numa_avoid_fault(vma, addr, pte,
-						  oldpte, target_node,
-						  max_nr, &nr))
+			if (prot_numa) {
+				folio = prot_numa_avoid_fault(vma, addr, pte,
+						oldpte, target_node, max_nr, &nr);
+				if (!folio)
 					continue;
+			}
 
-			oldpte = ptep_modify_prot_start(vma, addr, pte);
+			if (!folio && (max_nr != 1) && maybe_contiguous_pte_pfns(pte, oldpte)) {
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (folio && folio_test_large(folio))
+					nr = folio_pte_batch(folio, addr, pte,
+						oldpte, max_nr, flags, NULL, NULL, NULL);
+			}
+			oldpte = modify_prot_start_ptes(vma, addr, pte, nr);
 			ptent = pte_modify(oldpte, newprot);
 
 			if (uffd_wp)
@@ -223,13 +244,13 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 */
 			if ((cp_flags & MM_CP_TRY_CHANGE_WRITABLE) &&
 			    !pte_write(ptent) &&
-			    can_change_ptes_writable(vma, addr, ptent, folio, 1))
+			    can_change_ptes_writable(vma, addr, ptent, folio, nr))
 				ptent = pte_mkwrite(ptent, vma);
 
-			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr);
 			if (pte_needs_flush(oldpte, ptent))
-				tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
-			pages++;
+				tlb_flush_pte_range(tlb, addr, nr * PAGE_SIZE);
+			pages += nr;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 			pte_t newpte;
-- 
2.30.2
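
As context for reviewers, here is a minimal user-space sketch that
exercises the path this patch optimizes. It is not part of the patch;
the file name, the 32 MiB region size, and the use of MADV_HUGEPAGE are
illustrative assumptions. It faults in an anonymous mapping that THP can
back with large folios, then calls mprotect() over it, so that
change_pte_range() sees PTE-mapped large folios and can take the batched
path:

	/*
	 * test_mprotect_batch.c -- illustrative only, not part of this patch.
	 *
	 * Fault in a (hopefully) THP-backed anonymous mapping, then change
	 * its protections so change_pte_range() walks PTE-mapped large folios.
	 */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		const size_t len = 32UL << 20;	/* 32 MiB; arbitrary choice */
		char *buf;

		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Request large folios; harmless if THP is unavailable. */
		madvise(buf, len, MADV_HUGEPAGE);

		/* Write-fault every page so the PTEs are populated. */
		memset(buf, 1, len);

		/* This is the call whose PTE walk the patch batches. */
		if (mprotect(buf, len, PROT_READ)) {
			perror("mprotect");
			return 1;
		}

		munmap(buf, len);
		return 0;
	}

Timing the mprotect() call with and without the series applied is one way
to observe the effect; the batch path is only taken when large folios
actually back the range, so small-folio workloads should be unaffected.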