From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, willy@infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, catalin.marinas@arm.com, will@kernel.org, Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz, jannh@google.com, anshuman.khandual@arm.com, peterx@redhat.com, joey.gouly@arm.com, ioworker0@gmail.com, baohua@kernel.org, kevin.brodsky@arm.com, quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu, yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org, hughd@google.com, yang@os.amperecomputing.com, ziy@nvidia.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH v4 0/4] Optimize mprotect() for large folios
Date: Sat, 28 Jun 2025 17:04:31 +0530
Message-Id: <20250628113435.46678-1-dev.jain@arm.com>

This patchset optimizes the mprotect() system call for large folios by PTE-batching. No issues were observed with mm-selftests; build tested on x86_64.
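To illustrate the idea (a rough conceptual sketch only, not the actual kernel code in this series; the helper names and signatures here are illustrative):

```c
/*
 * Illustrative sketch of PTE-batching in the mprotect() PTE walk.
 * Instead of one ptep_modify_prot_start()/commit() cycle per PTE,
 * detect how many consecutive PTEs map the same large folio and
 * process them as one batch.
 */
for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
	nr = 1;
	if (pte_present(*pte)) {
		folio = vm_normal_folio(vma, addr, *pte);
		if (folio && folio_test_large(folio))
			/* how many consecutive PTEs map this folio? */
			nr = folio_pte_batch(folio, addr, pte, max_nr);

		/* one start/commit cycle for the whole batch */
		oldpte = modify_prot_start_ptes(vma, addr, pte, nr);
		ptent = pte_modify(oldpte, newprot);
		modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr);
	}
}
```

On arm64 this lets the arch-specific batched hooks operate on whole contpte blocks rather than repeatedly unfolding/refolding them per PTE.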
We use the following test cases to measure performance, mprotect()'ing the mapped memory to read-only then read-write 40 times:

Test case 1: Mapping 1G of memory, touching it to get PMD-THPs, then pte-mapping those THPs
Test case 2: Mapping 1G of memory with 64K mTHPs
Test case 3: Mapping 1G of memory with 4K pages

Average execution time on arm64, Apple M3:

Before the patchset:
T1: 7.9 seconds
T2: 7.9 seconds
T3: 4.2 seconds

After the patchset:
T1: 2.1 seconds
T2: 2.2 seconds
T3: 4.3 seconds

Comparing T1/T2 against T3 before the patchset shows that we also remove the regression introduced by ptep_get() on a contpte block. For large folios we get an almost 74% performance improvement, the trade-off being a slight degradation in the small folio case.

Here is the test program:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define SIZE (1024*1024*1024)

unsigned long pmdsize = (1UL << 21);
unsigned long pagesize = (1UL << 12);

static void pte_map_thps(char *mem, size_t size)
{
	size_t offs;
	int ret = 0;

	/* PTE-map each THP by temporarily splitting the VMAs. */
	for (offs = 0; offs < size; offs += pmdsize) {
		ret |= madvise(mem + offs, pagesize, MADV_DONTFORK);
		ret |= madvise(mem + offs, pagesize, MADV_DOFORK);
	}

	if (ret) {
		fprintf(stderr, "ERROR: madvise() failed\n");
		exit(1);
	}
}

int main(int argc, char *argv[])
{
	char *p;

	p = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p != (void *)(1UL << 30)) {
		perror("mmap");
		return 1;
	}
	memset(p, 0, SIZE);
	if (madvise(p, SIZE, MADV_NOHUGEPAGE))
		perror("madvise");
	explicit_bzero(p, SIZE);
	pte_map_thps(p, SIZE);

	for (int loops = 0; loops < 40; loops++) {
		if (mprotect(p, SIZE, PROT_READ))
			perror("mprotect"), exit(1);
		if (mprotect(p, SIZE, PROT_READ | PROT_WRITE))
			perror("mprotect"), exit(1);
		explicit_bzero(p, SIZE);
	}

	return 0;
}

---
The patchset is rebased onto Saturday's mm-new.
v3->v4:
 - Refactor skipping logic into a new function, edit patch 1 subject to highlight it is only for the MM_CP_PROT_NUMA case (David H)
 - Refactor the optimization logic, add more documentation to the generic batched functions, do not add clear_flush_ptes, squash patches 4 and 5 (Ryan)

v2->v3:
 - Add comments for the new APIs (Ryan, Lorenzo)
 - Instead of refactoring, use a "skip_batch" label
 - Move arm64 patches to the end (Ryan)
 - In can_change_pte_writable(), check AnonExclusive page-by-page (David H)
 - Resolve implicit declaration; tested build on x86 (Lance Yang)

v1->v2:
 - Rebase onto mm-unstable (6ebffe676fcf: util_macros.h: make the header more resilient)
 - Abridge the anon-exclusive condition (Lance Yang)

Dev Jain (4):
  mm: Optimize mprotect() for MM_CP_PROT_NUMA by batch-skipping PTEs
  mm: Add batched versions of ptep_modify_prot_start/commit
  mm: Optimize mprotect() by PTE-batching
  arm64: Add batched versions of ptep_modify_prot_start/commit

 arch/arm64/include/asm/pgtable.h |  10 ++
 arch/arm64/mm/mmu.c              |  28 +++-
 include/linux/pgtable.h          |  83 +++++++++-
 mm/mprotect.c                    | 269 +++++++++++++++++++++++--------
 4 files changed, 315 insertions(+), 75 deletions(-)

-- 
2.30.2