From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, willy@infradead.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	catalin.marinas@arm.com, will@kernel.org, Liam.Howlett@oracle.com,
	lorenzo.stoakes@oracle.com, vbabka@suse.cz, jannh@google.com,
	anshuman.khandual@arm.com, peterx@redhat.com, joey.gouly@arm.com,
	ioworker0@gmail.com, baohua@kernel.org, kevin.brodsky@arm.com,
	quic_zhenhuah@quicinc.com, christophe.leroy@csgroup.eu,
	yangyicong@hisilicon.com, linux-arm-kernel@lists.infradead.org,
	hughd@google.com, yang@os.amperecomputing.com, ziy@nvidia.com,
	Dev Jain <dev.jain@arm.com>
Subject: [PATCH v5 0/7] Optimize mprotect() for large folios
Date: Fri, 18 Jul 2025 14:32:37 +0530
Message-Id: <20250718090244.21092-1-dev.jain@arm.com>

Use folio_pte_batch() to optimize change_pte_range(). On arm64, if the
ptes are painted with the contig bit, then ptep_get() will iterate
through all 16 entries to collect a/d bits. Hence this optimization
will result in a 16x reduction in the number of ptep_get() calls. Next,
ptep_modify_prot_start() will eventually call contpte_try_unfold() on
every contig block, thus flushing the TLB for the complete large folio
range. Instead, use get_and_clear_full_ptes() so as to elide TLBIs on
each contig block, and only do them on the starting and ending contig
block (a simplified sketch of the batched loop follows the numbers
below).

For split folios there will be no pte batching; the batch size returned
by folio_pte_batch() will be 1. For pagetable-split folios the ptes
will still point to the same large folio; for arm64, this results in
the optimization described above, and for other arches we expect a
minor improvement due to a reduction in the number of function calls.

mm-selftests pass on arm64. My x86 VM already has some failing tests;
no new tests fail as a result of this patchset.

We use the following test cases to measure performance, mprotect()'ing
the mapped memory to read-only then read-write 40 times:

Test case 1: Mapping 1G of memory, touching it to get PMD-THPs, then
             pte-mapping those THPs
Test case 2: Mapping 1G of memory with 64K mTHPs
Test case 3: Mapping 1G of memory with 4K pages

Average execution time on arm64, Apple M3:

Before the patchset:
T1: 2.1 seconds   T2: 2 seconds    T3: 1 second

After the patchset:
T1: 0.65 seconds  T2: 0.7 seconds  T3: 1.1 seconds

Comparing T1/T2 with T3 before the patchset, we also remove the
regression introduced by ptep_get() on a contpte block. For large
folios we get an almost 74% performance improvement, albeit with the
trade-off of a slight degradation in the small folio case.
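To make the idea concrete, here is a heavily simplified, illustrative
sketch of what the batched part of change_pte_range() conceptually does.
The helper names modify_prot_start_ptes()/modify_prot_commit_ptes() and
the exact folio_pte_batch() arguments are stand-ins for the interfaces
added by patches 3 and 6; this is not the literal patch:

	/*
	 * Sketch only: assumes change_pte_range() context (vma, addr, end,
	 * pte, newprot); NUMA-hinting, writability and error handling are
	 * omitted.
	 */
	while (addr < end) {
		pte_t oldpte = ptep_get(pte);
		int max_nr = (end - addr) >> PAGE_SHIFT;
		int nr = 1;

		if (pte_present(oldpte)) {
			struct folio *folio = vm_normal_folio(vma, addr, oldpte);
			pte_t ptent;

			/* Batch only consecutive PTEs mapping one large folio. */
			if (folio && folio_test_large(folio) && max_nr > 1)
				nr = folio_pte_batch(folio, pte, oldpte, max_nr);

			/*
			 * One start/commit pair now covers nr PTEs: on arm64
			 * this avoids unfolding (and TLB-flushing) every
			 * contpte block inside the batch.
			 */
			oldpte = modify_prot_start_ptes(vma, addr, pte, nr);
			ptent = pte_modify(oldpte, newprot);
			modify_prot_commit_ptes(vma, addr, pte, oldpte, ptent, nr);
		}

		addr += nr * PAGE_SIZE;
		pte += nr;
	}

The details the sketch leaves out (MM_CP_PROT_NUMA skipping, the
per-page writability checks, and propagating a/d bits across the batch)
are what the rest of the series deals with.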
For x86:

Before the patchset:
T1: 3.75 seconds  T2: 3.7 seconds  T3: 3.85 seconds

After the patchset:
T1: 3.7 seconds   T2: 3.7 seconds  T3: 3.9 seconds

So there is a minor improvement due to the reduction in the number of
function calls, and a slight degradation in the small folio case due to
the overhead of vm_normal_folio() + folio_test_large().

Here is the test program:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define SIZE (1024*1024*1024)

unsigned long pmdsize = (1UL << 21);
unsigned long pagesize = (1UL << 12);

static void pte_map_thps(char *mem, size_t size)
{
	size_t offs;
	int ret = 0;

	/* PTE-map each THP by temporarily splitting the VMAs. */
	for (offs = 0; offs < size; offs += pmdsize) {
		ret |= madvise(mem + offs, pagesize, MADV_DONTFORK);
		ret |= madvise(mem + offs, pagesize, MADV_DOFORK);
	}

	if (ret) {
		fprintf(stderr, "ERROR: madvise() failed\n");
		exit(1);
	}
}

int main(int argc, char *argv[])
{
	char *p;

	p = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p != (char *)(1UL << 30)) {
		perror("mmap");
		return 1;
	}
	memset(p, 0, SIZE);

	if (madvise(p, SIZE, MADV_NOHUGEPAGE))
		perror("madvise");
	explicit_bzero(p, SIZE);
	pte_map_thps(p, SIZE);

	for (int loops = 0; loops < 40; loops++) {
		if (mprotect(p, SIZE, PROT_READ))
			perror("mprotect"), exit(1);
		if (mprotect(p, SIZE, PROT_READ|PROT_WRITE))
			perror("mprotect"), exit(1);
		explicit_bzero(p, SIZE);
	}

	return 0;
}

---
v4->v5:
 - Add patch 4
 - Add patch 1 (Lorenzo)
 - For patch 2, instead of using the nr_ptes returned from prot_numa_skip()
   as a dummy for whether to skip or not, make that function return a
   boolean, and then use folio_pte_batch() to determine how much to skip
 - Split can_change_pte_writable() (Lorenzo)
 - Implement patch 6 in a better way

v3->v4:
 - Refactor the skipping logic into a new function, edit the patch 1 subject
   to highlight it is only for the MM_CP_PROT_NUMA case (David H)
 - Refactor the optimization logic, add more documentation to the generic
   batched functions, do not add clear_flush_ptes, squash patches 4 and 5
   (Ryan)

v2->v3:
 - Add comments for the new APIs (Ryan, Lorenzo)
 - Instead of refactoring, use a "skip_batch" label
 - Move the arm64 patches to the end (Ryan)
 - In can_change_pte_writable(), check AnonExclusive page-by-page (David H)
 - Resolve an implicit declaration; tested build on x86 (Lance Yang)

v1->v2:
 - Rebase onto mm-unstable (6ebffe676fcf: util_macros.h: make the header
   more resilient)
 - Abridge the anon-exclusive condition (Lance Yang)

Dev Jain (7):
  mm: Refactor MM_CP_PROT_NUMA skipping case into new function
  mm: Optimize mprotect() for MM_CP_PROT_NUMA by batch-skipping PTEs
  mm: Add batched versions of ptep_modify_prot_start/commit
  mm: Introduce FPB_RESPECT_WRITE for PTE batching infrastructure
  mm: Split can_change_pte_writable() into private and shared parts
  mm: Optimize mprotect() by PTE batching
  arm64: Add batched versions of ptep_modify_prot_start/commit

 arch/arm64/include/asm/pgtable.h |  10 ++
 arch/arm64/mm/mmu.c              |  28 ++-
 include/linux/pgtable.h          |  84 ++++++++-
 mm/internal.h                    |  11 +-
 mm/mprotect.c                    | 295 ++++++++++++++++++++++++-------
 5 files changed, 352 insertions(+), 76 deletions(-)

-- 
2.30.2