From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
 Mark Rutland, Ryan Roberts, Anshuman Khandual, Kees Cook
Date: Wed, 14 Feb 2024 13:28:46 +0100
Message-ID: <20240214122845.2033971-45-ardb+git@google.com>
Subject: [PATCH v8 00/43] arm64: Add support for LPA2 and WXN at stage 1

From: Ard Biesheuvel

This v8 covers the remaining changes that implement support for LPA2 and
WXN at stage 1, now that some of the prerequisites are in place.
v4: https://lore.kernel.org/r/20230912141549.278777-63-ardb@google.com/
v5: https://lore.kernel.org/r/20231124101840.944737-41-ardb@google.com/
v6: https://lore.kernel.org/r/20231129111555.3594833-43-ardb@google.com/
v7: https://lore.kernel.org/r/20240123145258.1462979-52-ardb%2Bgit%40google.com/

-%-

Changes in v8:
- rebase onto arm64/reorg-va-space and drop the patches that were merged
- bring back the KVM change to rely on vabits_actual to decide at which
  level a walk of the user space page tables should start

Changes in v7:
- rebase onto v6.8-rc1, which includes some patches from the previous
  revision as well as the KVM changes for LPA2

The first ~30 patches rework the early init code, reimplementing most of
the page table and relocation handling in C code. There are several
reasons why this is needed:

- we generally prefer C code over asm for these things, and the macros
  that currently exist in head.S for creating the kernel page tables
  are a good example why;

- we no longer need to create the kernel mapping in two passes, which
  means we can remove the logic that copies parts of the fixmap and the
  KASAN shadow from one set of page tables to the other; this is
  especially advantageous for KASAN with LPA2, which needs more
  elaborate shadow handling across multiple levels, since the KASAN
  region cannot be placed on exact pgd_t boundaries in that case;

- we can read the ID registers and parse command line overrides before
  creating the page tables, which simplifies the LPA2 case: flicking
  the global TCR_EL1.DS bit at a later stage would require elaborate
  repainting of all page table descriptors, some of them with the MMU
  disabled;

- we can use more elaborate logic to create the mappings, which means
  we can use more precise mappings for code and data sections even when
  using 2 MiB granularity, and this is a prerequisite for running with
  WXN.
As part of the ID map changes, we decouple the ID map size from the
kernel VA size, and switch to a 48-bit VA map for all configurations.

The next ~10 patches rework the existing LVA support as a CPU feature,
which simplifies some code and gets rid of the vabits_actual variable.

Then, LPA2 support is implemented in the same vein. This requires adding
support for 5 level paging as well, given that LPA2 introduces a new
paging level '-1' when using 4k pages.

Combined with the vmemmap changes at the start of the series, the
resulting LPA2/4k pages configuration will have the exact same VA space
layout as the ordinary 4k/4 levels configuration, and so LPA2 support
can reasonably be enabled by default, as the fallback is seamless on
non-LPA2 hardware.

In the 16k/LPA2 case, the fallback also reduces the number of paging
levels, resulting in a 47-bit VA space. This is based on the assumption
that hybrid LPA2/non-LPA2 16k pages kernels in production use would
prefer not to take the performance hit of 4 level paging to gain only a
single additional bit of VA space. (Note that generic Android kernels
use only 3 levels of paging today.) Bespoke 16k configurations can
still configure 48-bit virtual addressing as before.

Finally, enable support for running with the WXN control enabled. This
was previously part of a separate series, but given that the delta is
tiny, it is included here as well.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Mark Rutland
Cc: Ryan Roberts
Cc: Anshuman Khandual
Cc: Kees Cook

Ard Biesheuvel (43):
  arm64: kernel: Manage absolute relocations in code built under pi/
  arm64: kernel: Don't rely on objcopy to make code under pi/ __init
  arm64: head: move relocation handling to C code
  arm64: idreg-override: Move to early mini C runtime
  arm64: kernel: Remove early fdt remap code
  arm64: head: Clear BSS and the kernel page tables in one go
  arm64: Move feature overrides into the BSS section
  arm64: head: Run feature override detection before mapping the kernel
  arm64: head: move dynamic shadow call stack patching into early C
    runtime
  arm64: cpufeature: Add helper to test for CPU feature overrides
  arm64: kaslr: Use feature override instead of parsing the cmdline
    again
  arm64: idreg-override: Create a pseudo feature for rodata=off
  arm64: Add helpers to probe local CPU for PAC and BTI support
  arm64: head: allocate more pages for the kernel mapping
  arm64: head: move memstart_offset_seed handling to C code
  arm64: mm: Make kaslr_requires_kpti() a static inline
  arm64: mmu: Make __cpu_replace_ttbr1() out of line
  arm64: head: Move early kernel mapping routines into C code
  arm64: mm: Use 48-bit virtual addressing for the permanent ID map
  arm64: pgtable: Decouple PGDIR size macros from PGD/PUD/PMD levels
  arm64: kernel: Create initial ID map from C code
  arm64: mm: avoid fixmap for early swapper_pg_dir updates
  arm64: mm: omit redundant remap of kernel image
  arm64: Revert "mm: provide idmap pointer to cpu_replace_ttbr1()"
  arm64: mm: Handle LVA support as a CPU feature
  arm64: mm: Add feature override support for LVA
  arm64: Avoid #define'ing PTE_MAYBE_NG to 0x0 for asm use
  arm64: Add ESR decoding for exceptions involving translation level -1
  arm64: mm: Wire up TCR.DS bit to PTE shareability fields
  arm64: mm: Add LPA2 support to phys<->pte conversion routines
  arm64: mm: Add definitions to support 5 levels of paging
  arm64: mm: add LPA2 and 5 level paging support to G-to-nG conversion
  arm64: Enable LPA2 at boot if supported by the system
  arm64: mm: Add 5 level paging support to fixmap and swapper handling
  arm64: kasan: Reduce minimum shadow alignment and enable 5 level
    paging
  arm64: mm: Add support for folding PUDs at runtime
  arm64: ptdump: Disregard unaddressable VA space
  arm64: ptdump: Deal with translation levels folded at runtime
  arm64: kvm: avoid CONFIG_PGTABLE_LEVELS for runtime levels
  arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs
  arm64: defconfig: Enable LPA2 support
  mm: add arch hook to validate mmap() prot flags
  arm64: mm: add support for WXN memory translation attribute

 arch/arm64/Kconfig                          |  38 +-
 arch/arm64/configs/defconfig                |   1 -
 arch/arm64/include/asm/archrandom.h         |   2 -
 arch/arm64/include/asm/assembler.h          |  55 +--
 arch/arm64/include/asm/cpufeature.h         | 116 +++++
 arch/arm64/include/asm/esr.h                |  13 +-
 arch/arm64/include/asm/fixmap.h             |   2 +-
 arch/arm64/include/asm/kasan.h              |   2 -
 arch/arm64/include/asm/kernel-pgtable.h     | 103 ++---
 arch/arm64/include/asm/kvm_emulate.h        |  10 +-
 arch/arm64/include/asm/memory.h             |  17 +-
 arch/arm64/include/asm/mman.h               |  36 ++
 arch/arm64/include/asm/mmu.h                |  40 +-
 arch/arm64/include/asm/mmu_context.h        |  83 ++--
 arch/arm64/include/asm/pgalloc.h            |  53 ++-
 arch/arm64/include/asm/pgtable-hwdef.h      |  33 +-
 arch/arm64/include/asm/pgtable-prot.h       |  20 +-
 arch/arm64/include/asm/pgtable-types.h      |   6 +
 arch/arm64/include/asm/pgtable.h            | 219 ++++++++-
 arch/arm64/include/asm/scs.h                |  36 +-
 arch/arm64/include/asm/setup.h              |   3 -
 arch/arm64/include/asm/tlb.h                |   3 +
 arch/arm64/kernel/Makefile                  |  13 +-
 arch/arm64/kernel/cpufeature.c              | 111 +++--
 arch/arm64/kernel/head.S                    | 463 ++------------------
 arch/arm64/kernel/image-vars.h              |  35 +-
 arch/arm64/kernel/kaslr.c                   |   4 +-
 arch/arm64/kernel/module.c                  |   2 +-
 arch/arm64/kernel/pi/Makefile               |  27 +-
 arch/arm64/kernel/{ => pi}/idreg-override.c |  80 ++--
 arch/arm64/kernel/pi/kaslr_early.c          |  67 +--
 arch/arm64/kernel/pi/map_kernel.c           | 276 ++++++++++++
 arch/arm64/kernel/pi/map_range.c            | 105 +++++
 arch/arm64/kernel/{ => pi}/patch-scs.c      |  36 +-
 arch/arm64/kernel/pi/pi.h                   |  36 ++
 arch/arm64/kernel/pi/relacheck.c            | 130 ++++++
 arch/arm64/kernel/pi/relocate.c             |  64 +++
 arch/arm64/kernel/setup.c                   |  22 -
 arch/arm64/kernel/sleep.S                   |   3 -
 arch/arm64/kernel/vmlinux.lds.S             |  17 +-
 arch/arm64/kvm/mmu.c                        |  17 +-
 arch/arm64/mm/fault.c                       |  30 +-
 arch/arm64/mm/fixmap.c                      |  36 +-
 arch/arm64/mm/init.c                        |   2 +-
 arch/arm64/mm/kasan_init.c                  | 159 +++++--
 arch/arm64/mm/mmap.c                        |   4 +
 arch/arm64/mm/mmu.c                         | 255 ++++++-----
 arch/arm64/mm/pgd.c                         |  17 +-
 arch/arm64/mm/proc.S                        | 122 +++++-
 arch/arm64/mm/ptdump.c                      |  21 +-
 arch/arm64/tools/cpucaps                    |   1 +
 include/linux/mman.h                        |  15 +
 mm/mmap.c                                   |   3 +
 53 files changed, 1948 insertions(+), 1116 deletions(-)
 rename arch/arm64/kernel/{ => pi}/idreg-override.c (83%)
 create mode 100644 arch/arm64/kernel/pi/map_kernel.c
 create mode 100644 arch/arm64/kernel/pi/map_range.c
 rename arch/arm64/kernel/{ => pi}/patch-scs.c (89%)
 create mode 100644 arch/arm64/kernel/pi/pi.h
 create mode 100644 arch/arm64/kernel/pi/relacheck.c
 create mode 100644 arch/arm64/kernel/pi/relocate.c

-- 
2.43.0.687.g38aa6559b0-goog