linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling
@ 2024-12-12  8:18 Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled Ard Biesheuvel
                   ` (7 more replies)
  0 siblings, 8 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret

From: Ard Biesheuvel <ardb@kernel.org>

This series addresses a number of buglets related to how we handle the
size of the physical address space when building LPA2 capable kernels:

- reject 52-bit physical addresses in the mapping routines when LPA2 is
  configured but not available at runtime
- ensure that TCR.IPS is not set to 52 bits if LPA2 is not supported
- ensure that TCR_EL2.PS and DS match the host, regardless of whether
  LPA2 is available at stage 2
- don't rely on kvm_get_parange() and invalid physical addresses as
  control flags in the pKVM page donation APIs

Finally, the configurable 48-bit physical address space limit is dropped
entirely, as it doesn't buy us a lot now that all the PARange and {I}PS
handling is done at runtime.

Changes since v2:
- add a definition of MAX_POSSIBLE_PHYSMEM_BITS to fix a build error in
  zsmalloc (#1)
- rename 'owner_update' to 'annotation' (#4)

Changes since v1:
- rebase onto v6.13-rc1
- add Anshuman's ack to patch #1
- incorporate Anshuman's feedback on patches #1 and #2
- tweak owner_update logic in patch #4

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Quentin Perret <qperret@google.com>

Ard Biesheuvel (6):
  arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled
  arm64/mm: Override PARange for !LPA2 and use it consistently
  arm64/kvm: Configure HYP TCR.PS/DS based on host stage1
  arm64/kvm: Avoid invalid physical addresses to signal owner updates
  arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN
  arm64/mm: Drop configurable 48-bit physical address space limit

 arch/arm64/Kconfig                     | 37 ++------------------
 arch/arm64/include/asm/assembler.h     | 14 +++-----
 arch/arm64/include/asm/cpufeature.h    |  3 +-
 arch/arm64/include/asm/kvm_pgtable.h   |  3 +-
 arch/arm64/include/asm/pgtable-hwdef.h | 12 +------
 arch/arm64/include/asm/pgtable-prot.h  |  7 ++++
 arch/arm64/include/asm/pgtable.h       | 11 +-----
 arch/arm64/include/asm/sparsemem.h     |  5 ++-
 arch/arm64/include/asm/sysreg.h        |  6 ----
 arch/arm64/kernel/cpufeature.c         |  2 +-
 arch/arm64/kernel/pi/idreg-override.c  |  9 +++++
 arch/arm64/kernel/pi/map_kernel.c      |  6 ++++
 arch/arm64/kvm/arm.c                   |  8 ++---
 arch/arm64/kvm/hyp/pgtable.c           | 33 ++++++-----------
 arch/arm64/mm/init.c                   |  7 +++-
 arch/arm64/mm/pgd.c                    |  9 ++---
 arch/arm64/mm/proc.S                   |  2 --
 scripts/gdb/linux/constants.py.in      |  1 -
 tools/arch/arm64/include/asm/sysreg.h  |  6 ----
 19 files changed, 64 insertions(+), 117 deletions(-)

-- 
2.47.1.613.gc27f4b7a9f-goog



* [PATCH v3 1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
@ 2024-12-12  8:18 ` Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 2/6] arm64/mm: Override PARange for !LPA2 and use it consistently Ard Biesheuvel
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret, stable

From: Ard Biesheuvel <ardb@kernel.org>

Currently, LPA2 kernel support implies support for up to 52 bits of
physical addressing, and this is reflected in global definitions such as
PHYS_MASK_SHIFT and MAX_PHYSMEM_BITS.

This is potentially problematic, given that LPA2 hardware support is
modeled as a CPU feature which can be overridden. With LPA2 hardware
support turned off, attempting to map physical regions with address bits
[51:48] set (which may exist on LPA2 capable systems booting with
arm64.nolva) will result in corrupted mappings with a truncated output
address and bogus shareability attributes.

This means that the accepted physical address range in the mapping
routines should be at most 48 bits wide when LPA2 support is configured
but not enabled at runtime.
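
As a minimal sketch of the net effect (hypothetical helper, not taken
from this patch), a mapping routine can now reject such addresses with
a plain mask test, since PHYS_MASK covers only 48 bits when LPA2 is
configured but disabled at runtime:

    /* hypothetical sketch, not part of this patch */
    static bool pa_is_mappable(phys_addr_t phys)
    {
            return (phys & ~PHYS_MASK) == 0;
    }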

Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs")
Cc: <stable@vger.kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/pgtable-hwdef.h | 6 ------
 arch/arm64/include/asm/pgtable-prot.h  | 7 +++++++
 arch/arm64/include/asm/sparsemem.h     | 5 ++++-
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index c78a988cca93..a9136cc551cc 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -222,12 +222,6 @@
  */
 #define S1_TABLE_AP		(_AT(pmdval_t, 3) << 61)
 
-/*
- * Highest possible physical address supported.
- */
-#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
-#define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
-
 #define TTBR_CNP_BIT		(UL(1) << 0)
 
 /*
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 9f9cf13bbd95..a95f1f77bb39 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -81,6 +81,7 @@ extern unsigned long prot_ns_shared;
 #define lpa2_is_enabled()	false
 #define PTE_MAYBE_SHARED	PTE_SHARED
 #define PMD_MAYBE_SHARED	PMD_SECT_S
+#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
 #else
 static inline bool __pure lpa2_is_enabled(void)
 {
@@ -89,8 +90,14 @@ static inline bool __pure lpa2_is_enabled(void)
 
 #define PTE_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PTE_SHARED)
 #define PMD_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PMD_SECT_S)
+#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
 #endif
 
+/*
+ * Highest possible physical address supported.
+ */
+#define PHYS_MASK		((UL(1) << PHYS_MASK_SHIFT) - 1)
+
 /*
  * If we have userspace only BTI we don't want to mark kernel pages
  * guarded even if the system does support BTI.
diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 8a8acc220371..84783efdc9d1 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -5,7 +5,10 @@
 #ifndef __ASM_SPARSEMEM_H
 #define __ASM_SPARSEMEM_H
 
-#define MAX_PHYSMEM_BITS	CONFIG_ARM64_PA_BITS
+#include <asm/pgtable-prot.h>
+
+#define MAX_PHYSMEM_BITS		PHYS_MASK_SHIFT
+#define MAX_POSSIBLE_PHYSMEM_BITS	(52)
 
 /*
  * Section size must be at least 512MB for 64K base
-- 
2.47.1.613.gc27f4b7a9f-goog



* [PATCH v3 2/6] arm64/mm: Override PARange for !LPA2 and use it consistently
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled Ard Biesheuvel
@ 2024-12-12  8:18 ` Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 3/6] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1 Ard Biesheuvel
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret, stable

From: Ard Biesheuvel <ardb@kernel.org>

When FEAT_LPA{,2} are not implemented, the ID_AA64MMFR0_EL1.PARange and
TCR.IPS values corresponding with 52-bit physical addressing are
reserved.

Setting the TCR.IPS field to 0b110 (52-bit physical addressing) has side
effects, such as changing how the TTBRn_ELx.BADDR fields are
interpreted, so it is important that disabling FEAT_LPA2 (by overriding
the ID_AA64MMFR0.TGran fields) also presents a PARange field that is
consistent with that.

So limit the field to 48 bits unless LPA2 is enabled, and update
existing references to use the override consistently.
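
As an illustrative sketch (hypothetical helper; the field accessor and
constants are the kernel's existing ones), the capping amounts to:

    /* hypothetical sketch: clamp the advertised PARange at 48 bits */
    static u64 cap_parange_to_48(u64 mmfr0)
    {
            u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
                                    ID_AA64MMFR0_EL1_PARANGE_SHIFT);

            parange = min(parange, (u64)ID_AA64MMFR0_EL1_PARANGE_48);
            mmfr0 &= ~(0xfUL << ID_AA64MMFR0_EL1_PARANGE_SHIFT);
            return mmfr0 | (parange << ID_AA64MMFR0_EL1_PARANGE_SHIFT);
    }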

Fixes: 352b0395b505 ("arm64: Enable 52-bit virtual addressing for 4k and 16k granule configs")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/assembler.h    | 5 +++++
 arch/arm64/kernel/cpufeature.c        | 2 +-
 arch/arm64/kernel/pi/idreg-override.c | 9 +++++++++
 arch/arm64/kernel/pi/map_kernel.c     | 6 ++++++
 arch/arm64/mm/init.c                  | 7 ++++++-
 5 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 3d8d534a7a77..ad63457a05c5 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -343,6 +343,11 @@ alternative_cb_end
 	// Narrow PARange to fit the PS field in TCR_ELx
 	ubfx	\tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
 	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
+#ifdef CONFIG_ARM64_LPA2
+alternative_if_not ARM64_HAS_VA52
+	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
+alternative_else_nop_endif
+#endif
 	cmp	\tmp0, \tmp1
 	csel	\tmp0, \tmp1, \tmp0, hi
 	bfi	\tcr, \tmp0, \pos, #3
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6ce71f444ed8..f8cb8a6ab98a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3478,7 +3478,7 @@ static void verify_hyp_capabilities(void)
 		return;
 
 	safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
-	mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
 
 	/* Verify VMID bits */
diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
index 22159251eb3a..c6b185b885f7 100644
--- a/arch/arm64/kernel/pi/idreg-override.c
+++ b/arch/arm64/kernel/pi/idreg-override.c
@@ -83,6 +83,15 @@ static bool __init mmfr2_varange_filter(u64 val)
 		id_aa64mmfr0_override.val |=
 			(ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
 		id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
+
+		/*
+		 * Override PARange to 48 bits - the override will just be
+		 * ignored if the actual PARange is smaller, but this is
+		 * unlikely to be the case for LPA2 capable silicon.
+		 */
+		id_aa64mmfr0_override.val |=
+			ID_AA64MMFR0_EL1_PARANGE_48 << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
+		id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
 	}
 #endif
 	return true;
diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
index f374a3e5a5fe..e57b043f324b 100644
--- a/arch/arm64/kernel/pi/map_kernel.c
+++ b/arch/arm64/kernel/pi/map_kernel.c
@@ -136,6 +136,12 @@ static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
 {
 	u64 sctlr = read_sysreg(sctlr_el1);
 	u64 tcr = read_sysreg(tcr_el1) | TCR_DS;
+	u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
+	u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
+							   ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+
+	tcr &= ~TCR_IPS_MASK;
+	tcr |= parange << TCR_IPS_SHIFT;
 
 	asm("	msr	sctlr_el1, %0		;"
 	    "	isb				;"
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d21f67d67cf5..2b2289d55eaa 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -280,7 +280,12 @@ void __init arm64_memblock_init(void)
 
 	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
 		extern u16 memstart_offset_seed;
-		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+
+		/*
+		 * Use the sanitised version of id_aa64mmfr0_el1 so that linear
+		 * map randomization can be enabled by shrinking the IPA space.
+		 */
+		u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
 		int parange = cpuid_feature_extract_unsigned_field(
 					mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
 		s64 range = linear_region_size -
-- 
2.47.1.613.gc27f4b7a9f-goog



* [PATCH v3 3/6] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 2/6] arm64/mm: Override PARange for !LPA2 and use it consistently Ard Biesheuvel
@ 2024-12-12  8:18 ` Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates Ard Biesheuvel
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret, stable

From: Ard Biesheuvel <ardb@kernel.org>

When the host stage1 is configured for LPA2, the value currently being
programmed into TCR_EL2.T0SZ may be invalid unless LPA2 is configured
at HYP as well.  This means kvm_lpa2_is_enabled() is not the right
condition to test when setting TCR_EL2.DS, as it will return false if
LPA2 is only available for stage 1 but not for stage 2.

Similarly, programming TCR_EL2.PS based on an IPA range that is limited
by the lack of stage-2 LPA2 support could result in problems.

So use lpa2_is_enabled() instead, and set the PS field according to the
host's IPS, which is capped at 48 bits if LPA2 support is absent or
disabled. Whether or not we can make meaningful use of such a
configuration is a different question.
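
In simplified form (a sketch of the change below, not a verbatim
excerpt), the resulting TCR_EL2 setup amounts to:

    unsigned long tcr = read_sysreg(tcr_el1);
    unsigned long ips = FIELD_GET(TCR_IPS_MASK, tcr);  /* host IPS */

    tcr &= ~TCR_EL2_PS_MASK;
    tcr |= FIELD_PREP(TCR_EL2_PS_MASK, ips);  /* mirror the host */
    if (lpa2_is_enabled())                    /* host stage-1 LPA2 */
            tcr |= TCR_EL2_DS;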

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kvm/arm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a102c3aebdbc..7b2735ad32e9 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1990,8 +1990,7 @@ static int kvm_init_vector_slots(void)
 static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 {
 	struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
-	u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
-	unsigned long tcr;
+	unsigned long tcr, ips;
 
 	/*
 	 * Calculate the raw per-cpu offset without a translation from the
@@ -2005,6 +2004,7 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 	params->mair_el2 = read_sysreg(mair_el1);
 
 	tcr = read_sysreg(tcr_el1);
+	ips = FIELD_GET(TCR_IPS_MASK, tcr);
 	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
 		tcr |= TCR_EPD1_MASK;
 	} else {
@@ -2014,8 +2014,8 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
 	tcr &= ~TCR_T0SZ_MASK;
 	tcr |= TCR_T0SZ(hyp_va_bits);
 	tcr &= ~TCR_EL2_PS_MASK;
-	tcr |= FIELD_PREP(TCR_EL2_PS_MASK, kvm_get_parange(mmfr0));
-	if (kvm_lpa2_is_enabled())
+	tcr |= FIELD_PREP(TCR_EL2_PS_MASK, ips);
+	if (lpa2_is_enabled())
 		tcr |= TCR_EL2_DS;
 	params->tcr_el2 = tcr;
 
-- 
2.47.1.613.gc27f4b7a9f-goog



* [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2024-12-12  8:18 ` [PATCH v3 3/6] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1 Ard Biesheuvel
@ 2024-12-12  8:18 ` Ard Biesheuvel
  2024-12-12 11:33   ` Quentin Perret
  2024-12-12  8:18 ` [PATCH v3 5/6] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN Ard Biesheuvel
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret

From: Ard Biesheuvel <ardb@kernel.org>

The pKVM stage-2 mapping code relies on an invalid physical address to
signal to the internal API that only the annotations of descriptors
should be updated. These annotations are stored in the high bits of
invalid descriptors covering memory that has been donated to protected
guests, and which is therefore unmapped from the host stage-2 page
tables.

Given that these invalid PAs are never stored into the descriptors, it
is better to rely on an explicit flag, to clarify the API and to avoid
confusion regarding whether or not the output address of a descriptor
can ever be invalid to begin with (which is not the case with LPA2).

That removes a dependency on the logic that reasons about the maximum PA
range, which differs on LPA2 capable CPUs based on whether LPA2 is
enabled or not, and will be further clarified in subsequent patches.
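
To illustrate the hazard being avoided (sketch only, not kernel code;
walk_offset stands in for ctx->addr - ctx->start): offsetting an
all-ones sentinel PA wraps it back into the valid range, silently
turning an ownership annotation into a real mapping.

    u64 phys = -1ULL;     /* KVM_PHYS_INVALID-style sentinel      */
    phys += walk_offset;  /* wraps to walk_offset - 1, which is   */
                          /* < BIT(parange), i.e. "valid" again   */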

Cc: Quentin Perret <qperret@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/kvm/hyp/pgtable.c | 33 ++++++--------------
 1 file changed, 10 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..ed600126161a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -35,14 +35,6 @@ static bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx)
 	return unlikely(ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO);
 }
 
-static bool kvm_phys_is_valid(u64 phys)
-{
-	u64 parange_max = kvm_get_parange_max();
-	u8 shift = id_aa64mmfr0_parange_to_phys_shift(parange_max);
-
-	return phys < BIT(shift);
-}
-
 static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx, u64 phys)
 {
 	u64 granule = kvm_granule_size(ctx->level);
@@ -53,7 +45,7 @@ static bool kvm_block_mapping_supported(const struct kvm_pgtable_visit_ctx *ctx,
 	if (granule > (ctx->end - ctx->addr))
 		return false;
 
-	if (kvm_phys_is_valid(phys) && !IS_ALIGNED(phys, granule))
+	if (!IS_ALIGNED(phys, granule))
 		return false;
 
 	return IS_ALIGNED(ctx->addr, granule);
@@ -587,6 +579,9 @@ struct stage2_map_data {
 
 	/* Force mappings to page granularity */
 	bool				force_pte;
+
+	/* Walk should update owner_id only */
+	bool				annotation;
 };
 
 u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
@@ -885,18 +880,7 @@ static u64 stage2_map_walker_phys_addr(const struct kvm_pgtable_visit_ctx *ctx,
 {
 	u64 phys = data->phys;
 
-	/*
-	 * Stage-2 walks to update ownership data are communicated to the map
-	 * walker using an invalid PA. Avoid offsetting an already invalid PA,
-	 * which could overflow and make the address valid again.
-	 */
-	if (!kvm_phys_is_valid(phys))
-		return phys;
-
-	/*
-	 * Otherwise, work out the correct PA based on how far the walk has
-	 * gotten.
-	 */
+	/* Work out the correct PA based on how far the walk has gotten */
 	return phys + (ctx->addr - ctx->start);
 }
 
@@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
 	if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
 		return false;
 
+	if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
+		return true;
+
 	return kvm_block_mapping_supported(ctx, phys);
 }
 
@@ -923,7 +910,7 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
 	if (!stage2_leaf_mapping_allowed(ctx, data))
 		return -E2BIG;
 
-	if (kvm_phys_is_valid(phys))
+	if (!data->annotation)
 		new = kvm_init_valid_leaf_pte(phys, data->attr, ctx->level);
 	else
 		new = kvm_init_invalid_leaf_owner(data->owner_id);
@@ -1085,11 +1072,11 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
 {
 	int ret;
 	struct stage2_map_data map_data = {
-		.phys		= KVM_PHYS_INVALID,
 		.mmu		= pgt->mmu,
 		.memcache	= mc,
 		.owner_id	= owner_id,
 		.force_pte	= true,
+		.annotation	= true,
 	};
 	struct kvm_pgtable_walker walker = {
 		.cb		= stage2_map_walker,
-- 
2.47.1.613.gc27f4b7a9f-goog



* [PATCH v3 5/6] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2024-12-12  8:18 ` [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates Ard Biesheuvel
@ 2024-12-12  8:18 ` Ard Biesheuvel
  2024-12-12  8:18 ` [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit Ard Biesheuvel
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret

From: Ard Biesheuvel <ardb@kernel.org>

There are a couple of instances of Kconfig constraints where PAN must be
enabled too if TTBR0 sw PAN is enabled, primarily to avoid dealing with
the modified TTBR0_EL1 sysreg format that is used when 52-bit physical
addressing and/or CnP are enabled (support for either implies support
for hardware PAN as well, which will supersede PAN emulation if both are
available).

Let's simplify this, and always enable ARM64_PAN when enabling TTBR0 sw
PAN. This decouples the PAN configuration from the VA size selection,
permitting us to simplify the latter in subsequent patches. (Note that
PAN and TTBR0 sw PAN can still be disabled after this patch, but not
independently).

To avoid a convoluted circular Kconfig dependency involving KCSAN, make
ARM64_MTE select ARM64_PAN too, instead of depending on it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/Kconfig | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 100570a048c5..c1ca21adddc1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1379,7 +1379,6 @@ config ARM64_VA_BITS_48
 
 config ARM64_VA_BITS_52
 	bool "52-bit"
-	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
 	  requested via a hint to mmap(). The kernel will also use 52-bit
@@ -1431,7 +1430,6 @@ config ARM64_PA_BITS_48
 config ARM64_PA_BITS_52
 	bool "52-bit"
 	depends on ARM64_64K_PAGES || ARM64_VA_BITS_52
-	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
 	help
 	  Enable support for a 52-bit physical address space, introduced as
 	  part of the ARMv8.2-LPA extension.
@@ -1681,6 +1679,7 @@ config RODATA_FULL_DEFAULT_ENABLED
 config ARM64_SW_TTBR0_PAN
 	bool "Emulate Privileged Access Never using TTBR0_EL1 switching"
 	depends on !KCSAN
+	select ARM64_PAN
 	help
 	  Enabling this option prevents the kernel from accessing
 	  user-space memory directly by pointing TTBR0_EL1 to a reserved
@@ -1937,7 +1936,6 @@ config ARM64_RAS_EXTN
 config ARM64_CNP
 	bool "Enable support for Common Not Private (CNP) translations"
 	default y
-	depends on ARM64_PAN || !ARM64_SW_TTBR0_PAN
 	help
 	  Common Not Private (CNP) allows translation table entries to
 	  be shared between different PEs in the same inner shareable
@@ -2132,7 +2130,7 @@ config ARM64_MTE
 	depends on AS_HAS_ARMV8_5
 	depends on AS_HAS_LSE_ATOMICS
 	# Required for tag checking in the uaccess routines
-	depends on ARM64_PAN
+	select ARM64_PAN
 	select ARCH_HAS_SUBPAGE_FAULTS
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select ARCH_USES_PG_ARCH_2
-- 
2.47.1.613.gc27f4b7a9f-goog



* [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2024-12-12  8:18 ` [PATCH v3 5/6] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN Ard Biesheuvel
@ 2024-12-12  8:18 ` Ard Biesheuvel
  2024-12-20 23:39   ` Klara Modin
                     ` (2 more replies)
  2024-12-19 17:11 ` [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Marc Zyngier
  2024-12-19 19:47 ` Will Deacon
  7 siblings, 3 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12  8:18 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret

From: Ard Biesheuvel <ardb@kernel.org>

Currently, the maximum supported physical address space can be
configured as either 48 bits or 52 bits. The only remaining difference
between these in practice is that the former omits the masking and
shifting required to construct TTBR and PTE values, which carry bits #48
and higher disjoint from the rest of the physical address.

The overhead of performing these additional calculations is negligible,
so there is little reason to retain support for two different
configurations; we can simply support whatever the hardware supports.
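
As a worked example of that masking and shifting (4k pages with LPA2;
illustration only), PA bits [51:50] are folded into PTE bits [9:8] by
the PTE_ADDR_HIGH_SHIFT (42) shift:

    /*
     * phys                    = 0x0004000000200000  (bit 50 | bit 21)
     * phys >> 42              = 0x0000000000000100  (bit 50 -> bit 8)
     * (phys | phys >> 42)
     *    & GENMASK_ULL(49, 8) = 0x0000000000200100  (PTE address field)
     */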

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/Kconfig                     | 31 +-------------------
 arch/arm64/include/asm/assembler.h     | 13 ++------
 arch/arm64/include/asm/cpufeature.h    |  3 +-
 arch/arm64/include/asm/kvm_pgtable.h   |  3 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  6 +---
 arch/arm64/include/asm/pgtable-prot.h  |  4 +--
 arch/arm64/include/asm/pgtable.h       | 11 +------
 arch/arm64/include/asm/sysreg.h        |  6 ----
 arch/arm64/mm/pgd.c                    |  9 +++---
 arch/arm64/mm/proc.S                   |  2 --
 scripts/gdb/linux/constants.py.in      |  1 -
 tools/arch/arm64/include/asm/sysreg.h  |  6 ----
 12 files changed, 14 insertions(+), 81 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c1ca21adddc1..7ebd0ba32a32 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1416,38 +1416,9 @@ config ARM64_VA_BITS
 	default 48 if ARM64_VA_BITS_48
 	default 52 if ARM64_VA_BITS_52
 
-choice
-	prompt "Physical address space size"
-	default ARM64_PA_BITS_48
-	help
-	  Choose the maximum physical address range that the kernel will
-	  support.
-
-config ARM64_PA_BITS_48
-	bool "48-bit"
-	depends on ARM64_64K_PAGES || !ARM64_VA_BITS_52
-
-config ARM64_PA_BITS_52
-	bool "52-bit"
-	depends on ARM64_64K_PAGES || ARM64_VA_BITS_52
-	help
-	  Enable support for a 52-bit physical address space, introduced as
-	  part of the ARMv8.2-LPA extension.
-
-	  With this enabled, the kernel will also continue to work on CPUs that
-	  do not support ARMv8.2-LPA, but with some added memory overhead (and
-	  minor performance overhead).
-
-endchoice
-
-config ARM64_PA_BITS
-	int
-	default 48 if ARM64_PA_BITS_48
-	default 52 if ARM64_PA_BITS_52
-
 config ARM64_LPA2
 	def_bool y
-	depends on ARM64_PA_BITS_52 && !ARM64_64K_PAGES
+	depends on !ARM64_64K_PAGES
 
 choice
 	prompt "Endianness"
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index ad63457a05c5..01a1e3c16283 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -342,14 +342,13 @@ alternative_cb_end
 	mrs	\tmp0, ID_AA64MMFR0_EL1
 	// Narrow PARange to fit the PS field in TCR_ELx
 	ubfx	\tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
-	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
 #ifdef CONFIG_ARM64_LPA2
 alternative_if_not ARM64_HAS_VA52
 	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
-alternative_else_nop_endif
-#endif
 	cmp	\tmp0, \tmp1
 	csel	\tmp0, \tmp1, \tmp0, hi
+alternative_else_nop_endif
+#endif
 	bfi	\tcr, \tmp0, \pos, #3
 	.endm
 
@@ -599,21 +598,13 @@ alternative_endif
  * 	ttbr:	returns the TTBR value
  */
 	.macro	phys_to_ttbr, ttbr, phys
-#ifdef CONFIG_ARM64_PA_BITS_52
 	orr	\ttbr, \phys, \phys, lsr #46
 	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
-#else
-	mov	\ttbr, \phys
-#endif
 	.endm
 
 	.macro	phys_to_pte, pte, phys
-#ifdef CONFIG_ARM64_PA_BITS_52
 	orr	\pte, \phys, \phys, lsr #PTE_ADDR_HIGH_SHIFT
 	and	\pte, \pte, #PHYS_TO_PTE_ADDR_MASK
-#else
-	mov	\pte, \phys
-#endif
 	.endm
 
 /*
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b64e49bd9d10..ed327358e734 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -885,9 +885,8 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
 	 * However, by the "D10.1.4 Principles of the ID scheme
 	 * for fields in ID registers", ARM DDI 0487C.a, any new
 	 * value is guaranteed to be higher than what we know already.
-	 * As a safe limit, we return the limit supported by the kernel.
 	 */
-	default: return CONFIG_ARM64_PA_BITS;
+	default: return 52;
 	}
 }
 
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index aab04097b505..525aef178cb4 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -30,8 +30,7 @@
 
 static inline u64 kvm_get_parange_max(void)
 {
-	if (kvm_lpa2_is_enabled() ||
-	   (IS_ENABLED(CONFIG_ARM64_PA_BITS_52) && PAGE_SHIFT == 16))
+	if (kvm_lpa2_is_enabled() || PAGE_SHIFT == 16)
 		return ID_AA64MMFR0_EL1_PARANGE_52;
 	else
 		return ID_AA64MMFR0_EL1_PARANGE_48;
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index a9136cc551cc..9b34180042b2 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -176,7 +176,6 @@
 #define PTE_SWBITS_MASK		_AT(pteval_t, (BIT(63) | GENMASK(58, 55)))
 
 #define PTE_ADDR_LOW		(((_AT(pteval_t, 1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
-#ifdef CONFIG_ARM64_PA_BITS_52
 #ifdef CONFIG_ARM64_64K_PAGES
 #define PTE_ADDR_HIGH		(_AT(pteval_t, 0xf) << 12)
 #define PTE_ADDR_HIGH_SHIFT	36
@@ -186,7 +185,6 @@
 #define PTE_ADDR_HIGH_SHIFT	42
 #define PHYS_TO_PTE_ADDR_MASK	GENMASK_ULL(49, 8)
 #endif
-#endif
 
 /*
  * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
@@ -327,12 +325,10 @@
 /*
  * TTBR.
  */
-#ifdef CONFIG_ARM64_PA_BITS_52
 /*
- * TTBR_ELx[1] is RES0 in this configuration.
+ * TTBR_ELx[1] is RES0 when using 52-bit physical addressing
  */
 #define TTBR_BADDR_MASK_52	GENMASK_ULL(47, 2)
-#endif
 
 #ifdef CONFIG_ARM64_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index a95f1f77bb39..b73acf25341f 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -81,7 +81,7 @@ extern unsigned long prot_ns_shared;
 #define lpa2_is_enabled()	false
 #define PTE_MAYBE_SHARED	PTE_SHARED
 #define PMD_MAYBE_SHARED	PMD_SECT_S
-#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
+#define PHYS_MASK_SHIFT		(52)
 #else
 static inline bool __pure lpa2_is_enabled(void)
 {
@@ -90,7 +90,7 @@ static inline bool __pure lpa2_is_enabled(void)
 
 #define PTE_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PTE_SHARED)
 #define PMD_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PMD_SECT_S)
-#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
+#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? 52 : 48)
 #endif
 
 /*
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 6986345b537a..ec8124d66b9c 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -69,10 +69,9 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
 	pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))
 
 /*
- * Macros to convert between a physical address and its placement in a
+ * Helpers to convert between a physical address and its placement in a
  * page table entry, taking care of 52-bit addresses.
  */
-#ifdef CONFIG_ARM64_PA_BITS_52
 static inline phys_addr_t __pte_to_phys(pte_t pte)
 {
 	pte_val(pte) &= ~PTE_MAYBE_SHARED;
@@ -83,10 +82,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
 {
 	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK;
 }
-#else
-#define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_LOW)
-#define __phys_to_pte_val(phys)	(phys)
-#endif
 
 #define pte_pfn(pte)		(__pte_to_phys(pte) >> PAGE_SHIFT)
 #define pfn_pte(pfn,prot)	\
@@ -1495,11 +1490,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
 
-#ifdef CONFIG_ARM64_PA_BITS_52
 #define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
-#else
-#define phys_to_ttbr(addr)	(addr)
-#endif
 
 /*
  * On arm64 without hardware Access Flag, copying from user will fail because
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b8303a83c0bf..f902893ec903 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -916,12 +916,6 @@
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2		0x3
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX		0x7
 
-#ifdef CONFIG_ARM64_PA_BITS_52
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_52
-#else
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_48
-#endif
-
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
 #define ID_AA64MMFR0_EL1_TGRAN_LPA2		ID_AA64MMFR0_EL1_TGRAN4_52_BIT
diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
index 0c501cabc238..8722ab6d4b1c 100644
--- a/arch/arm64/mm/pgd.c
+++ b/arch/arm64/mm/pgd.c
@@ -48,20 +48,21 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 void __init pgtable_cache_init(void)
 {
+	unsigned int pgd_size = PGD_SIZE;
+
 	if (pgdir_is_page_size())
 		return;
 
-#ifdef CONFIG_ARM64_PA_BITS_52
 	/*
 	 * With 52-bit physical addresses, the architecture requires the
 	 * top-level table to be aligned to at least 64 bytes.
 	 */
-	BUILD_BUG_ON(PGD_SIZE < 64);
-#endif
+	if (PHYS_MASK_SHIFT >= 52)
+		pgd_size = max(pgd_size, 64);
 
 	/*
 	 * Naturally aligned pgds required by the architecture.
 	 */
-	pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_SIZE,
+	pgd_cache = kmem_cache_create("pgd_cache", pgd_size, pgd_size,
 				      SLAB_PANIC, NULL);
 }
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index b8edc5765441..51ed0e9d0a0d 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -197,10 +197,8 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
 
 	.macro	pte_to_phys, phys, pte
 	and	\phys, \pte, #PTE_ADDR_LOW
-#ifdef CONFIG_ARM64_PA_BITS_52
 	and	\pte, \pte, #PTE_ADDR_HIGH
 	orr	\phys, \phys, \pte, lsl #PTE_ADDR_HIGH_SHIFT
-#endif
 	.endm
 
 	.macro	kpti_mk_tbl_ng, type, num_entries
diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
index fd6bd69c5096..05034c0b8fd7 100644
--- a/scripts/gdb/linux/constants.py.in
+++ b/scripts/gdb/linux/constants.py.in
@@ -141,7 +141,6 @@ LX_CONFIG(CONFIG_ARM64_4K_PAGES)
 LX_CONFIG(CONFIG_ARM64_16K_PAGES)
 LX_CONFIG(CONFIG_ARM64_64K_PAGES)
 if IS_BUILTIN(CONFIG_ARM64):
-    LX_VALUE(CONFIG_ARM64_PA_BITS)
     LX_VALUE(CONFIG_ARM64_VA_BITS)
     LX_VALUE(CONFIG_PAGE_SHIFT)
     LX_VALUE(CONFIG_ARCH_FORCE_MAX_ORDER)
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index cd8420e8c3ad..daeecb1a5366 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -574,12 +574,6 @@
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN		0x2
 #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX		0x7
 
-#ifdef CONFIG_ARM64_PA_BITS_52
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_52
-#else
-#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_48
-#endif
-
 #if defined(CONFIG_ARM64_4K_PAGES)
 #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
 #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN
-- 
2.47.1.613.gc27f4b7a9f-goog



* Re: [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
  2024-12-12  8:18 ` [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates Ard Biesheuvel
@ 2024-12-12 11:33   ` Quentin Perret
  2024-12-12 11:44     ` Ard Biesheuvel
  0 siblings, 1 reply; 18+ messages in thread
From: Quentin Perret @ 2024-12-12 11:33 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux-kernel, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook

On Thursday 12 Dec 2024 at 09:18:46 (+0100), Ard Biesheuvel wrote:
> @@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
>  	if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
>  		return false;
>  
> +	if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
> +		return true;
> +

I don't think it's a problem, but what's the rationale for checking
ctx->level here? The data->force_pte logic should already do this for us
and be somewhat orthogonal to data->annotation, no?

Either way, the patch looks good to me

  Reviewed-by: Quentin Perret <qperret@google.com>

Cheers,
Quentin

>  	return kvm_block_mapping_supported(ctx, phys);
>  }


* Re: [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
  2024-12-12 11:33   ` Quentin Perret
@ 2024-12-12 11:44     ` Ard Biesheuvel
  2024-12-12 12:27       ` Quentin Perret
  0 siblings, 1 reply; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-12 11:44 UTC (permalink / raw)
  To: Quentin Perret
  Cc: Ard Biesheuvel, linux-arm-kernel, linux-kernel, Catalin Marinas,
	Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook

On Thu, 12 Dec 2024 at 12:33, Quentin Perret <qperret@google.com> wrote:
>
> On Thursday 12 Dec 2024 at 09:18:46 (+0100), Ard Biesheuvel wrote:
> > @@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
> >       if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
> >               return false;
> >
> > +     if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
> > +             return true;
> > +
>
> I don't think it's a problem, but what's the rationale for checking
> ctx->level here? The data->force_pte logic should already do this for us
> and be somewhat orthogonal to data->annotation, no?
>

So you are saying this could be

> > +     if (data->annotation)
> > +             return true;

right? That hides the fact that we expect data->annotation to imply
data->force_pte, but other than that, it should work the same, yes.

> Either way, the patch looks good to me
>
>   Reviewed-by: Quentin Perret <qperret@google.com>
>

Thanks!


* Re: [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
  2024-12-12 11:44     ` Ard Biesheuvel
@ 2024-12-12 12:27       ` Quentin Perret
  0 siblings, 0 replies; 18+ messages in thread
From: Quentin Perret @ 2024-12-12 12:27 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Ard Biesheuvel, linux-arm-kernel, linux-kernel, Catalin Marinas,
	Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook

On Thursday 12 Dec 2024 at 12:44:38 (+0100), Ard Biesheuvel wrote:
> On Thu, 12 Dec 2024 at 12:33, Quentin Perret <qperret@google.com> wrote:
> >
> > On Thursday 12 Dec 2024 at 09:18:46 (+0100), Ard Biesheuvel wrote:
> > > @@ -908,6 +892,9 @@ static bool stage2_leaf_mapping_allowed(const struct kvm_pgtable_visit_ctx *ctx,
> > >       if (data->force_pte && ctx->level < KVM_PGTABLE_LAST_LEVEL)
> > >               return false;
> > >
> > > +     if (data->annotation && ctx->level == KVM_PGTABLE_LAST_LEVEL)
> > > +             return true;
> > > +
> >
> > I don't think it's a problem, but what's the rationale for checking
> > ctx->level here? The data->force_pte logic should already do this for us
> > and be somewhat orthogonal to data->annotation, no?
> >
> 
> So you are saying this could be
> 
> > > +     if (data->annotation)
> > > +             return true;
> 
> right?

Yep, exactly.

> That hides the fact that we expect data->annotation to imply
> data->force_pte, but other than that, it should work the same, yes.

Eventually we'll want to make the two orthogonal to each other (e.g. to
annotate blocks when donating huge pages to protected guests), but
that'll require more work, so again I don't mind that check in the
current code. We can always get rid of it when annotations on blocks
are supported.

Cheers,
Quentin


* Re: [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2024-12-12  8:18 ` [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit Ard Biesheuvel
@ 2024-12-19 17:11 ` Marc Zyngier
  2024-12-19 19:47 ` Will Deacon
  7 siblings, 0 replies; 18+ messages in thread
From: Marc Zyngier @ 2024-12-19 17:11 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux-kernel, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret

On Thu, 12 Dec 2024 08:18:42 +0000,
Ard Biesheuvel <ardb+git@google.com> wrote:
> 
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> This series addresses a number of buglets related to how we handle the
> size of the physical address space when building LPA2 capable kernels:
> 
> - reject 52-bit physical addresses in the mapping routines when LPA2 is
>   configured but not available at runtime
> - ensure that TCR.IPS is not set to 52 bits if LPA2 is not supported
> - ensure that TCR_EL2.PS and DS match the host, regardless of whether
>   LPA2 is available at stage 2
> - don't rely on kvm_get_parange() and invalid physical addresses as
>   control flags in the pKVM page donation APIs
> 
> Finally, the configurable 48-bit physical address space limit is dropped
> entirely, as it doesn't buy us a lot now that all the PARange and {I}PS
> handling is done at runtime.

Acked-by: Marc Zyngier <maz@kernel.org>

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling
  2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2024-12-19 17:11 ` [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Marc Zyngier
@ 2024-12-19 19:47 ` Will Deacon
  7 siblings, 0 replies; 18+ messages in thread
From: Will Deacon @ 2024-12-19 19:47 UTC (permalink / raw)
  To: linux-arm-kernel, Ard Biesheuvel
  Cc: catalin.marinas, kernel-team, Will Deacon, linux-kernel,
	Ard Biesheuvel, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Quentin Perret, Kees Cook

On Thu, 12 Dec 2024 09:18:42 +0100, Ard Biesheuvel wrote:
> This series addresses a number of buglets related to how we handle the
> size of the physical address space when building LPA2 capable kernels:
> 
> - reject 52-bit physical addresses in the mapping routines when LPA2 is
>   configured but not available at runtime
> - ensure that TCR.IPS is not set to 52 bits if LPA2 is not supported
> - ensure that TCR_EL2.PS and DS match the host, regardless of whether
>   LPA2 is available at stage 2
> - don't rely on kvm_get_parange() and invalid physical addresses as
>   control flags in the pKVM page donation APIs
> 
> [...]

Applied to arm64 (for-next/mm), thanks!

[1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled
      https://git.kernel.org/arm64/c/bf74bb73cd87
[2/6] arm64/mm: Override PARange for !LPA2 and use it consistently
      https://git.kernel.org/arm64/c/62cffa496aac
[3/6] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1
      https://git.kernel.org/arm64/c/f0da16992aef
[4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates
      https://git.kernel.org/arm64/c/9d86c3c97434
[5/6] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN
      https://git.kernel.org/arm64/c/92b6919d7fb2
[6/6] arm64/mm: Drop configurable 48-bit physical address space limit
      https://git.kernel.org/arm64/c/32d053d6f5e9

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev


* Re: [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-12  8:18 ` [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit Ard Biesheuvel
@ 2024-12-20 23:39   ` Klara Modin
  2024-12-20 23:41     ` Ard Biesheuvel
  2024-12-21  0:29   ` Nathan Chancellor
  2024-12-22 12:05   ` Nick Chan
  2 siblings, 1 reply; 18+ messages in thread
From: Klara Modin @ 2024-12-20 23:39 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret


Hi,

On 2024-12-12 09:18, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Currently, the maximum supported physical address space can be
> configured as either 48 bits or 52 bits. The only remaining difference
> between these in practice is that the former omits the masking and
> shifting required to construct TTBR and PTE values, which carry bits #48
> and higher disjoint from the rest of the physical address.
> 
> The overhead of performing these additional calculations is negligible,
> so there is little reason to retain support for two different
> configurations; we can simply support whatever the hardware supports.
> 

With this patch (32d053d6f5e92efd82349e7c481cba5a43dc1a22 in
next-20241220), my Raspberry Pi 3 won't boot unless I set it to use a
52-bit virtual address space (i.e. neither 39 nor 48 bits work with a
4 KiB page size); nothing appears on the serial console. I didn't see
anything suspicious in the kernel log for the 52-bit case, but I
attached it as I don't exactly have much else.

I see that the 52-bit physical address space previously depended on
either 64 KiB pages or a 52-bit virtual address space; could that be
related?

Please let me know if there's anything else you need.

Regards,
Klara Modin

> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>   arch/arm64/Kconfig                     | 31 +-------------------
>   arch/arm64/include/asm/assembler.h     | 13 ++------
>   arch/arm64/include/asm/cpufeature.h    |  3 +-
>   arch/arm64/include/asm/kvm_pgtable.h   |  3 +-
>   arch/arm64/include/asm/pgtable-hwdef.h |  6 +---
>   arch/arm64/include/asm/pgtable-prot.h  |  4 +--
>   arch/arm64/include/asm/pgtable.h       | 11 +------
>   arch/arm64/include/asm/sysreg.h        |  6 ----
>   arch/arm64/mm/pgd.c                    |  9 +++---
>   arch/arm64/mm/proc.S                   |  2 --
>   scripts/gdb/linux/constants.py.in      |  1 -
>   tools/arch/arm64/include/asm/sysreg.h  |  6 ----
>   12 files changed, 14 insertions(+), 81 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index c1ca21adddc1..7ebd0ba32a32 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1416,38 +1416,9 @@ config ARM64_VA_BITS
>   	default 48 if ARM64_VA_BITS_48
>   	default 52 if ARM64_VA_BITS_52
>   
> -choice
> -	prompt "Physical address space size"
> -	default ARM64_PA_BITS_48
> -	help
> -	  Choose the maximum physical address range that the kernel will
> -	  support.
> -
> -config ARM64_PA_BITS_48
> -	bool "48-bit"
> -	depends on ARM64_64K_PAGES || !ARM64_VA_BITS_52
> -
> -config ARM64_PA_BITS_52
> -	bool "52-bit"
> -	depends on ARM64_64K_PAGES || ARM64_VA_BITS_52
> -	help
> -	  Enable support for a 52-bit physical address space, introduced as
> -	  part of the ARMv8.2-LPA extension.
> -
> -	  With this enabled, the kernel will also continue to work on CPUs that
> -	  do not support ARMv8.2-LPA, but with some added memory overhead (and
> -	  minor performance overhead).
> -
> -endchoice
> -
> -config ARM64_PA_BITS
> -	int
> -	default 48 if ARM64_PA_BITS_48
> -	default 52 if ARM64_PA_BITS_52
> -
>   config ARM64_LPA2
>   	def_bool y
> -	depends on ARM64_PA_BITS_52 && !ARM64_64K_PAGES
> +	depends on !ARM64_64K_PAGES
>   
>   choice
>   	prompt "Endianness"
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index ad63457a05c5..01a1e3c16283 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -342,14 +342,13 @@ alternative_cb_end
>   	mrs	\tmp0, ID_AA64MMFR0_EL1
>   	// Narrow PARange to fit the PS field in TCR_ELx
>   	ubfx	\tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
> -	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
>   #ifdef CONFIG_ARM64_LPA2
>   alternative_if_not ARM64_HAS_VA52
>   	mov	\tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
> -alternative_else_nop_endif
> -#endif
>   	cmp	\tmp0, \tmp1
>   	csel	\tmp0, \tmp1, \tmp0, hi
> +alternative_else_nop_endif
> +#endif
>   	bfi	\tcr, \tmp0, \pos, #3
>   	.endm
>   
> @@ -599,21 +598,13 @@ alternative_endif
>    * 	ttbr:	returns the TTBR value
>    */
>   	.macro	phys_to_ttbr, ttbr, phys
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   	orr	\ttbr, \phys, \phys, lsr #46
>   	and	\ttbr, \ttbr, #TTBR_BADDR_MASK_52
> -#else
> -	mov	\ttbr, \phys
> -#endif
>   	.endm
>   
>   	.macro	phys_to_pte, pte, phys
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   	orr	\pte, \phys, \phys, lsr #PTE_ADDR_HIGH_SHIFT
>   	and	\pte, \pte, #PHYS_TO_PTE_ADDR_MASK
> -#else
> -	mov	\pte, \phys
> -#endif
>   	.endm
>   
>   /*
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b64e49bd9d10..ed327358e734 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -885,9 +885,8 @@ static inline u32 id_aa64mmfr0_parange_to_phys_shift(int parange)
>   	 * However, by the "D10.1.4 Principles of the ID scheme
>   	 * for fields in ID registers", ARM DDI 0487C.a, any new
>   	 * value is guaranteed to be higher than what we know already.
> -	 * As a safe limit, we return the limit supported by the kernel.
>   	 */
> -	default: return CONFIG_ARM64_PA_BITS;
> +	default: return 52;
>   	}
>   }
>   
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index aab04097b505..525aef178cb4 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -30,8 +30,7 @@
>   
>   static inline u64 kvm_get_parange_max(void)
>   {
> -	if (kvm_lpa2_is_enabled() ||
> -	   (IS_ENABLED(CONFIG_ARM64_PA_BITS_52) && PAGE_SHIFT == 16))
> +	if (kvm_lpa2_is_enabled() || PAGE_SHIFT == 16)
>   		return ID_AA64MMFR0_EL1_PARANGE_52;
>   	else
>   		return ID_AA64MMFR0_EL1_PARANGE_48;
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index a9136cc551cc..9b34180042b2 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -176,7 +176,6 @@
>   #define PTE_SWBITS_MASK		_AT(pteval_t, (BIT(63) | GENMASK(58, 55)))
>   
>   #define PTE_ADDR_LOW		(((_AT(pteval_t, 1) << (50 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   #ifdef CONFIG_ARM64_64K_PAGES
>   #define PTE_ADDR_HIGH		(_AT(pteval_t, 0xf) << 12)
>   #define PTE_ADDR_HIGH_SHIFT	36
> @@ -186,7 +185,6 @@
>   #define PTE_ADDR_HIGH_SHIFT	42
>   #define PHYS_TO_PTE_ADDR_MASK	GENMASK_ULL(49, 8)
>   #endif
> -#endif
>   
>   /*
>    * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
> @@ -327,12 +325,10 @@
>   /*
>    * TTBR.
>    */
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   /*
> - * TTBR_ELx[1] is RES0 in this configuration.
> + * TTBR_ELx[1] is RES0 when using 52-bit physical addressing
>    */
>   #define TTBR_BADDR_MASK_52	GENMASK_ULL(47, 2)
> -#endif
>   
>   #ifdef CONFIG_ARM64_VA_BITS_52
>   /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
> diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
> index a95f1f77bb39..b73acf25341f 100644
> --- a/arch/arm64/include/asm/pgtable-prot.h
> +++ b/arch/arm64/include/asm/pgtable-prot.h
> @@ -81,7 +81,7 @@ extern unsigned long prot_ns_shared;
>   #define lpa2_is_enabled()	false
>   #define PTE_MAYBE_SHARED	PTE_SHARED
>   #define PMD_MAYBE_SHARED	PMD_SECT_S
> -#define PHYS_MASK_SHIFT		(CONFIG_ARM64_PA_BITS)
> +#define PHYS_MASK_SHIFT		(52)
>   #else
>   static inline bool __pure lpa2_is_enabled(void)
>   {
> @@ -90,7 +90,7 @@ static inline bool __pure lpa2_is_enabled(void)
>   
>   #define PTE_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PTE_SHARED)
>   #define PMD_MAYBE_SHARED	(lpa2_is_enabled() ? 0 : PMD_SECT_S)
> -#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
> +#define PHYS_MASK_SHIFT		(lpa2_is_enabled() ? 52 : 48)
>   #endif
>   
>   /*
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 6986345b537a..ec8124d66b9c 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -69,10 +69,9 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
>   	pr_err("%s:%d: bad pte %016llx.\n", __FILE__, __LINE__, pte_val(e))
>   
>   /*
> - * Macros to convert between a physical address and its placement in a
> + * Helpers to convert between a physical address and its placement in a
>    * page table entry, taking care of 52-bit addresses.
>    */
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   static inline phys_addr_t __pte_to_phys(pte_t pte)
>   {
>   	pte_val(pte) &= ~PTE_MAYBE_SHARED;
> @@ -83,10 +82,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
>   {
>   	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PHYS_TO_PTE_ADDR_MASK;
>   }
> -#else
> -#define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_LOW)
> -#define __phys_to_pte_val(phys)	(phys)
> -#endif
>   
>   #define pte_pfn(pte)		(__pte_to_phys(pte) >> PAGE_SHIFT)
>   #define pfn_pte(pfn,prot)	\
> @@ -1495,11 +1490,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>   	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
>   #define update_mmu_cache_pmd(vma, address, pmd) do { } while (0)
>   
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   #define phys_to_ttbr(addr)	(((addr) | ((addr) >> 46)) & TTBR_BADDR_MASK_52)
> -#else
> -#define phys_to_ttbr(addr)	(addr)
> -#endif
>   
>   /*
>    * On arm64 without hardware Access Flag, copying from user will fail because
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index b8303a83c0bf..f902893ec903 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -916,12 +916,6 @@
>   #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2		0x3
>   #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX		0x7
>   
> -#ifdef CONFIG_ARM64_PA_BITS_52
> -#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_52
> -#else
> -#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_48
> -#endif
> -
>   #if defined(CONFIG_ARM64_4K_PAGES)
>   #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
>   #define ID_AA64MMFR0_EL1_TGRAN_LPA2		ID_AA64MMFR0_EL1_TGRAN4_52_BIT
> diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c
> index 0c501cabc238..8722ab6d4b1c 100644
> --- a/arch/arm64/mm/pgd.c
> +++ b/arch/arm64/mm/pgd.c
> @@ -48,20 +48,21 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd)
>   
>   void __init pgtable_cache_init(void)
>   {
> +	unsigned int pgd_size = PGD_SIZE;
> +
>   	if (pgdir_is_page_size())
>   		return;
>   
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   	/*
>   	 * With 52-bit physical addresses, the architecture requires the
>   	 * top-level table to be aligned to at least 64 bytes.
>   	 */
> -	BUILD_BUG_ON(PGD_SIZE < 64);
> -#endif
> +	if (PHYS_MASK_SHIFT >= 52)
> +		pgd_size = max(pgd_size, 64);
>   
>   	/*
>   	 * Naturally aligned pgds required by the architecture.
>   	 */
> -	pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_SIZE,
> +	pgd_cache = kmem_cache_create("pgd_cache", pgd_size, pgd_size,
>   				      SLAB_PANIC, NULL);
>   }
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index b8edc5765441..51ed0e9d0a0d 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -197,10 +197,8 @@ SYM_FUNC_ALIAS(__pi_idmap_cpu_replace_ttbr1, idmap_cpu_replace_ttbr1)
>   
>   	.macro	pte_to_phys, phys, pte
>   	and	\phys, \pte, #PTE_ADDR_LOW
> -#ifdef CONFIG_ARM64_PA_BITS_52
>   	and	\pte, \pte, #PTE_ADDR_HIGH
>   	orr	\phys, \phys, \pte, lsl #PTE_ADDR_HIGH_SHIFT
> -#endif
>   	.endm
>   
>   	.macro	kpti_mk_tbl_ng, type, num_entries
> diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
> index fd6bd69c5096..05034c0b8fd7 100644
> --- a/scripts/gdb/linux/constants.py.in
> +++ b/scripts/gdb/linux/constants.py.in
> @@ -141,7 +141,6 @@ LX_CONFIG(CONFIG_ARM64_4K_PAGES)
>   LX_CONFIG(CONFIG_ARM64_16K_PAGES)
>   LX_CONFIG(CONFIG_ARM64_64K_PAGES)
>   if IS_BUILTIN(CONFIG_ARM64):
> -    LX_VALUE(CONFIG_ARM64_PA_BITS)
>       LX_VALUE(CONFIG_ARM64_VA_BITS)
>       LX_VALUE(CONFIG_PAGE_SHIFT)
>       LX_VALUE(CONFIG_ARCH_FORCE_MAX_ORDER)
> diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
> index cd8420e8c3ad..daeecb1a5366 100644
> --- a/tools/arch/arm64/include/asm/sysreg.h
> +++ b/tools/arch/arm64/include/asm/sysreg.h
> @@ -574,12 +574,6 @@
>   #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN		0x2
>   #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX		0x7
>   
> -#ifdef CONFIG_ARM64_PA_BITS_52
> -#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_52
> -#else
> -#define ID_AA64MMFR0_EL1_PARANGE_MAX	ID_AA64MMFR0_EL1_PARANGE_48
> -#endif
> -
>   #if defined(CONFIG_ARM64_4K_PAGES)
>   #define ID_AA64MMFR0_EL1_TGRAN_SHIFT		ID_AA64MMFR0_EL1_TGRAN4_SHIFT
>   #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN	ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN
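
As a side note, the folding done by __pte_to_phys()/__phys_to_pte_val()
and phys_to_ttbr() above is easy to model in plain C. The following is a
minimal userspace sketch; it uses the 64 KiB page layout (PA bits 51:48
stored in PTE bits 15:12 and in TTBR BADDR bits 5:2) as illustrative
stand-in constants, not the kernel's actual header definitions:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* stand-ins for the 64 KiB page layout, for illustration only */
#define PTE_ADDR_LOW		(((1ULL << 32) - 1) << 16)	/* PTE[47:16] */
#define PTE_ADDR_HIGH		(0xfULL << 12)			/* PTE[15:12] */
#define PTE_ADDR_HIGH_SHIFT	36
#define TTBR_BADDR_MASK_52	(((1ULL << 46) - 1) << 2)	/* TTBR[47:2] */

static uint64_t phys_to_pte(uint64_t phys)
{
	/* fold PA[51:48] down into PTE[15:12]; PA[47:16] stay in place */
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) &
	       (PTE_ADDR_LOW | PTE_ADDR_HIGH);
}

static uint64_t pte_to_phys(uint64_t pte)
{
	/* inverse: move PTE[15:12] back up to PA[51:48] */
	return (pte & PTE_ADDR_LOW) |
	       ((pte & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}

static uint64_t phys_to_ttbr(uint64_t addr)
{
	/*
	 * fold PA[51:48] into BADDR[5:2]; the 64-byte alignment of the
	 * top-level table guarantees those bits are otherwise zero
	 */
	return (addr | (addr >> 46)) & TTBR_BADDR_MASK_52;
}

int main(void)
{
	uint64_t pa = 0x000f123456780000ULL;	/* page-aligned 52-bit PA */

	assert(pte_to_phys(phys_to_pte(pa)) == pa);
	printf("PA   %#018llx\nPTE  %#018llx\nTTBR %#018llx\n",
	       (unsigned long long)pa,
	       (unsigned long long)phys_to_pte(pa),
	       (unsigned long long)phys_to_ttbr(pa));
	return 0;
}

Building and running this only demonstrates that the round trip is
lossless for a 52-bit physical address; the packing is a handful of
bitwise operations either way, which is the "negligible overhead" the
patch's commit message refers to.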

[-- Attachment #2: rpi3-no-boot-bisect --]
[-- Type: text/plain, Size: 2469 bytes --]

# bad: [8155b4ef3466f0e289e8fcc9e6e62f3f4dceeac2] Add linux-next specific files for 20241220
# good: [8faabc041a001140564f718dabe37753e88b37fa] Merge tag 'net-6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
git bisect start 'next/master' 'next/stable'
# bad: [d711d1b348a1574a2c24872512067d190b63fd68] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next.git
git bisect bad d711d1b348a1574a2c24872512067d190b63fd68
# bad: [3aa602263e025bb42ca8766a16bceff287a8f0ee] Merge branch 'xtensa-for-next' of git://github.com/jcmvbkbc/linux-xtensa.git
git bisect bad 3aa602263e025bb42ca8766a16bceff287a8f0ee
# good: [c4262cd734f8695b217332ed2ca7237a7e753b62] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git
git bisect good c4262cd734f8695b217332ed2ca7237a7e753b62
# bad: [a03c7cb185a0648d4f67fb63acf4b98b6fe8d0f7] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mediatek/linux.git
git bisect bad a03c7cb185a0648d4f67fb63acf4b98b6fe8d0f7
# bad: [2086880948dccced5110e472a99915913cec1b8b] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/amlogic/linux.git
git bisect bad 2086880948dccced5110e472a99915913cec1b8b
# good: [e7bb49e3f6435ff3611b83f78a61d387f24d80f8] perf x86: Define arch_fetch_insn in NO_AUXTRACE builds
git bisect good e7bb49e3f6435ff3611b83f78a61d387f24d80f8
# bad: [7bdd902c162d5576785095a0f8885df84bb472f5] Merge branch 'for-next/core' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
git bisect bad 7bdd902c162d5576785095a0f8885df84bb472f5
# bad: [d6ab634f1b323db6639b8b776f5d95ae747b342a] Merge branches 'for-next/cpufeature', 'for-next/docs', 'for-next/misc' and 'for-next/mm' into for-next/core
git bisect bad d6ab634f1b323db6639b8b776f5d95ae747b342a
# bad: [32d053d6f5e92efd82349e7c481cba5a43dc1a22] arm64/mm: Drop configurable 48-bit physical address space limit
git bisect bad 32d053d6f5e92efd82349e7c481cba5a43dc1a22
# good: [f0da16992aef7e246b2f3bba1492e3a52c38ca0e] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1
git bisect good f0da16992aef7e246b2f3bba1492e3a52c38ca0e
# good: [92b6919d7fb29691a8bc5aca49044056683542ca] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN
git bisect good 92b6919d7fb29691a8bc5aca49044056683542ca
# first bad commit: [32d053d6f5e92efd82349e7c481cba5a43dc1a22] arm64/mm: Drop configurable 48-bit physical address space limit

[-- Attachment #3: config.gz --]
[-- Type: application/gzip, Size: 34839 bytes --]

[-- Attachment #4: dmesg-52-bit.log.gz --]
[-- Type: application/gzip, Size: 7734 bytes --]


* Re: [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-20 23:39   ` Klara Modin
@ 2024-12-20 23:41     ` Ard Biesheuvel
  0 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-20 23:41 UTC (permalink / raw)
  To: Klara Modin
  Cc: Ard Biesheuvel, linux-arm-kernel, linux-kernel, Catalin Marinas,
	Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook, Quentin Perret

On Sat, 21 Dec 2024 at 00:39, Klara Modin <klarasmodin@gmail.com> wrote:
>
> Hi,
>
> On 2024-12-12 09:18, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > Currently, the maximum supported physical address space can be
> > configured as either 48 bits or 52 bits. The only remaining difference
> > between these in practice is that the former omits the masking and
> > shifting required to construct TTBR and PTE values, which carry bits #48
> > and higher disjoint from the rest of the physical address.
> >
> > The overhead of performing these additional calculations is negligible,
> > and so there is little reason to retain support for two different
> > configurations, and we can simply support whatever the hardware
> > supports.
> >
>
> With this patch (32d053d6f5e92efd82349e7c481cba5a43dc1a22 in
> next-20241220), my Raspberry Pi 3 won't boot unless I set it to use a
> 52-bit virtual address space (i.e. neither 39 nor 48 works with a 4 KiB
> page size); nothing appears on the serial console. I didn't see anything
> suspicious in the kernel log for the 52-bit case, but I've attached it
> since I don't have much else to go on.
>
> I see that 52-bit physical address space previously depended on either
> 64 KiB pages or a 52-bit virtual address space; could that be related?
>
> Please let me know if there's anything else you need.
>

Thanks for the report. This patch has already been dropped from the
arm64 tree, so it should be gone from linux-next once it gets
regenerated.
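
If you want to double-check after the next regeneration, something like
the below against a local linux-next remote should do (a sketch,
assuming the remote is named 'next'):

git fetch next
git merge-base --is-ancestor 32d053d6f5e9 next/master 2>/dev/null \
	&& echo commit still present || echo commit dropped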


* Re: [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-12  8:18 ` [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit Ard Biesheuvel
  2024-12-20 23:39   ` Klara Modin
@ 2024-12-21  0:29   ` Nathan Chancellor
  2024-12-21 12:10     ` Will Deacon
  2024-12-22 12:05   ` Nick Chan
  2 siblings, 1 reply; 18+ messages in thread
From: Nathan Chancellor @ 2024-12-21  0:29 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: linux-arm-kernel, linux-kernel, Ard Biesheuvel, Catalin Marinas,
	Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook, Quentin Perret

Hi Ard,

On Thu, Dec 12, 2024 at 09:18:48AM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Currently, the maximum supported physical address space can be
> configured as either 48 bits or 52 bits. The only remaining difference
> between these in practice is that the former omits the masking and
> shifting required to construct TTBR and PTE values, which carry bits #48
> and higher disjoint from the rest of the physical address.
> 
> The overhead of performing these additional calculations is negligible,
> and so there is little reason to retain support for two different
> configurations, and we can simply support whatever the hardware
> supports.

I am seeing a boot failure after this change landed as commit
32d053d6f5e9 ("arm64/mm: Drop configurable 48-bit physical address
space limit") in next-20241220, with several distribution
configurations that all set ARM64_VA_BITS_48. I can reproduce it on
bare metal and in QEMU. Simply:

$ echo 'CONFIG_ARM64_VA_BITS_52=n
CONFIG_ARM64_VA_BITS_48=y' >kernel/configs/repro.config

$ make -skj"$(nproc)" ARCH=arm64 CROSS_COMPILE=aarch64-linux- mrproper defconfig repro.config Image.gz

$ git diff --no-index .config.old .config
diff --git a/.config.old b/.config
index c6dfbacfae06..371145bbe022 100644
--- a/.config.old
+++ b/.config
@@ -290,7 +290,7 @@ CONFIG_MMU=y
 CONFIG_ARM64_CONT_PTE_SHIFT=4
 CONFIG_ARM64_CONT_PMD_SHIFT=4
 CONFIG_ARCH_MMAP_RND_BITS_MIN=18
-CONFIG_ARCH_MMAP_RND_BITS_MAX=18
+CONFIG_ARCH_MMAP_RND_BITS_MAX=33
 CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=11
 CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
 CONFIG_STACKTRACE_SUPPORT=y
@@ -304,7 +304,7 @@ CONFIG_GENERIC_CALIBRATE_DELAY=y
 CONFIG_SMP=y
 CONFIG_KERNEL_MODE_NEON=y
 CONFIG_FIX_EARLYCON_MEM=y
-CONFIG_PGTABLE_LEVELS=5
+CONFIG_PGTABLE_LEVELS=4
 CONFIG_ARCH_SUPPORTS_UPROBES=y
 CONFIG_ARCH_PROC_KCORE_TEXT=y
 CONFIG_BUILTIN_RETURN_ADDRESS_STRIPS_PAC=y
@@ -426,7 +426,9 @@ CONFIG_ARM64_4K_PAGES=y
 # CONFIG_ARM64_16K_PAGES is not set
 # CONFIG_ARM64_64K_PAGES is not set
 # CONFIG_ARM64_VA_BITS_39 is not set
-CONFIG_ARM64_VA_BITS=52
+CONFIG_ARM64_VA_BITS_48=y
+# CONFIG_ARM64_VA_BITS_52 is not set
+CONFIG_ARM64_VA_BITS=48
 CONFIG_ARM64_LPA2=y
 # CONFIG_CPU_BIG_ENDIAN is not set
 CONFIG_CPU_LITTLE_ENDIAN=y
@@ -11259,6 +11261,3 @@ CONFIG_MEMTEST=y
 #
 # end of Rust hacking
 # end of Kernel hacking
-
-CONFIG_ARM64_VA_BITS_52=n
-CONFIG_ARM64_VA_BITS_48=y

$ qemu-system-aarch64 --version | head -1
QEMU emulator version 9.2.0 (qemu-9.2.0-1.fc42)

# With TCG, there is a crash
$ qemu-system-aarch64 \
	-display none \
	-nodefaults \
	-cpu max,pauth-impdef=true \
	-machine virt,gic-version=max,virtualization=true \
	-append 'console=ttyAMA0 earlycon' \
	-kernel arch/arm64/boot/Image.gz \
	-initrd rootfs.cpio \
	-m 512m -serial mon:stdio
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x000f0510]
[    0.000000] Linux version 6.13.0-rc2-00006-g32d053d6f5e9 (nathan@c3-large-arm64) (aarch64-linux-gcc (GCC) 14.2.0, GNU ld (GNU Binutils) 2.42) #1 SMP PREEMPT Fri Dec 20 23:42:18 UTC 2024
...
[    0.000000] Unable to handle kernel paging request at virtual address ffff80008001ffe8
[    0.000000] Mem abort info:
[    0.000000]   ESR = 0x0000000096000004
[    0.000000]   EC = 0x25: DABT (current EL), IL = 32 bits
[    0.000000]   SET = 0, FnV = 0
[    0.000000]   EA = 0, S1PTW = 0
[    0.000000]   FSC = 0x04: level 0 translation fault
[    0.000000] Data abort info:
[    0.000000]   ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
[    0.000000]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[    0.000000]   GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[    0.000000] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000041f10000
[    0.000000] [ffff80008001ffe8] pgd=0000000000000000, p4d=0000000000000000, pud=1000000043017403, pmd=1000000043018403, pte=006800000800f713
[    0.000000] Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
[    0.000000] Modules linked in:
[    0.000000] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.13.0-rc2-00006-g32d053d6f5e9 #1
[    0.000000] Hardware name: linux,dummy-virt (DT)
[    0.000000] pstate: 800000c9 (Nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[    0.000000] pc : readl_relaxed+0x0/0x8
[    0.000000] lr : gic_validate_dist_version+0x18/0x3c
[    0.000000] sp : ffffb0df9bf63c90
[    0.000000] x29: ffffb0df9bf63c90 x28: 0000000000000000 x27: 0000000000000000
[    0.000000] x26: ffffb0df9bf63d78 x25: 0000000000000008 x24: dead000000000122
[    0.000000] x23: ffff800080010000 x22: ffffb0df9bf63d88 x21: ffffb0df9bf63d78
[    0.000000] x20: 0000000000000000 x19: ffff39131ff08a68 x18: 0000000000000001
[    0.000000] x17: 0000000000000068 x16: 0000000000000100 x15: ffffb0df9b722ee0
[    0.000000] x14: 0000000000000000 x13: ffff800080021000 x12: ffff80008001ffff
[    0.000000] x11: 0000000000000000 x10: 0000000008010000 x9 : 0000000008010000
[    0.000000] x8 : ffff80008001ffff x7 : ffff391303017008 x6 : ffff800080020000
[    0.000000] x5 : 000000000000003f x4 : 000000000000003f x3 : 0000000000000000
[    0.000000] x2 : 0000000000000000 x1 : 000000000000ffe8 x0 : ffff80008001ffe8
[    0.000000] Call trace:
[    0.000000]  readl_relaxed+0x0/0x8 (P)
[    0.000000]  gic_validate_dist_version+0x18/0x3c (L)
[    0.000000]  gic_of_init+0x98/0x278
[    0.000000]  of_irq_init+0x1d4/0x34c
[    0.000000]  irqchip_init+0x18/0x40
[    0.000000]  init_IRQ+0x9c/0xb4
[    0.000000]  start_kernel+0x528/0x6d4
[    0.000000]  __primary_switched+0x88/0x90
[    0.000000] Code: a8c17bfd d50323bf d65f03c0 d503201f (b9400000)
[    0.000000] ---[ end trace 0000000000000000 ]---
[    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
[    0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---

# Using KVM, there is just a hang
$ qemu-system-aarch64 \
	-display none \
	-nodefaults \
	-machine virt,gic-version=max \
	-append 'console=ttyAMA0 earlycon' \
	-kernel arch/arm64/boot/Image.gz \
	-initrd rootfs.cpio \
	-cpu host \
	-enable-kvm \
	-m 512m \
	-smp 8 \
	-serial mon:stdio

Is this a configuration issue? Reverting this change makes everything
work again.

Cheers,
Nathan


* Re: [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-21  0:29   ` Nathan Chancellor
@ 2024-12-21 12:10     ` Will Deacon
  0 siblings, 0 replies; 18+ messages in thread
From: Will Deacon @ 2024-12-21 12:10 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: Ard Biesheuvel, linux-arm-kernel, linux-kernel, Ard Biesheuvel,
	Catalin Marinas, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook, Quentin Perret

On Fri, Dec 20, 2024 at 05:29:06PM -0700, Nathan Chancellor wrote:
> On Thu, Dec 12, 2024 at 09:18:48AM +0100, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> > 
> > Currently, the maximum supported physical address space can be
> > configured as either 48 bits or 52 bits. The only remaining difference
> > between these in practice is that the former omits the masking and
> > shifting required to construct TTBR and PTE values, which carry bits #48
> > and higher disjoint from the rest of the physical address.
> > 
> > The overhead of performing these additional calculations is negligible,
> > and so there is little reason to retain support for two different
> > configurations, and we can simply support whatever the hardware
> > supports.
> 
> I am seeing a boot failure after this change as commit 32d053d6f5e9
> ("arm64/mm: Drop configurable 48-bit physical address space limit") in
> next-20241220 with several distribution configurations that all set
> ARM64_VA_BITS_48. I can reproduce it on bare metal and in QEMU. Simply:

We dropped the patch yesterday, so hopefully things are better today.

Sorry for the bother and thanks for the report,

Will


* Re: [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-12  8:18 ` [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit Ard Biesheuvel
  2024-12-20 23:39   ` Klara Modin
  2024-12-21  0:29   ` Nathan Chancellor
@ 2024-12-22 12:05   ` Nick Chan
  2024-12-22 15:35     ` Ard Biesheuvel
  2 siblings, 1 reply; 18+ messages in thread
From: Nick Chan @ 2024-12-22 12:05 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: linux-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
	Marc Zyngier, Mark Rutland, Ryan Roberts, Anshuman Khandual,
	Kees Cook, Quentin Perret, asahi

Hi Ard,

On 12/12/2024 16:18, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
> 
> Currently, the maximum supported physical address space can be
> configured as either 48 bits or 52 bits. The only remaining difference
> between these in practice is that the former omits the masking and
> shifting required to construct TTBR and PTE values, which carry bits #48
> and higher disjoint from the rest of the physical address.
> 
> The overhead of performing these additional calculations is negligible,
> and so there is little reason to retain support for two different
> configurations, and we can simply support whatever the hardware
> supports.

I am seeing a boot failure on an Apple iPad 7 (which uses
CONFIG_ARM64_16K_PAGES=y) after this change landed in linux-next as
commit 32d053d6f5e9: nothing appears on the serial console, even with
earlycon enabled, unless I set CONFIG_ARM64_VA_BITS_52=y. Reverting
this patch makes the kernel work again.

Nick Chan



* Re: [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit
  2024-12-22 12:05   ` Nick Chan
@ 2024-12-22 15:35     ` Ard Biesheuvel
  0 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2024-12-22 15:35 UTC (permalink / raw)
  To: Nick Chan
  Cc: Ard Biesheuvel, linux-arm-kernel, linux-kernel, Catalin Marinas,
	Will Deacon, Marc Zyngier, Mark Rutland, Ryan Roberts,
	Anshuman Khandual, Kees Cook, Quentin Perret, asahi

On Sun, 22 Dec 2024 at 13:05, Nick Chan <towinchenmi@gmail.com> wrote:
>
> Hi Ard,
>
> On 12/12/2024 16:18, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > Currently, the maximum supported physical address space can be
> > configured as either 48 bits or 52 bits. The only remaining difference
> > between these in practice is that the former omits the masking and
> > shifting required to construct TTBR and PTE values, which carry bits #48
> > and higher disjoint from the rest of the physical address.
> >
> > The overhead of performing these additional calculations is negligible,
> > and so there is little reason to retain support for two different
> > configurations, and we can simply support whatever the hardware
> > supports.
>
> I am seeing a boot failure on Apple iPad 7 which uses
> CONFIG_ARM64_16K=y after this change in linux-next as commit
> 32d053d6f5e9, with nothing appearing on serial console with
> earlycon enabled unless I set CONFIG_ARM64_VA_BITS_52=y. Reverting
> this patch makes the kernel work again.
>

Thanks for the report.

This is a known issue; it was fixed in the arm64 tree a couple of
days ago.

Once linux-next is regenerated, things should start working again.


end of thread

Thread overview: 18+ messages
2024-12-12  8:18 [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Ard Biesheuvel
2024-12-12  8:18 ` [PATCH v3 1/6] arm64/mm: Reduce PA space to 48 bits when LPA2 is not enabled Ard Biesheuvel
2024-12-12  8:18 ` [PATCH v3 2/6] arm64/mm: Override PARange for !LPA2 and use it consistently Ard Biesheuvel
2024-12-12  8:18 ` [PATCH v3 3/6] arm64/kvm: Configure HYP TCR.PS/DS based on host stage1 Ard Biesheuvel
2024-12-12  8:18 ` [PATCH v3 4/6] arm64/kvm: Avoid invalid physical addresses to signal owner updates Ard Biesheuvel
2024-12-12 11:33   ` Quentin Perret
2024-12-12 11:44     ` Ard Biesheuvel
2024-12-12 12:27       ` Quentin Perret
2024-12-12  8:18 ` [PATCH v3 5/6] arm64: Kconfig: force ARM64_PAN=y when enabling TTBR0 sw PAN Ard Biesheuvel
2024-12-12  8:18 ` [PATCH v3 6/6] arm64/mm: Drop configurable 48-bit physical address space limit Ard Biesheuvel
2024-12-20 23:39   ` Klara Modin
2024-12-20 23:41     ` Ard Biesheuvel
2024-12-21  0:29   ` Nathan Chancellor
2024-12-21 12:10     ` Will Deacon
2024-12-22 12:05   ` Nick Chan
2024-12-22 15:35     ` Ard Biesheuvel
2024-12-19 17:11 ` [PATCH v3 0/6] arm64: Clean up and simplify PA space size handling Marc Zyngier
2024-12-19 19:47 ` Will Deacon
