* [PATCH 0/9] arm64: Fully disable configured-out features
@ 2026-02-19 19:55 Marc Zyngier
2026-02-19 19:55 ` [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers Marc Zyngier
` (8 more replies)
0 siblings, 9 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Fuad recently reported [1] that when support for FEAT_S1POE is
disabled, but the HW supports it, the sanitised idreg still shows
the value the HW exposes, even if this is hidden from userspace. This
ended up advertising S1POE to guests, without the state being
correctly switched. Huhum.
We have a point-fix for this, but it would be good to address the
whole class of similar issues affecting PAuth, SVE, SME, GCS, MTE
and BTI, on top of S1POE. Not that we currently leak state S1POE-style
for those, but we're just pretty lucky. Hence this series.
This series tries to align the behaviour of a config option being not
selected with that of the corresponding runtime option (arm64.noFEAT),
with the exception of BTI (but I'm not married to that particular
aspect).
There is a lot more that could be done (Mark has a lot of ideas on
that front), but I wanted to get this out and get the discussion
going.
Another thing is that the proliferation of config options is getting
in the way of maintainability, and at some point, we'll have to pick
our battles. I appreciate that some embedded uses rely on "tinyfying"
the kernel, but maybe we should think of introducing something less
granular, and have KVM select it (the argument being that if you
want the smallest possible kernel, you don't want anything virt).
Anyway, 'nuf ranting. Patches on top of 6.19.
[1] https://lore.kernel.org/all/20260213143815.1732675-2-tabba@google.com
Marc Zyngier (9):
arm64: Add logic to fully remove features from sanitised id registers
arm64: Convert CONFIG_ARM64_PTR_AUTH to FTR_CONFIG()
arm64: Convert CONFIG_ARM64_SVE to FTR_CONFIG()
arm64: Convert CONFIG_ARM64_SME to FTR_CONFIG()
arm64: Convert CONFIG_ARM64_GCS to FTR_CONFIG()
arm64: Convert CONFIG_ARM64_MTE to FTR_CONFIG()
arm64: Convert CONFIG_ARM64_POE to FTR_CONFIG()
arm64: Convert CONFIG_ARM64_BTI to FTR_CONFIG()
arm64: Remove FTR_VISIBLE_IF_IS_ENABLED()
arch/arm64/include/asm/cpufeature.h | 13 ++--
arch/arm64/kernel/cpufeature.c | 117 +++++++++++++++-------------
2 files changed, 72 insertions(+), 58 deletions(-)
--
2.47.3
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-20 8:36 ` Fuad Tabba
2026-02-19 19:55 ` [PATCH 2/9] arm64: Convert CONFIG_ARM64_PTR_AUTH to FTR_CONFIG() Marc Zyngier
` (7 subsequent siblings)
8 siblings, 1 reply; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
We currently make support for some features such as Pointer Auth,
SVE or S1POE a compile time decision.
However, while we hide that feature from userspace when such support
is disabled, we still leave the value provided by the HW visible to
the rest of the kernel, including KVM.
This has the potential to result in ugly state leakage, as half of
the kernel knows about the feature, and the other half doesn't.
Short of completely banning such compilation options and restoring
universal knowledge, introduce the possibility of fully removing such
knowledge from the sanitised id registers.
This has more or less the same effect as the idreg override that
a user can pass on the command-line, only defined at build-time.
For that purpose, we provide a new macro (FTR_CONFIG()) that defines
the behaviour of a feature, both when enabled and disabled.
At this stage, nothing is making use of this anti-feature.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 15 ++++++++++-----
arch/arm64/kernel/cpufeature.c | 21 ++++++++++++++++-----
2 files changed, 26 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 4de51f8d92cba..2731ea13c2c86 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -53,15 +53,20 @@ enum ftr_type {
#define FTR_SIGNED true /* Value should be treated as signed */
#define FTR_UNSIGNED false /* Value should be treated as unsigned */
-#define FTR_VISIBLE true /* Feature visible to the user space */
-#define FTR_HIDDEN false /* Feature is hidden from the user */
+enum ftr_visibility {
+ FTR_HIDDEN, /* Feature hidden from the user */
+ FTR_ALL_HIDDEN, /* Feature hidden from kernel, user and KVM */
+ FTR_VISIBLE, /* Feature visible to all observers */
+};
+
+#define FTR_CONFIG(c, e, d) \
+ (IS_ENABLED(c) ? FTR_ ## e : FTR_ ## d)
-#define FTR_VISIBLE_IF_IS_ENABLED(config) \
- (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
+#define FTR_VISIBLE_IF_IS_ENABLED(c) FTR_CONFIG(c, VISIBLE, HIDDEN)
struct arm64_ftr_bits {
bool sign; /* Value is signed ? */
- bool visible;
+ enum ftr_visibility visibility;
bool strict; /* CPU Sanity check: strict matching required ? */
enum ftr_type type;
u8 shift;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c840a93b9ef95..b34a39967d111 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -192,7 +192,7 @@ void dump_cpu_features(void)
#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
{ \
.sign = SIGNED, \
- .visible = VISIBLE, \
+ .visibility = VISIBLE, \
.strict = STRICT, \
.type = TYPE, \
.shift = SHIFT, \
@@ -1057,17 +1057,28 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
ftrp->shift);
}
- val = arm64_ftr_set_value(ftrp, val, ftr_new);
-
valid_mask |= ftr_mask;
if (!ftrp->strict)
strict_mask &= ~ftr_mask;
- if (ftrp->visible)
+
+ switch (ftrp->visibility) {
+ case FTR_VISIBLE:
+ val = arm64_ftr_set_value(ftrp, val, ftr_new);
user_mask |= ftr_mask;
- else
+ break;
+ case FTR_ALL_HIDDEN:
+ val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
+ reg->user_val = arm64_ftr_set_value(ftrp,
+ reg->user_val,
+ ftrp->safe_val);
+ break;
+ case FTR_HIDDEN:
+ val = arm64_ftr_set_value(ftrp, val, ftr_new);
reg->user_val = arm64_ftr_set_value(ftrp,
reg->user_val,
ftrp->safe_val);
+ break;
+ }
}
val &= valid_mask;
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 2/9] arm64: Convert CONFIG_ARM64_PTR_AUTH to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
2026-02-19 19:55 ` [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 3/9] arm64: Convert CONFIG_ARM64_SVE " Marc Zyngier
` (6 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
While CONFIG_ARM64_PTR_AUTH=n prevents userspace from using PAC,
the sanitised ID registers still advertise the feature.
Make it clear that nothing in the kernel should rely on this by
marking the feature as hidden for all when CONFIG_ARM64_PTR_AUTH=n.
This is functionally equivalent to using arm64.nopauth on the kernel
command-line.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b34a39967d111..7ad124faae08e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -247,16 +247,16 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_SPECRES_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_SB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_FRINTTS_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_PTR_AUTH, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_GPI_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_PTR_AUTH, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_GPA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_LRCPC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_FCMA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_JSCVT_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_PTR_AUTH, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_EL1_API_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_PTR_AUTH, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64ISAR1_EL1_APA_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DPB_SHIFT, 4, 0),
ARM64_FTR_END,
@@ -269,9 +269,9 @@ static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_CLRBHB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_BC_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_MOPS_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_PTR_AUTH, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64ISAR2_EL1_APA3_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_PTR_AUTH),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_PTR_AUTH, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_GPA3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_RPRES_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR2_EL1_WFxT_SHIFT, 4, 0),
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 3/9] arm64: Convert CONFIG_ARM64_SVE to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
2026-02-19 19:55 ` [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers Marc Zyngier
2026-02-19 19:55 ` [PATCH 2/9] arm64: Convert CONFIG_ARM64_PTR_AUTH to FTR_CONFIG() Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 4/9] arm64: Convert CONFIG_ARM64_SME " Marc Zyngier
` (5 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
While CONFIG_ARM64_SVE=n prevents userspace from using SVE,
the sanitised ID registers still advertise the feature.
Make it clear that nothing in the kernel should rely on this by
marking the feature as hidden for all when CONFIG_ARM64_SVE=n.
This is functionally equivalent to using arm64.nosve on the kernel
command-line.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7ad124faae08e..9f631658de4b3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -292,7 +292,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_AMU_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_MPAM_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SEL2_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_SVE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_RAS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL1_GIC_SHIFT, 4, 0),
@@ -330,29 +330,29 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr2[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_F64MM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_F32MM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_F16MM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_I8MM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_SM4_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_SHA3_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_B16B16_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_BF16_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_BitPerm_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_EltPerm_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_AES_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SVE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ZFR0_EL1_SVEver_SHIFT, 4, 0),
ARM64_FTR_END,
};
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 4/9] arm64: Convert CONFIG_ARM64_SME to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
` (2 preceding siblings ...)
2026-02-19 19:55 ` [PATCH 3/9] arm64: Convert CONFIG_ARM64_SVE " Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 5/9] arm64: Convert CONFIG_ARM64_GCS " Marc Zyngier
` (4 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
While CONFIG_ARM64_SME=n prevents userspace from using SME,
the sanitised ID registers still advertise the feature.
Make it clear that nothing in the kernel should rely on this by
marking the feature as hidden for all when CONFIG_ARM64_SME=n.
This is functionally equivalent to using arm64.nosme on the kernel
command-line.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 48 +++++++++++++++++-----------------
1 file changed, 24 insertions(+), 24 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9f631658de4b3..3d7083280cdde 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -310,7 +310,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_GCS),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_GCS_SHIFT, 4, 0),
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MTE_frac_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_RAS_frac_SHIFT, 4, 0),
@@ -358,51 +358,51 @@ static const struct arm64_ftr_bits ftr_id_aa64zfr0[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64smfr0[] = {
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_FA64_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_LUTv2_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SMEver_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_I16I64_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_F64F64_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_I16I32_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_B16B16_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_F16F16_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_F8F16_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_F8F32_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_I8I32_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_F16F32_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_B16F32_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_BI32I32_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_F32F32_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SF8FMA_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SF8DP4_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SF8DP2_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SBitPerm_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_AES_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SFEXPA_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_STMOP_SHIFT, 1, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SME),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_EXACT, ID_AA64SMFR0_EL1_SMOP4_SHIFT, 1, 0),
ARM64_FTR_END,
};
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 5/9] arm64: Convert CONFIG_ARM64_GCS to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
` (3 preceding siblings ...)
2026-02-19 19:55 ` [PATCH 4/9] arm64: Convert CONFIG_ARM64_SME " Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 6/9] arm64: Convert CONFIG_ARM64_MTE " Marc Zyngier
` (3 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
While CONFIG_ARM64_GCS=n prevents userspace from using GCS,
the sanitised ID registers still advertise the feature.
Make it clear that nothing in the kernel should rely on this by
marking the feature as hidden for all when CONFIG_ARM64_GCS=n.
This is functionally equivalent to using arm64.nogcs on the kernel
command-line.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3d7083280cdde..ca4aae48ace66 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -307,7 +307,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_DF2_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_GCS),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_GCS, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_GCS_SHIFT, 4, 0),
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MTE_frac_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_SME, VISIBLE, ALL_HIDDEN),
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 6/9] arm64: Convert CONFIG_ARM64_MTE to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
` (4 preceding siblings ...)
2026-02-19 19:55 ` [PATCH 5/9] arm64: Convert CONFIG_ARM64_GCS " Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 7/9] arm64: Convert CONFIG_ARM64_POE " Marc Zyngier
` (2 subsequent siblings)
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
While CONFIG_ARM64_MTE=n prevents userspace from using MTE,
the sanitised ID registers still advertise the feature.
Make it clear that nothing in the kernel should rely on this by
marking the feature as hidden for all when CONFIG_ARM64_MTE=n.
This is functionally equivalent to using arm64.nomte on the kernel
command-line.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ca4aae48ace66..2b9d03c9564e6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -314,7 +314,7 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SME_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MPAM_frac_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_RAS_frac_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_MTE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_MTE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MTE_SHIFT, 4, ID_AA64PFR1_EL1_MTE_NI),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SSBS_SHIFT, 4, ID_AA64PFR1_EL1_SSBS_NI),
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 7/9] arm64: Convert CONFIG_ARM64_POE to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
` (5 preceding siblings ...)
2026-02-19 19:55 ` [PATCH 6/9] arm64: Convert CONFIG_ARM64_MTE " Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 8/9] arm64: Convert CONFIG_ARM64_BTI " Marc Zyngier
2026-02-19 19:55 ` [PATCH 9/9] arm64: Remove FTR_VISIBLE_IF_IS_ENABLED() Marc Zyngier
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
While CONFIG_ARM64_POE=n prevents userspace from using S1POE,
the sanitised ID registers still advertise the feature.
Make it clear that nothing in the kernel should rely on this by
marking the feature as hidden for all when CONFIG_ARM64_POE=n.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 2b9d03c9564e6..8eb9dc35cdba4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -503,7 +503,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64mmfr3[] = {
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_POE),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_POE, VISIBLE, ALL_HIDDEN),
FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR3_EL1_S1POE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR3_EL1_S1PIE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR3_EL1_SCTLRX_SHIFT, 4, 0),
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 8/9] arm64: Convert CONFIG_ARM64_BTI to FTR_CONFIG()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
` (6 preceding siblings ...)
2026-02-19 19:55 ` [PATCH 7/9] arm64: Convert CONFIG_ARM64_POE " Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
2026-02-19 19:55 ` [PATCH 9/9] arm64: Remove FTR_VISIBLE_IF_IS_ENABLED() Marc Zyngier
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Even if the kernel doesn't use BTI and doesn't expose it to userspace,
it is still OK to expose the feature to the rest of the kernel,
including KVM, as there is no additional state attached to this feature.
The only purpose of this change is to kill the last user of the
FTR_VISIBLE_IF_IS_ENABLED() macro.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 8eb9dc35cdba4..d58931e63a0b6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -317,8 +317,8 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr1[] = {
ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_MTE, VISIBLE, ALL_HIDDEN),
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_MTE_SHIFT, 4, ID_AA64PFR1_EL1_MTE_NI),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_SSBS_SHIFT, 4, ID_AA64PFR1_EL1_SSBS_NI),
- ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_BTI),
- FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_BT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_CONFIG(CONFIG_ARM64_BTI, VISIBLE, HIDDEN),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR1_EL1_BT_SHIFT, 4, 0),
ARM64_FTR_END,
};
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH 9/9] arm64: Remove FTR_VISIBLE_IF_IS_ENABLED()
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
` (7 preceding siblings ...)
2026-02-19 19:55 ` [PATCH 8/9] arm64: Convert CONFIG_ARM64_BTI " Marc Zyngier
@ 2026-02-19 19:55 ` Marc Zyngier
8 siblings, 0 replies; 17+ messages in thread
From: Marc Zyngier @ 2026-02-19 19:55 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm
Cc: Fuad Tabba, Will Deacon, Catalin Marinas, Mark Rutland,
Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu
Now that FTR_VISIBLE_IF_IS_ENABLED() is completely unused, remove it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 2731ea13c2c86..adaae3060851c 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -62,8 +62,6 @@ enum ftr_visibility {
#define FTR_CONFIG(c, e, d) \
(IS_ENABLED(c) ? FTR_ ## e : FTR_ ## d)
-#define FTR_VISIBLE_IF_IS_ENABLED(c) FTR_CONFIG(c, VISIBLE, HIDDEN)
-
struct arm64_ftr_bits {
bool sign; /* Value is signed ? */
enum ftr_visibility visibility;
--
2.47.3
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-19 19:55 ` [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers Marc Zyngier
@ 2026-02-20 8:36 ` Fuad Tabba
2026-02-20 10:09 ` Marc Zyngier
0 siblings, 1 reply; 17+ messages in thread
From: Fuad Tabba @ 2026-02-20 8:36 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Hi Marc,
On Thu, 19 Feb 2026 at 19:55, Marc Zyngier <maz@kernel.org> wrote:
>
> We currently make support for some features such as Pointer Auth,
> SVE or S1POE a compile time decision.
>
> However, while we hide that feature from userspace when such support
> is disabled, we still leave the value provided by the HW visible to
> the rest of the kernel, including KVM.
>
> This has the potential to result in ugly state leakage, as half of
> the kernel knows about the feature, and the other half doesn't.
>
> Short of completely banning such compilation options and restoring
> universal knowledge, introduce the possibility to fully remove such
> knowledge from the sanitised id registers.
>
> This has more or less the same effect as the idreg override that
> a user can pass on the command-line, only defined at build-time.
>
> For that purpose, we provide a new macro (FTR_CONFIG()) that defines
> the behaviour of a feature, both when enabled and disabled.
>
> At this stage, nothing is making use of this anti-feature.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/cpufeature.h | 15 ++++++++++-----
> arch/arm64/kernel/cpufeature.c | 21 ++++++++++++++++-----
> 2 files changed, 26 insertions(+), 10 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index 4de51f8d92cba..2731ea13c2c86 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -53,15 +53,20 @@ enum ftr_type {
> #define FTR_SIGNED true /* Value should be treated as signed */
> #define FTR_UNSIGNED false /* Value should be treated as unsigned */
>
> -#define FTR_VISIBLE true /* Feature visible to the user space */
> -#define FTR_HIDDEN false /* Feature is hidden from the user */
> +enum ftr_visibility {
> + FTR_HIDDEN, /* Feature hidden from the user */
> + FTR_ALL_HIDDEN, /* Feature hidden from kernel, user and KVM */
> + FTR_VISIBLE, /* Feature visible to all observers */
> +};
> +
> +#define FTR_CONFIG(c, e, d) \
> + (IS_ENABLED(c) ? FTR_ ## e : FTR_ ## d)
>
> -#define FTR_VISIBLE_IF_IS_ENABLED(config) \
> - (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
> +#define FTR_VISIBLE_IF_IS_ENABLED(c) FTR_CONFIG(c, VISIBLE, HIDDEN)
>
> struct arm64_ftr_bits {
> bool sign; /* Value is signed ? */
> - bool visible;
> + enum ftr_visibility visibility;
> bool strict; /* CPU Sanity check: strict matching required ? */
> enum ftr_type type;
> u8 shift;
This introduces padding bloat, since each enum is int-sized while the
bools are a single byte. Should you group the bools together and the
enums together?
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index c840a93b9ef95..b34a39967d111 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -192,7 +192,7 @@ void dump_cpu_features(void)
> #define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
> { \
> .sign = SIGNED, \
> - .visible = VISIBLE, \
> + .visibility = VISIBLE, \
> .strict = STRICT, \
> .type = TYPE, \
> .shift = SHIFT, \
> @@ -1057,17 +1057,28 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
> ftrp->shift);
> }
>
> - val = arm64_ftr_set_value(ftrp, val, ftr_new);
> -
> valid_mask |= ftr_mask;
> if (!ftrp->strict)
Should FTR_ALL_HIDDEN also be removed from strict_mask? i.e.
- if (!ftrp->strict)
+ if (!ftrp->strict || ftrp->visibility == FTR_ALL_HIDDEN)
(or under the ALL_HIDDEN case below).
> strict_mask &= ~ftr_mask;
> - if (ftrp->visible)
> +
> + switch (ftrp->visibility) {
> + case FTR_VISIBLE:
> + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> user_mask |= ftr_mask;
> - else
> + break;
> + case FTR_ALL_HIDDEN:
> + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> + reg->user_val = arm64_ftr_set_value(ftrp,
> + reg->user_val,
> + ftrp->safe_val);
Should we also take the safe value in update_cpu_ftr_reg() for FTR_ALL_HIDDEN?
Cheers,
/fuad
> + break;
> + case FTR_HIDDEN:
> + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> reg->user_val = arm64_ftr_set_value(ftrp,
> reg->user_val,
> ftrp->safe_val);
> + break;
> + }
> }
>
> val &= valid_mask;
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-20 8:36 ` Fuad Tabba
@ 2026-02-20 10:09 ` Marc Zyngier
2026-02-20 11:06 ` Fuad Tabba
0 siblings, 1 reply; 17+ messages in thread
From: Marc Zyngier @ 2026-02-20 10:09 UTC (permalink / raw)
To: Fuad Tabba
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Hey Fuad,
Thanks for taking a look.
On Fri, 20 Feb 2026 08:36:04 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> Hi Marc,
>
> On Thu, 19 Feb 2026 at 19:55, Marc Zyngier <maz@kernel.org> wrote:
> >
> > We currently make support for some features such as Pointer Auth,
> > SVE or S1POE a compile time decision.
> >
> > However, while we hide that feature from userspace when such support
> > is disabled, we still leave the value provided by the HW visible to
> > the rest of the kernel, including KVM.
> >
> > This has the potential to result in ugly state leakage, as half of
> > the kernel knows about the feature, and the other doesn't.
> >
> > Short of completely banning such compilation options and restoring
> > universal knowledge, introduce the possibility to fully remove such
> > knowledge from the sanitised id registers.
> >
> > This has more or less the same effect as the idreg override that
> > a user can pass on the command-line, only defined at build-time.
> >
> > For that purpose, we provide a new macro (FTR_CONFIG()) that defines
> > the behaviour of a feature, both when enabled and disabled.
> >
> > At this stage, nothing is making use of this anti-feature.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/include/asm/cpufeature.h | 15 ++++++++++-----
> > arch/arm64/kernel/cpufeature.c | 21 ++++++++++++++++-----
> > 2 files changed, 26 insertions(+), 10 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > index 4de51f8d92cba..2731ea13c2c86 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -53,15 +53,20 @@ enum ftr_type {
> > #define FTR_SIGNED true /* Value should be treated as signed */
> > #define FTR_UNSIGNED false /* Value should be treated as unsigned */
> >
> > -#define FTR_VISIBLE true /* Feature visible to the user space */
> > -#define FTR_HIDDEN false /* Feature is hidden from the user */
> > +enum ftr_visibility {
> > + FTR_HIDDEN, /* Feature hidden from the user */
> > + FTR_ALL_HIDDEN, /* Feature hidden from kernel, user and KVM */
> > + FTR_VISIBLE, /* Feature visible to all observers */
> > +};
> > +
> > +#define FTR_CONFIG(c, e, d) \
> > + (IS_ENABLED(c) ? FTR_ ## e : FTR_ ## d)
> >
> > -#define FTR_VISIBLE_IF_IS_ENABLED(config) \
> > - (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
> > +#define FTR_VISIBLE_IF_IS_ENABLED(c) FTR_CONFIG(c, VISIBLE, HIDDEN)
> >
> > struct arm64_ftr_bits {
> > bool sign; /* Value is signed ? */
> > - bool visible;
> > + enum ftr_visibility visibility;
> > bool strict; /* CPU Sanity check: strict matching required ? */
> > enum ftr_type type;
> > u8 shift;
>
> This introduces bloat. Should you group the bools together and the
> enums together?
That should be possible as long as everybody ends up using the
__ARM64_FTR_BITS macro. On the other hand, I could reduce the width of
the enum to something more appropriate and keep the current layout.
With the current changes, this looks like this:
struct arm64_ftr_bits {
bool sign; /* 0 1 */
/* XXX 3 bytes hole, try to pack */
enum ftr_visibility visibility; /* 4 4 */
bool strict; /* 8 1 */
/* XXX 3 bytes hole, try to pack */
enum ftr_type type; /* 12 4 */
u8 shift; /* 16 1 */
u8 width; /* 17 1 */
/* XXX 6 bytes hole, try to pack */
s64 safe_val; /* 24 8 */
/* size: 32, cachelines: 1, members: 7 */
/* sum members: 20, holes: 3, sum holes: 12 */
/* last cacheline: 32 bytes */
};
which is 8 bytes larger than the upstream version. But if I do this:
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index adaae3060851c..d8accc9c94fab 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -64,9 +64,9 @@ enum ftr_visibility {
struct arm64_ftr_bits {
bool sign; /* Value is signed ? */
- enum ftr_visibility visibility;
+ enum ftr_visibility visibility:8;
bool strict; /* CPU Sanity check: strict matching required ? */
- enum ftr_type type;
+ enum ftr_type type:8;
u8 shift;
u8 width;
s64 safe_val; /* safe value for FTR_EXACT features */
I end up with the following layout:
struct arm64_ftr_bits {
bool sign; /* 0 1 */
/* Bitfield combined with previous fields */
enum ftr_visibility visibility:8; /* 0: 8 4 */
/* Bitfield combined with next fields */
bool strict; /* 2 1 */
/* Bitfield combined with previous fields */
enum ftr_type type:8; /* 0:24 4 */
u8 shift; /* 4 1 */
u8 width; /* 5 1 */
/* XXX 2 bytes hole, try to pack */
s64 safe_val; /* 8 8 */
/* size: 16, cachelines: 1, members: 7 */
/* sum members: 12, holes: 1, sum holes: 2 */
/* sum bitfield members: 16 bits (2 bytes) */
/* last cacheline: 16 bytes */
};
which is 8 bytes *smaller* than the upstream version, without
reordering. WDYT?
>
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index c840a93b9ef95..b34a39967d111 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -192,7 +192,7 @@ void dump_cpu_features(void)
> > #define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
> > { \
> > .sign = SIGNED, \
> > - .visible = VISIBLE, \
> > + .visibility = VISIBLE, \
> > .strict = STRICT, \
> > .type = TYPE, \
> > .shift = SHIFT, \
> > @@ -1057,17 +1057,28 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
> > ftrp->shift);
> > }
> >
> > - val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > -
> > valid_mask |= ftr_mask;
> > if (!ftrp->strict)
>
> Should FTR_ALL_HIDDEN also be removed from strict_mask? i.e.
>
> - if (!ftrp->strict)
> + if (!ftrp->strict || ftrp->visibility == FTR_ALL_HIDDEN)
>
> (or under the ALL_HIDDEN case below).
If we run on a system that has diverging features, we probably want to
know irrespective of the feature being enabled. After all, the
integration is out of spec, and conveying that information is
important, just in case the diverging feature affects behaviour in
funny ways...
>
>
> > strict_mask &= ~ftr_mask;
> > - if (ftrp->visible)
> > +
> > + switch (ftrp->visibility) {
> > + case FTR_VISIBLE:
> > + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > user_mask |= ftr_mask;
> > - else
> > + break;
> > + case FTR_ALL_HIDDEN:
> > + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> > + reg->user_val = arm64_ftr_set_value(ftrp,
> > + reg->user_val,
> > + ftrp->safe_val);
>
> Should we also take the safe value in update_cpu_ftr_reg() for FTR_ALL_HIDDEN?
I would expect arm64_ftr_safe_value() to do the right thing at that
stage, given that we have primed the boot CPU with the safe value, and
that we rely on that bootstrap to make the registers converge towards
something safe. This is also what happens for the command-line override.
Or have you spotted a case where this goes wrong?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-20 10:09 ` Marc Zyngier
@ 2026-02-20 11:06 ` Fuad Tabba
2026-02-20 14:52 ` Marc Zyngier
0 siblings, 1 reply; 17+ messages in thread
From: Fuad Tabba @ 2026-02-20 11:06 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Hi Marc,
On Fri, 20 Feb 2026 at 10:09, Marc Zyngier <maz@kernel.org> wrote:
>
> Hey Fuad,
>
> Thanks for taking a look.
>
> On Fri, 20 Feb 2026 08:36:04 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Thu, 19 Feb 2026 at 19:55, Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > We currently make support for some features such as Pointer Auth,
> > > SVE or S1POE a compile time decision.
> > >
> > > However, while we hide that feature from userspace when such support
> > > is disabled, we still leave the value provided by the HW visible to
> > > the rest of the kernel, including KVM.
> > >
> > > This has the potential to result in ugly state leakage, as half of
> > > the kernel knows about the feature, and the other doesn't.
> > >
> > > Short of completely banning such compilation options and restoring
> > > universal knowledge, introduce the possibility to fully remove such
> > > knowledge from the sanitised id registers.
> > >
> > > This has more or less the same effect as the idreg override that
> > > a user can pass on the command-line, only defined at build-time.
> > >
> > > For that purpose, we provide a new macro (FTR_CONFIG()) that defines
> > > the behaviour of a feature, both when enabled and disabled.
> > >
> > > At this stage, nothing is making use of this anti-feature.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > > arch/arm64/include/asm/cpufeature.h | 15 ++++++++++-----
> > > arch/arm64/kernel/cpufeature.c | 21 ++++++++++++++++-----
> > > 2 files changed, 26 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> > > index 4de51f8d92cba..2731ea13c2c86 100644
> > > --- a/arch/arm64/include/asm/cpufeature.h
> > > +++ b/arch/arm64/include/asm/cpufeature.h
> > > @@ -53,15 +53,20 @@ enum ftr_type {
> > > #define FTR_SIGNED true /* Value should be treated as signed */
> > > #define FTR_UNSIGNED false /* Value should be treated as unsigned */
> > >
> > > -#define FTR_VISIBLE true /* Feature visible to the user space */
> > > -#define FTR_HIDDEN false /* Feature is hidden from the user */
> > > +enum ftr_visibility {
> > > + FTR_HIDDEN, /* Feature hidden from the user */
> > > + FTR_ALL_HIDDEN, /* Feature hidden from kernel, user and KVM */
> > > + FTR_VISIBLE, /* Feature visible to all observers */
> > > +};
> > > +
> > > +#define FTR_CONFIG(c, e, d) \
> > > + (IS_ENABLED(c) ? FTR_ ## e : FTR_ ## d)
> > >
> > > -#define FTR_VISIBLE_IF_IS_ENABLED(config) \
> > > - (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
> > > +#define FTR_VISIBLE_IF_IS_ENABLED(c) FTR_CONFIG(c, VISIBLE, HIDDEN)
> > >
> > > struct arm64_ftr_bits {
> > > bool sign; /* Value is signed ? */
> > > - bool visible;
> > > + enum ftr_visibility visibility;
> > > bool strict; /* CPU Sanity check: strict matching required ? */
> > > enum ftr_type type;
> > > u8 shift;
> >
> > This introduces bloat. Should you group the bools together and the
> > enums together?
>
> That should be possible as long as everybody ends up using the
> __ARM64_FTR_BITS macro. On the other hand, I could reduce the width of
> the enum to something more appropriate and keep the current layout.
>
> With the current changes, this looks like this:
>
> struct arm64_ftr_bits {
> bool sign; /* 0 1 */
>
> /* XXX 3 bytes hole, try to pack */
>
> enum ftr_visibility visibility; /* 4 4 */
> bool strict; /* 8 1 */
>
> /* XXX 3 bytes hole, try to pack */
>
> enum ftr_type type; /* 12 4 */
> u8 shift; /* 16 1 */
> u8 width; /* 17 1 */
>
> /* XXX 6 bytes hole, try to pack */
>
> s64 safe_val; /* 24 8 */
>
> /* size: 32, cachelines: 1, members: 7 */
> /* sum members: 20, holes: 3, sum holes: 12 */
> /* last cacheline: 32 bytes */
> };
>
> which is 8 bytes larger than the upstream version. But if I do this:
>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index adaae3060851c..d8accc9c94fab 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -64,9 +64,9 @@ enum ftr_visibility {
>
> struct arm64_ftr_bits {
> bool sign; /* Value is signed ? */
> - enum ftr_visibility visibility;
> + enum ftr_visibility visibility:8;
> bool strict; /* CPU Sanity check: strict matching required ? */
> - enum ftr_type type;
> + enum ftr_type type:8;
> u8 shift;
> u8 width;
> s64 safe_val; /* safe value for FTR_EXACT features */
>
> I end up with the following layout:
>
> struct arm64_ftr_bits {
> bool sign; /* 0 1 */
>
> /* Bitfield combined with previous fields */
>
> enum ftr_visibility visibility:8; /* 0: 8 4 */
>
> /* Bitfield combined with next fields */
>
> bool strict; /* 2 1 */
>
> /* Bitfield combined with previous fields */
>
> enum ftr_type type:8; /* 0:24 4 */
> u8 shift; /* 4 1 */
> u8 width; /* 5 1 */
>
> /* XXX 2 bytes hole, try to pack */
>
> s64 safe_val; /* 8 8 */
>
> /* size: 16, cachelines: 1, members: 7 */
> /* sum members: 12, holes: 1, sum holes: 2 */
> /* sum bitfield members: 16 bits (2 bytes) */
> /* last cacheline: 16 bytes */
> };
>
> which is 8 bytes *smaller* than the upstream version, without
> reordering. WDYT?
Using 8-bit bitfields for the enums shrinks the struct from 32 bytes
down to 16 (8 bytes smaller than the upstream 24) while preserving
the structure's semantic readability. I like this approach.
>
> >
> > > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > > index c840a93b9ef95..b34a39967d111 100644
> > > --- a/arch/arm64/kernel/cpufeature.c
> > > +++ b/arch/arm64/kernel/cpufeature.c
> > > @@ -192,7 +192,7 @@ void dump_cpu_features(void)
> > > #define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
> > > { \
> > > .sign = SIGNED, \
> > > - .visible = VISIBLE, \
> > > + .visibility = VISIBLE, \
> > > .strict = STRICT, \
> > > .type = TYPE, \
> > > .shift = SHIFT, \
> > > @@ -1057,17 +1057,28 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
> > > ftrp->shift);
> > > }
> > >
> > > - val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > > -
> > > valid_mask |= ftr_mask;
> > > if (!ftrp->strict)
> >
> > Should FTR_ALL_HIDDEN also be removed from strict_mask? i.e.
> >
> > - if (!ftrp->strict)
> > + if (!ftrp->strict || ftrp->visibility == FTR_ALL_HIDDEN)
> >
> > (or under the ALL_HIDDEN case below).
>
> If we run on a system that has diverging features, we probably want to
> know irrespective of the feature being enabled. After all, the
> integration is out of spec, and conveying that information is
> important, just in case the diverging feature affects behaviour in
> funny ways...
I see. If the kernel is booted on broken/asymmetric hardware, we still
want to warn the user about the underlying violation, even if we are
pretending the feature doesn't exist for our own purposes. Leaving it
in the strict_mask is the correct approach to retain that diagnostic
capability.
> >
> >
> > > strict_mask &= ~ftr_mask;
> > > - if (ftrp->visible)
> > > +
> > > + switch (ftrp->visibility) {
> > > + case FTR_VISIBLE:
> > > + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > > user_mask |= ftr_mask;
> > > - else
> > > + break;
> > > + case FTR_ALL_HIDDEN:
> > > + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> > > + reg->user_val = arm64_ftr_set_value(ftrp,
> > > + reg->user_val,
> > > + ftrp->safe_val);
> >
> > Should we also take the safe value in update_cpu_ftr_reg() for FTR_ALL_HIDDEN?
>
> I would expect arm64_ftr_safe_value() to do the right thing at that
> stage, given that we have primed the boot CPU with the safe value, and
> that we rely on that bootstrap to make the registers converge towards
> something safe. This is also what happens for the command-line override.
>
> Or have you spotted a case where this goes wrong?
I think so... What if a future FTR_ALL_HIDDEN feature is defined as
FTR_HIGHER_SAFE? Wouldn't that cause problems on secondary CPUs?
init_cpu_ftr_reg() primes sys_val with safe_val on the boot CPU,
update_cpu_ftr_reg() on secondary CPUs compares the hardware value
(ftr_new) against safe_val (ftr_cur). For FTR_HIGHER_SAFE,
arm64_ftr_safe_value() returns max(ftr_new, safe_val). Since the
hardware value is higher, update_cpu_ftr_reg() overwrites sys_val with
the hardware value, resurrecting the hidden feature globally.
The features in this patch are FTR_LOWER_SAFE or FTR_EXACT (which
happen to sink to safe_val), which is why it's not a problem with
these current features.
Cheers,
/fuad
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-20 11:06 ` Fuad Tabba
@ 2026-02-20 14:52 ` Marc Zyngier
2026-02-20 15:36 ` Fuad Tabba
0 siblings, 1 reply; 17+ messages in thread
From: Marc Zyngier @ 2026-02-20 14:52 UTC (permalink / raw)
To: Fuad Tabba
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
On Fri, 20 Feb 2026 11:06:03 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> > > > + switch (ftrp->visibility) {
> > > > + case FTR_VISIBLE:
> > > > + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > > > user_mask |= ftr_mask;
> > > > - else
> > > > + break;
> > > > + case FTR_ALL_HIDDEN:
> > > > + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> > > > + reg->user_val = arm64_ftr_set_value(ftrp,
> > > > + reg->user_val,
> > > > + ftrp->safe_val);
> > >
> > > Should we also take the safe value in update_cpu_ftr_reg() for FTR_ALL_HIDDEN?
> >
> > I would expect arm64_ftr_safe_value() to do the right thing at that
> > stage, given that we have primed the boot CPU with the safe value, and
> > that we rely on that bootstrap to make the registers converge towards
> > something safe. This is also what happens for the command-line override.
> >
> > Or have you spotted a case where this goes wrong?
>
> I think so... What if a future FTR_ALL_HIDDEN feature is defined as
> FTR_HIGHER_SAFE? Wouldn't that cause problems on secondary CPUs?
> init_cpu_ftr_reg() primes sys_val with safe_val on the boot CPU,
> update_cpu_ftr_reg() on secondary CPUs compares the hardware value
> (ftr_new) against safe_val (ftr_cur). For FTR_HIGHER_SAFE,
> arm64_ftr_safe_value() returns max(ftr_new, safe_val). Since the
> hardware value is higher, update_cpu_ftr_reg() overwrites sys_val with
> the hardware value, resurrecting the hidden feature globally.
Huh, that's an interesting observation.
SpecSEI is the only case we currently deal with that is
HIGHER_SAFE. But look at what this feature describes: bloody
speculative SErrors! Not taking this into account could be really
deadly, and the kernel really ought to know about it.
>
> The features in this patch are FTR_LOWER_SAFE or FTR_EXACT (which
> happen to sink to safe_val), which is why it's not a problem with
> these current features.
My conclusion is that it is simply not safe to make such a feature
conditional in any way. Note that's also the case for an override:
look at how we refuse to downgrade a value in init_cpu_ftr_reg():
if ((ftr_mask & reg->override->mask) == ftr_mask) {
s64 tmp = arm64_ftr_safe_value(ftrp, ftr_ovr, ftr_new);
char *str = NULL;
if (ftr_ovr != tmp) {
/* Unsafe, remove the override */
reg->override->mask &= ~ftr_mask;
reg->override->val &= ~ftr_mask;
tmp = ftr_ovr;
str = "ignoring override";
[...]
I think we must prevent this downgrade the same way, meaning that
ALL_HIDDEN and FTR_HIGHER are mutually exclusive.
How about that:
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d58931e63a0b6..2cae00b4b0c5f 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1067,7 +1067,14 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
user_mask |= ftr_mask;
break;
case FTR_ALL_HIDDEN:
- val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
+ /*
+ * ALL_HIDDEN and HIGHER_SAFE are incompatible.
+ * Only hide from userspace, and log the oddity.
+ */
+ if (WARN_ON(ftrp->type == FTR_HIGHER_SAFE))
+ val = arm64_ftr_set_value(ftrp, val, ftr_new);
+ else
+ val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
reg->user_val = arm64_ftr_set_value(ftrp,
reg->user_val,
ftrp->safe_val);
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-20 14:52 ` Marc Zyngier
@ 2026-02-20 15:36 ` Fuad Tabba
2026-02-23 9:48 ` Marc Zyngier
0 siblings, 1 reply; 17+ messages in thread
From: Fuad Tabba @ 2026-02-20 15:36 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Hi Marc,
On Fri, 20 Feb 2026 at 14:52, Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 20 Feb 2026 11:06:03 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > > > > + switch (ftrp->visibility) {
> > > > > + case FTR_VISIBLE:
> > > > > + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > > > > user_mask |= ftr_mask;
> > > > > - else
> > > > > + break;
> > > > > + case FTR_ALL_HIDDEN:
> > > > > + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> > > > > + reg->user_val = arm64_ftr_set_value(ftrp,
> > > > > + reg->user_val,
> > > > > + ftrp->safe_val);
> > > >
> > > > Should we also take the safe value in update_cpu_ftr_reg() for FTR_ALL_HIDDEN?
> > >
> > > I would expect arm64_ftr_safe_value() to do the right thing at that
> > > stage, given that we have primed the boot CPU with the safe value, and
> > > that we rely on that bootstrap to make the registers converge towards
> > > something safe. This is also what happens for the command-line override.
> > >
> > > Or have you spotted a case where this goes wrong?
> >
> > I think so... What if a future FTR_ALL_HIDDEN feature is defined as
> > FTR_HIGHER_SAFE? Wouldn't that cause problems on secondary CPUs?
> > init_cpu_ftr_reg() primes sys_val with safe_val on the boot CPU,
> > update_cpu_ftr_reg() on secondary CPUs compares the hardware value
> > (ftr_new) against safe_val (ftr_cur). For FTR_HIGHER_SAFE,
> > arm64_ftr_safe_value() returns max(ftr_new, safe_val). Since the
> > hardware value is higher, update_cpu_ftr_reg() overwrites sys_val with
> > the hardware value, resurrecting the hidden feature globally.
>
> Huh, that's an interesting observation.
>
> SpecSEI is the only case we currently deal with that is
> HIGHER_SAFE. But look at what this feature describes: bloody
> speculative SErrors! Not taking this into account could be really
> deadly, and the kernel really ought to know about it.
I didn't think that much about it, but I guess features designated as
FTR_HIGHER_SAFE (like SpecSEI) usually represent hardware errata or
critical mitigations. So we should not be hiding them at all.
> >
> > The features in this patch are FTR_LOWER_SAFE or FTR_EXACT (which
> > happen to sink to safe_val), which is why it's not a problem with
> > these current features.
>
> My conclusion is that it is simply not safe to make such a feature
> conditional in any way. Note that's also the case for an override:
> look at how we will refuse to downgrade a value in init_cpu_ftr_reg():
>
> if ((ftr_mask & reg->override->mask) == ftr_mask) {
> s64 tmp = arm64_ftr_safe_value(ftrp, ftr_ovr, ftr_new);
> char *str = NULL;
>
> if (ftr_ovr != tmp) {
> /* Unsafe, remove the override */
> reg->override->mask &= ~ftr_mask;
> reg->override->val &= ~ftr_mask;
> tmp = ftr_ovr;
> str = "ignoring override";
> [...]
>
> I think we must prevent this downgrade the same way, meaning that
> ALL_HIDDEN and FTR_HIGHER are mutually exclusive.
>
> How about that:
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index d58931e63a0b6..2cae00b4b0c5f 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1067,7 +1067,14 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
> user_mask |= ftr_mask;
> break;
> case FTR_ALL_HIDDEN:
> - val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> + /*
> + * ALL_HIDDEN and HIGHER_SAFE are incompatible.
> + * Only hide from userspace, and log the oddity.
> + */
> + if (WARN_ON(ftrp->type == FTR_HIGHER_SAFE))
> + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> + else
> + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> reg->user_val = arm64_ftr_set_value(ftrp,
> reg->user_val,
> ftrp->safe_val);
>
Yes, I think WARN_ON() here is the right call.
That said, I still think you should explicitly short-circuit
update_cpu_ftr_reg() for FTR_ALL_HIDDEN features, in addition to the
WARN_ON(). Relying on arm64_ftr_safe_value() to naturally preserve the
safe_val during secondary CPU boot seems mathematically fragile.
Take MTE_frac as an example. It uses S_ARM64_FTR_BITS and
FTR_LOWER_SAFE with a safe_val of 0. If it were marked FTR_ALL_HIDDEN,
init_cpu_ftr_reg() would prime sys_val with 0. But if a secondary CPU
boots and reports -1 (NI), arm64_ftr_safe_value() will execute min(-1,
0) and return -1. update_cpu_ftr_reg() will then overwrite the primed
safe_val (0) with -1. The "hidden" state established by the boot CPU
is gone, and the feature's hardware state is now exposed globally.
Note that MTE is currently ALL_HIDDEN when configured out, so it's not
totally inconceivable that someone decides to make MTE_frac ALL_HIDDEN
as well. Explicitly short-circuiting for FTR_ALL_HIDDEN features in
update_cpu_ftr_reg() seems to be the safer bet here.
Cheers,
/fuad
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-20 15:36 ` Fuad Tabba
@ 2026-02-23 9:48 ` Marc Zyngier
2026-02-23 18:18 ` Suzuki K Poulose
0 siblings, 1 reply; 17+ messages in thread
From: Marc Zyngier @ 2026-02-23 9:48 UTC (permalink / raw)
To: Fuad Tabba
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Hi Fuad,
On Fri, 20 Feb 2026 15:36:37 +0000,
Fuad Tabba <tabba@google.com> wrote:
>
> > I think we must prevent this downgrade the same way, meaning that
> > ALL_HIDDEN and FTR_HIGHER are mutually exclusive.
> >
> > How about that:
> >
> > diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> > index d58931e63a0b6..2cae00b4b0c5f 100644
> > --- a/arch/arm64/kernel/cpufeature.c
> > +++ b/arch/arm64/kernel/cpufeature.c
> > @@ -1067,7 +1067,14 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
> > user_mask |= ftr_mask;
> > break;
> > case FTR_ALL_HIDDEN:
> > - val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> > + /*
> > + * ALL_HIDDEN and HIGHER_SAFE are incompatible.
> > + * Only hide from userspace, and log the oddity.
> > + */
> > + if (WARN_ON(ftrp->type == FTR_HIGHER_SAFE))
> > + val = arm64_ftr_set_value(ftrp, val, ftr_new);
> > + else
> > + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
> > reg->user_val = arm64_ftr_set_value(ftrp,
> > reg->user_val,
> > ftrp->safe_val);
> >
>
> Yes, I think WARN_ON() here is the right call.
>
> That said, I still think you should explicitly short-circuit
> update_cpu_ftr_reg() for FTR_ALL_HIDDEN features, in addition to the
> WARN_ON(). Relying on arm64_ftr_safe_value() to naturally preserve the
> safe_val during secondary CPU boot seems mathematically fragile.
>
> Take MTE_frac as an example. It uses S_ARM64_FTR_BITS and
> FTR_LOWER_SAFE with a safe_val of 0. If it were marked FTR_ALL_HIDDEN,
> init_cpu_ftr_reg() would prime sys_val with 0. But if a secondary CPU
> boots and reports -1 (NI), arm64_ftr_safe_value() will execute min(-1,
> 0) and return -1. update_cpu_ftr_reg() will then overwrite the primed
> safe_val (0) with -1. The "hidden" state established by the boot CPU
> is gone, and the feature's hardware state is now exposed globally.
>
> Note that MTE is currently ALL_HIDDEN when configured out, so it's not
> totally inconceivable that someone decides to make MTE_frac ALL_HIDDEN
> as well. Explicitly short-circuiting for FTR_ALL_HIDDEN features in
> update_cpu_ftr_reg() seems to be the safer bet here.
Right, the signed-field example is a pretty compelling argument. And we
should do the same thing for overrides, probably as a preliminary
patch.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers
2026-02-23 9:48 ` Marc Zyngier
@ 2026-02-23 18:18 ` Suzuki K Poulose
0 siblings, 0 replies; 17+ messages in thread
From: Suzuki K Poulose @ 2026-02-23 18:18 UTC (permalink / raw)
To: Marc Zyngier, Fuad Tabba
Cc: linux-arm-kernel, kvmarm, Will Deacon, Catalin Marinas,
Mark Rutland, Joey Gouly, Oliver Upton, Zenghui Yu
On 23/02/2026 09:48, Marc Zyngier wrote:
> Hi Fuad,
>
> On Fri, 20 Feb 2026 15:36:37 +0000,
> Fuad Tabba <tabba@google.com> wrote:
>>
>>> I think we must prevent this downgrade the same way, meaning that
>>> ALL_HIDDEN and FTR_HIGHER are mutually exclusive.
>>>
>>> How about that:
>>>
>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>>> index d58931e63a0b6..2cae00b4b0c5f 100644
>>> --- a/arch/arm64/kernel/cpufeature.c
>>> +++ b/arch/arm64/kernel/cpufeature.c
>>> @@ -1067,7 +1067,14 @@ static void init_cpu_ftr_reg(u32 sys_reg, u64 new)
>>> user_mask |= ftr_mask;
>>> break;
>>> case FTR_ALL_HIDDEN:
>>> - val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
>>> + /*
>>> + * ALL_HIDDEN and HIGHER_SAFE are incompatible.
>>> + * Only hide from userspace, and log the oddity.
>>> + */
>>> + if (WARN_ON(ftrp->type == FTR_HIGHER_SAFE))
>>> + val = arm64_ftr_set_value(ftrp, val, ftr_new);
>>> + else
>>> + val = arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
>>> reg->user_val = arm64_ftr_set_value(ftrp,
>>> reg->user_val,
>>> ftrp->safe_val);
>>>
>>
>> Yes, I think WARN_ON() here is the right call.
>>
>> That said, I still think you should explicitly short-circuit
>> update_cpu_ftr_reg() for FTR_ALL_HIDDEN features, in addition to the
>> WARN_ON(). Relying on arm64_ftr_safe_value() to naturally preserve the
>> safe_val during secondary CPU boot seems mathematically fragile.
>>
>> Take MTE_frac as an example. It uses S_ARM64_FTR_BITS and
>> FTR_LOWER_SAFE with a safe_val of 0. If it were marked FTR_ALL_HIDDEN,
>> init_cpu_ftr_reg() would prime sys_val with 0. But if a secondary CPU
>> boots and reports -1 (NI), arm64_ftr_safe_value() will execute min(-1,
>> 0) and return -1. update_cpu_ftr_reg() will then overwrite the primed
>> safe_val (0) with -1. The "hidden" state established by the boot CPU
>> is gone, and the feature's hardware state is now exposed globally.
>>
>> Note that MTE is currently ALL_HIDDEN when configured out, so it's not
>> totally inconceivable that someone decides to make MTE_frac ALL_HIDDEN
>> as well. Explicitly short-circuiting for FTR_ALL_HIDDEN features in
>> update_cpu_ftr_reg() seems to be the safer bet here.
>
> Right, the signed feature is a pretty compelling argument. And we
> should do the same thing for overrides, probably as a preliminary
> patch.
>
The suggestions look good to me, and I was thinking along similar lines
with the FTR_CONFIG() (I was away last week, now back from holidays).
One minor nit: given we now have more uses of
arm64_ftr_set_value(ftrp, x, ftrp->safe_val), could we wrap it
into something like:
static inline s64 arm64_ftr_set_safe_value(... *ftrp, s64 val)
{
	return arm64_ftr_set_value(ftrp, val, ftrp->safe_val);
}
To me that makes it way easier to comprehend what we are doing.
Thanks,
Suzuki

>
> M.
>
Thread overview: 17+ messages
2026-02-19 19:55 [PATCH 0/9] arm64: Fully disable configured-out features Marc Zyngier
2026-02-19 19:55 ` [PATCH 1/9] arm64: Add logic to fully remove features from sanitised id registers Marc Zyngier
2026-02-20 8:36 ` Fuad Tabba
2026-02-20 10:09 ` Marc Zyngier
2026-02-20 11:06 ` Fuad Tabba
2026-02-20 14:52 ` Marc Zyngier
2026-02-20 15:36 ` Fuad Tabba
2026-02-23 9:48 ` Marc Zyngier
2026-02-23 18:18 ` Suzuki K Poulose
2026-02-19 19:55 ` [PATCH 2/9] arm64: Convert CONFIG_ARM64_PTR_AUTH to FTR_CONFIG() Marc Zyngier
2026-02-19 19:55 ` [PATCH 3/9] arm64: Convert CONFIG_ARM64_SVE " Marc Zyngier
2026-02-19 19:55 ` [PATCH 4/9] arm64: Convert CONFIG_ARM64_SME " Marc Zyngier
2026-02-19 19:55 ` [PATCH 5/9] arm64: Convert CONFIG_ARM64_GCS " Marc Zyngier
2026-02-19 19:55 ` [PATCH 6/9] arm64: Convert CONFIG_ARM64_MTE " Marc Zyngier
2026-02-19 19:55 ` [PATCH 7/9] arm64: Convert CONFIG_ARM64_POE " Marc Zyngier
2026-02-19 19:55 ` [PATCH 8/9] arm64: Convert CONFIG_ARM64_BTI " Marc Zyngier
2026-02-19 19:55 ` [PATCH 9/9] arm64: Remove FTR_VISIBLE_IF_IS_ENABLED() Marc Zyngier