* [PATCH v2 00/38] arm64: Remove cpus_have_const_cap()
@ 2023-10-05 9:49 Mark Rutland
2023-10-05 9:49 ` [PATCH v2 01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps Mark Rutland
` (38 more replies)
0 siblings, 39 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
For historical reasons, cpus_have_const_cap() does more than its name
implies, and its current behaviour is more harmful than helpful. This
series removes cpus_have_const_cap(), removing some redundant code and
making the kernel more robust.
Currently, cpus_have_const_cap() is implemented as:
| static __always_inline bool cpus_have_const_cap(int num)
| {
| 	if (is_hyp_code())
| 		return cpus_have_final_cap(num);
| 	else if (system_capabilities_finalized())
| 		return __cpus_have_const_cap(num);
| 	else
| 		return cpus_have_cap(num);
| }
For hyp code this is safe and practically ideal. We finalize system
cpucaps and patch the relevant alternatives before KVM is initialized,
and so the alternative branch generated by cpus_have_final_cap() is
guaranteed to observe the finalized value of the cpucap.
For non-hyp code this is potentially unsafe and sub-optimal:
1) System cpucaps are detected on the boot CPU while secondary CPUs are
executing code. This leads to potential races around cpucaps being
detected, where the cpucaps can change at arbitrary points in time,
potentially in the middle of sequences which depend on them not
changing, e.g.
  CPU 0                               CPU 1
                                      // doesn't save PMR
                                      flags = local_daif_save();
  // detects PSEUDO-NMI
                                      // attempts to restore PMR
                                      local_daif_restore(flags);
This can potentially lead to erratic behaviour, and for stateful
sequences it would be better to use alternatives such that the entire
sequence is patched atomically.
2) For several cpucaps we perform some enablement/initialization work
between detecting the cpucap and patching alternatives. For some
features (e.g. SVE and SME) we need to record some additional
properties (e.g. vector lengths) before patching alternatives.
If patched alternative sequences consume any of the recorded
properties, it's possible that these race with the
enablement/initialization and consume stale values, which could
potentially result in erratic behaviour. It would be better to use
alternatives such that the enablement/initialization is guaranteed to
happen before any such usage.
3) Most code doesn't run between cpucaps being detected and their
alternatives being patched, and will have redundant code generated,
with an alternative branch for system_capabilities_finalized(), and a
bitmap test for cpus_have_cap(). This bloats the kernel and wastes
I-cache resources, and the resulting branching structure pessimizes
compiler output.
This is especially noticeable in parts of the kernel which need to
test a number of cpucaps in quick succession, such as exception
handlers in entry-common.c and state save/restore in fpsimd.c. Using
alternative branches directly can dramatically improve the code
generated for such paths (e.g. making the entry code several KB
smaller in some configurations).
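To make the direction concrete, the typical migration in the later
patches has the following shape (ARM64_HAS_FOO and do_foo() are
placeholder names for illustration, not a real cpucap or function):

| 	/* Before: an alternative branch on system_capabilities_finalized()
| 	 * plus a bitmap test of system_cpucaps. */
| 	if (cpus_have_const_cap(ARM64_HAS_FOO))
| 		do_foo();
|
| 	/* After: a single branch, patched when alternatives are applied.
| 	 * Since system-wide alternatives are only patched after cpucap
| 	 * enablement has run, this also provides the ordering guarantee
| 	 * described in (2) above. */
| 	if (alternative_has_cap_unlikely(ARM64_HAS_FOO))
| 		do_foo();

In most cases this change is hidden behind the existing
system_supports_*() wrappers, which are updated rather than their
callers.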
This series attempts to address the above issues by removing
cpus_have_const_cap() and migrating code over to alternative branches
wherever possible:
* Patches 1 to 2 address a couple of bugs I spotted where cpucaps
are consumed prior to being initialized.
* Patches 3 to 5 rework some low-level cpucap helpers and add new
helpers which are used later in the series.
* Patches 6 to 8 rework some feature enablement code so that this can
work in the window between cpucap detection and alternative patching
without the need to use cpus_have_const_cap().
* Patch 9 moves KVM entirely over to cpus_have_final_cap().
* Patches 10 to 13 clean up the ARM64_HAS_NO_FPSIMD cpucap, inverting
this and making it behave the same way as all other system cpucaps.
* Patches 14 to 37 migrate code away from cpus_have_const_cap().
* Patch 38 removes the now-unused cpus_have_const_cap().
The series is based on v6.6-rc3.
Since v1 [1]:
* Restore missing tags from Marc and Thomas
* Trivial rebase from v6.6-rc2 to v6.6-rc3
* Split cpucap ordering assertions into a separate patch
* Removed stale reference in __system_matches_cap()
* Folded in Reviewed-by tags
[1] https://lore.kernel.org/linux-arm-kernel/20230919092850.1940729-1-mark.rutland@arm.com/
Mark.
Mark Rutland (38):
clocksource/drivers/arm_arch_timer: Initialize evtstrm after
finalizing cpucaps
arm64/arm: xen: enlighten: Fix KPTI checks
arm64: Factor out cpucap definitions
arm64: Add cpucap_is_possible()
arm64: Add cpus_have_final_boot_cap()
arm64: Rework setup_cpu_features()
arm64: Fixup user features at boot time
arm64: Split kpti_install_ng_mappings()
arm64: kvm: Use cpus_have_final_cap() explicitly
arm64: Explicitly save/restore CPACR when probing SVE and SME
arm64: Use build-time assertions for cpucap ordering
arm64: Rename SVE/SME cpu_enable functions
arm64: Use a positive cpucap for FP/SIMD
arm64: Avoid cpus_have_const_cap() for
ARM64_HAS_{ADDRESS,GENERIC}_AUTH
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTL
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNP
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE
arm64: Avoid cpus_have_const_cap() for ARM64_MTE
arm64: Avoid cpus_have_const_cap() for ARM64_SSBS
arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2
arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64}
arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154
arm64: Avoid cpus_have_const_cap() for
ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI
arm64: Remove cpus_have_const_cap()
arch/arm/xen/enlighten.c | 25 +--
arch/arm64/include/asm/alternative-macros.h | 8 +-
arch/arm64/include/asm/arch_gicv3.h | 8 +
arch/arm64/include/asm/archrandom.h | 2 +-
arch/arm64/include/asm/cacheflush.h | 2 +-
arch/arm64/include/asm/cpucaps.h | 67 ++++++++
arch/arm64/include/asm/cpufeature.h | 96 +++++------
arch/arm64/include/asm/fpsimd.h | 35 +++-
arch/arm64/include/asm/irqflags.h | 20 +--
arch/arm64/include/asm/kvm_emulate.h | 4 +-
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/include/asm/kvm_mmu.h | 2 +-
arch/arm64/include/asm/mmu.h | 2 +-
arch/arm64/include/asm/mmu_context.h | 28 ++--
arch/arm64/include/asm/module.h | 3 +-
arch/arm64/include/asm/pgtable-prot.h | 6 +-
arch/arm64/include/asm/spectre.h | 2 +-
arch/arm64/include/asm/tlbflush.h | 7 +-
arch/arm64/include/asm/vectors.h | 2 +-
arch/arm64/kernel/cpu_errata.c | 17 --
arch/arm64/kernel/cpufeature.c | 168 ++++++++++++--------
arch/arm64/kernel/efi.c | 3 +-
arch/arm64/kernel/fpsimd.c | 81 ++++++----
arch/arm64/kernel/module-plts.c | 7 +-
arch/arm64/kernel/process.c | 2 +-
arch/arm64/kernel/proton-pack.c | 2 +-
arch/arm64/kernel/smp.c | 3 +-
arch/arm64/kernel/suspend.c | 13 +-
arch/arm64/kernel/sys_compat.c | 2 +-
arch/arm64/kernel/traps.c | 2 +-
arch/arm64/kernel/vdso.c | 2 +-
arch/arm64/kvm/arm.c | 10 +-
arch/arm64/kvm/guest.c | 4 +-
arch/arm64/kvm/hyp/pgtable.c | 4 +-
arch/arm64/kvm/mmu.c | 2 +-
arch/arm64/kvm/sys_regs.c | 2 +-
arch/arm64/kvm/vgic/vgic-v3.c | 2 +-
arch/arm64/lib/delay.c | 2 +-
arch/arm64/mm/fault.c | 2 +-
arch/arm64/mm/hugetlbpage.c | 3 +-
arch/arm64/mm/mmap.c | 2 +-
arch/arm64/mm/mmu.c | 3 +-
arch/arm64/mm/proc.S | 3 +-
arch/arm64/tools/Makefile | 4 +-
arch/arm64/tools/cpucaps | 2 +-
arch/arm64/tools/gen-cpucaps.awk | 6 +-
drivers/clocksource/arm_arch_timer.c | 31 +++-
drivers/irqchip/irq-gic-v3.c | 11 --
include/linux/cpuhotplug.h | 2 +
49 files changed, 429 insertions(+), 289 deletions(-)
create mode 100644 arch/arm64/include/asm/cpucaps.h
--
2.30.2
* [PATCH v2 01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 02/38] arm64/arm: xen: enlighten: Fix KPTI checks Mark Rutland
` (37 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
We attempt to initialize each CPU's arch_timer event stream in
arch_timer_evtstrm_enable(), which we call from the
arch_timer_starting_cpu() cpu hotplug callback which is registered early
in boot. As this is registered before we initialize the system cpucaps,
the test for ARM64_HAS_ECV will always be false for CPUs present at boot
time, and will only be taken into account for CPUs onlined late
(including those which are hotplugged out and in again).
Due to this, CPUs present at boot time may not use the intended divider
and scale factor to generate the event stream, and may differ from other
CPUs.
Correct this by only initializing the event stream after cpucaps have been
finalized, registering a separate CPU hotplug callback for the event stream
configuration. Since the caps must be finalized by this point, use
cpus_have_final_cap() to verify this.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
drivers/clocksource/arm_arch_timer.c | 31 +++++++++++++++++++++++-----
include/linux/cpuhotplug.h | 1 +
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index 7dd2c615bce23..d1e9e556da81a 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -917,7 +917,7 @@ static void arch_timer_evtstrm_enable(unsigned int divider)
#ifdef CONFIG_ARM64
/* ECV is likely to require a large divider. Use the EVNTIS flag. */
- if (cpus_have_const_cap(ARM64_HAS_ECV) && divider > 15) {
+ if (cpus_have_final_cap(ARM64_HAS_ECV) && divider > 15) {
cntkctl |= ARCH_TIMER_EVT_INTERVAL_SCALE;
divider -= 8;
}
@@ -955,6 +955,30 @@ static void arch_timer_configure_evtstream(void)
arch_timer_evtstrm_enable(max(0, lsb));
}
+static int arch_timer_evtstrm_starting_cpu(unsigned int cpu)
+{
+ arch_timer_configure_evtstream();
+ return 0;
+}
+
+static int arch_timer_evtstrm_dying_cpu(unsigned int cpu)
+{
+ cpumask_clear_cpu(smp_processor_id(), &evtstrm_available);
+ return 0;
+}
+
+static int __init arch_timer_evtstrm_register(void)
+{
+ if (!arch_timer_evt || !evtstrm_enable)
+ return 0;
+
+ return cpuhp_setup_state(CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
+ "clockevents/arm/arch_timer_evtstrm:starting",
+ arch_timer_evtstrm_starting_cpu,
+ arch_timer_evtstrm_dying_cpu);
+}
+core_initcall(arch_timer_evtstrm_register);
+
static void arch_counter_set_user_access(void)
{
u32 cntkctl = arch_timer_get_cntkctl();
@@ -1016,8 +1040,6 @@ static int arch_timer_starting_cpu(unsigned int cpu)
}
arch_counter_set_user_access();
- if (evtstrm_enable)
- arch_timer_configure_evtstream();
return 0;
}
@@ -1164,8 +1186,6 @@ static int arch_timer_dying_cpu(unsigned int cpu)
{
struct clock_event_device *clk = this_cpu_ptr(arch_timer_evt);
- cpumask_clear_cpu(smp_processor_id(), &evtstrm_available);
-
arch_timer_stop(clk);
return 0;
}
@@ -1279,6 +1299,7 @@ static int __init arch_timer_register(void)
out_free:
free_percpu(arch_timer_evt);
+ arch_timer_evt = NULL;
out:
return err;
}
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 068f7738be22a..bb4d0bcac81b6 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -172,6 +172,7 @@ enum cpuhp_state {
CPUHP_AP_ARM_L2X0_STARTING,
CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING,
CPUHP_AP_ARM_ARCH_TIMER_STARTING,
+ CPUHP_AP_ARM_ARCH_TIMER_EVTSTRM_STARTING,
CPUHP_AP_ARM_GLOBAL_TIMER_STARTING,
CPUHP_AP_JCORE_TIMER_STARTING,
CPUHP_AP_ARM_TWD_STARTING,
--
2.30.2
* [PATCH v2 02/38] arm64/arm: xen: enlighten: Fix KPTI checks
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 03/38] arm64: Factor out cpucap definitions Mark Rutland
` (36 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
When KPTI is in use, we cannot register a runstate region as XEN
requires that this is always a valid VA, which we cannot guarantee. Due
to this, xen_starting_cpu() must avoid registering each CPU's runstate
region, and xen_guest_init() must avoid setting up features that depend
upon it.
We tried to ensure that in commit:
f88af7229f6f22ce ("xen/arm: do not setup the runstate info page if kpti is enabled")
... where we added checks for xen_kernel_unmapped_at_usr(), which wraps
arm64_kernel_unmapped_at_el0() on arm64 and is always false on 32-bit
arm.
Unfortunately, as xen_guest_init() is an early_initcall, this happens
before secondary CPUs are booted and arm64 has finalized the
ARM64_UNMAP_KERNEL_AT_EL0 cpucap which backs
arm64_kernel_unmapped_at_el0(), and so this can subsequently be set as
secondary CPUs are onlined. On a big.LITTLE system where the boot CPU
does not require KPTI but some secondary CPUs do, this will result in
xen_guest_init() initializing features that depend on the runstate
region, and xen_starting_cpu() registering the runstate region on some
CPUs before KPTI is subsequently enabled, resulting in the problems the
aforementioned commit tried to avoid.
Handle this more robustly by deferring the initialization of the
runstate region until secondary CPUs have been initialized and the
ARM64_UNMAP_KERNEL_AT_EL0 cpucap has been finalized. The per-cpu work is
moved into a new hotplug starting function which is registered later
when we're certain that KPTI will not be used.
Fixes: f88af7229f6f22ce ("xen/arm: do not setup the runstate info page if kpti is enabled")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm/xen/enlighten.c | 25 ++++++++++++++++---------
include/linux/cpuhotplug.h | 1 +
2 files changed, 17 insertions(+), 9 deletions(-)
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index c392e18f1e431..9afdc4c4a5dc1 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -164,9 +164,6 @@ static int xen_starting_cpu(unsigned int cpu)
BUG_ON(err);
per_cpu(xen_vcpu, cpu) = vcpup;
- if (!xen_kernel_unmapped_at_usr())
- xen_setup_runstate_info(cpu);
-
after_register_vcpu_info:
enable_percpu_irq(xen_events_irq, 0);
return 0;
@@ -523,9 +520,6 @@ static int __init xen_guest_init(void)
return -EINVAL;
}
- if (!xen_kernel_unmapped_at_usr())
- xen_time_setup_guest();
-
if (xen_initial_domain())
pvclock_gtod_register_notifier(&xen_pvclock_gtod_notifier);
@@ -535,7 +529,13 @@ static int __init xen_guest_init(void)
}
early_initcall(xen_guest_init);
-static int __init xen_pm_init(void)
+static int xen_starting_runstate_cpu(unsigned int cpu)
+{
+ xen_setup_runstate_info(cpu);
+ return 0;
+}
+
+static int __init xen_late_init(void)
{
if (!xen_domain())
return -ENODEV;
@@ -548,9 +548,16 @@ static int __init xen_pm_init(void)
do_settimeofday64(&ts);
}
- return 0;
+ if (xen_kernel_unmapped_at_usr())
+ return 0;
+
+ xen_time_setup_guest();
+
+ return cpuhp_setup_state(CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
+ "arm/xen_runstate:starting",
+ xen_starting_runstate_cpu, NULL);
}
-late_initcall(xen_pm_init);
+late_initcall(xen_late_init);
/* empty stubs */
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index bb4d0bcac81b6..d06e31e03a5ec 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -190,6 +190,7 @@ enum cpuhp_state {
/* Must be the last timer callback */
CPUHP_AP_DUMMY_TIMER_STARTING,
CPUHP_AP_ARM_XEN_STARTING,
+ CPUHP_AP_ARM_XEN_RUNSTATE_STARTING,
CPUHP_AP_ARM_CORESIGHT_STARTING,
CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
CPUHP_AP_ARM64_ISNDEP_STARTING,
--
2.30.2
* [PATCH v2 03/38] arm64: Factor out cpucap definitions
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps Mark Rutland
2023-10-05 9:49 ` [PATCH v2 02/38] arm64/arm: xen: enlighten: Fix KPTI checks Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 12:29 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 04/38] arm64: Add cpucap_is_possible() Mark Rutland
` (35 subsequent siblings)
38 siblings, 1 reply; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
For clarity it would be nice to factor cpucap manipulation out of
<asm/cpufeature.h>, and the obvious place would be <asm/cpucap.h>, but
this will clash somewhat with <generated/asm/cpucaps.h>.
Rename <generated/asm/cpucaps.h> to <generated/asm/cpucap-defs.h>,
matching what we do for <generated/asm/sysreg-defs.h>, and introduce a
new <asm/cpucaps.h> which includes the generated header.
Subsequent patches will fill out <asm/cpucaps.h>.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 8 ++++++++
arch/arm64/tools/Makefile | 4 ++--
arch/arm64/tools/gen-cpucaps.awk | 6 +++---
3 files changed, 13 insertions(+), 5 deletions(-)
create mode 100644 arch/arm64/include/asm/cpucaps.h
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
new file mode 100644
index 0000000000000..7333b5bbf4488
--- /dev/null
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_CPUCAPS_H
+#define __ASM_CPUCAPS_H
+
+#include <asm/cpucap-defs.h>
+
+#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/tools/Makefile b/arch/arm64/tools/Makefile
index 07a93ab21a62b..fa2251d9762d9 100644
--- a/arch/arm64/tools/Makefile
+++ b/arch/arm64/tools/Makefile
@@ -3,7 +3,7 @@
gen := arch/$(ARCH)/include/generated
kapi := $(gen)/asm
-kapi-hdrs-y := $(kapi)/cpucaps.h $(kapi)/sysreg-defs.h
+kapi-hdrs-y := $(kapi)/cpucap-defs.h $(kapi)/sysreg-defs.h
targets += $(addprefix ../../../, $(kapi-hdrs-y))
@@ -17,7 +17,7 @@ quiet_cmd_gen_cpucaps = GEN $@
quiet_cmd_gen_sysreg = GEN $@
cmd_gen_sysreg = mkdir -p $(dir $@); $(AWK) -f $(real-prereqs) > $@
-$(kapi)/cpucaps.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
+$(kapi)/cpucap-defs.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
$(call if_changed,gen_cpucaps)
$(kapi)/sysreg-defs.h: $(src)/gen-sysreg.awk $(src)/sysreg FORCE
diff --git a/arch/arm64/tools/gen-cpucaps.awk b/arch/arm64/tools/gen-cpucaps.awk
index 8525980379d71..2f4f61a0af17e 100755
--- a/arch/arm64/tools/gen-cpucaps.awk
+++ b/arch/arm64/tools/gen-cpucaps.awk
@@ -15,8 +15,8 @@ function fatal(msg) {
/^#/ { next }
BEGIN {
- print "#ifndef __ASM_CPUCAPS_H"
- print "#define __ASM_CPUCAPS_H"
+ print "#ifndef __ASM_CPUCAP_DEFS_H"
+ print "#define __ASM_CPUCAP_DEFS_H"
print ""
print "/* Generated file - do not edit */"
cap_num = 0
@@ -31,7 +31,7 @@ BEGIN {
END {
printf("#define ARM64_NCAPS\t\t\t\t\t%d\n", cap_num)
print ""
- print "#endif /* __ASM_CPUCAPS_H */"
+ print "#endif /* __ASM_CPUCAP_DEFS_H */"
}
# Any lines not handled by previous rules are unexpected
--
2.30.2
* [PATCH v2 04/38] arm64: Add cpucap_is_possible()
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (2 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 03/38] arm64: Factor out cpucap definitions Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 05/38] arm64: Add cpus_have_final_boot_cap() Mark Rutland
` (34 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
Many cpucaps can only be set when certain CONFIG_* options are selected,
and we need to check the CONFIG_* option before the cap in order to
avoid generating redundant code. Due to this, we have a growing number
of helpers in <asm/cpufeature.h> of the form:
| static __always_inline bool system_supports_foo(void)
| {
| return IS_ENABLED(CONFIG_ARM64_FOO) &&
| cpus_have_const_cap(ARM64_HAS_FOO);
| }
This is unfortunate as it forces us to use cpus_have_const_cap()
unnecessarily, resulting in redundant code being generated by the
compiler. In the vast majority of cases, we only require that feature
checks indicate the presence of a feature after cpucaps have been
finalized, and so it would be sufficient to use alternative_has_cap_*().
However some code needs to handle a feature before alternatives have
been patched, and must test the system_cpucaps bitmap via
cpus_have_const_cap(). In other cases we'd like to check for
unintentional usage of a cpucap before alternatives are patched, and so
it would be preferable to use cpus_have_final_cap().
Placing the IS_ENABLED() checks in each callsite is tedious and
error-prone, and the same applies for writing wrappers for each
combination of cpucap and alternative_has_cap_*() / cpus_have_cap() /
cpus_have_final_cap(). It would be nicer if we could centralize the
knowledge of which cpucaps are possible, and have
alternative_has_cap_*(), cpus_have_cap(), and cpus_have_final_cap()
handle this automatically.
This patch adds a new cpucap_is_possible() function which will be
responsible for checking the CONFIG_* option, and updates the low-level
cpucap checks to use this. The existing CONFIG_* checks in
<asm/cpufeature.h> are moved over to cpucap_is_possible(), but the (now
trivial) wrapper functions are retained for now.
There should be no functional change as a result of this patch alone.
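As a sketch of the intended effect, taking CONFIG_ARM64_SVE=n purely as
an example: cpucap_is_possible(ARM64_SVE) becomes a compile-time
'false', so a guarded call such as the one below can be dropped from
the object code even though the wrapper still goes via
cpus_have_const_cap():

| 	/* With CONFIG_ARM64_SVE=n, system_supports_sve() reduces to a
| 	 * constant 'false', so the compiler can elide the call below. */
| 	if (system_supports_sve())
| 		sve_user_enable();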
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/alternative-macros.h | 8 ++--
arch/arm64/include/asm/cpucaps.h | 41 +++++++++++++++++++++
arch/arm64/include/asm/cpufeature.h | 39 +++++++-------------
3 files changed, 59 insertions(+), 29 deletions(-)
diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h
index 94b486192e1f1..210bb43cff2c7 100644
--- a/arch/arm64/include/asm/alternative-macros.h
+++ b/arch/arm64/include/asm/alternative-macros.h
@@ -226,8 +226,8 @@ alternative_endif
static __always_inline bool
alternative_has_cap_likely(const unsigned long cpucap)
{
- compiletime_assert(cpucap < ARM64_NCAPS,
- "cpucap must be < ARM64_NCAPS");
+ if (!cpucap_is_possible(cpucap))
+ return false;
asm_volatile_goto(
ALTERNATIVE_CB("b %l[l_no]", %[cpucap], alt_cb_patch_nops)
@@ -244,8 +244,8 @@ alternative_has_cap_likely(const unsigned long cpucap)
static __always_inline bool
alternative_has_cap_unlikely(const unsigned long cpucap)
{
- compiletime_assert(cpucap < ARM64_NCAPS,
- "cpucap must be < ARM64_NCAPS");
+ if (!cpucap_is_possible(cpucap))
+ return false;
asm_volatile_goto(
ALTERNATIVE("nop", "b %l[l_yes]", %[cpucap])
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 7333b5bbf4488..764ad4eef8591 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -5,4 +5,45 @@
#include <asm/cpucap-defs.h>
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+/*
+ * Check whether a cpucap is possible at compiletime.
+ */
+static __always_inline bool
+cpucap_is_possible(const unsigned int cap)
+{
+ compiletime_assert(__builtin_constant_p(cap),
+ "cap must be a constant");
+ compiletime_assert(cap < ARM64_NCAPS,
+ "cap must be < ARM64_NCAPS");
+
+ switch (cap) {
+ case ARM64_HAS_PAN:
+ return IS_ENABLED(CONFIG_ARM64_PAN);
+ case ARM64_SVE:
+ return IS_ENABLED(CONFIG_ARM64_SVE);
+ case ARM64_SME:
+ case ARM64_SME2:
+ case ARM64_SME_FA64:
+ return IS_ENABLED(CONFIG_ARM64_SME);
+ case ARM64_HAS_CNP:
+ return IS_ENABLED(CONFIG_ARM64_CNP);
+ case ARM64_HAS_ADDRESS_AUTH:
+ case ARM64_HAS_GENERIC_AUTH:
+ return IS_ENABLED(CONFIG_ARM64_PTR_AUTH);
+ case ARM64_HAS_GIC_PRIO_MASKING:
+ return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI);
+ case ARM64_MTE:
+ return IS_ENABLED(CONFIG_ARM64_MTE);
+ case ARM64_BTI:
+ return IS_ENABLED(CONFIG_ARM64_BTI);
+ case ARM64_HAS_TLB_RANGE:
+ return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
+ }
+
+ return true;
+}
+#endif /* __ASSEMBLY__ */
+
#endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 5bba393760557..577d9766e5abc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -450,6 +450,8 @@ static __always_inline bool system_capabilities_finalized(void)
*/
static __always_inline bool cpus_have_cap(unsigned int num)
{
+ if (__builtin_constant_p(num) && !cpucap_is_possible(num))
+ return false;
if (num >= ARM64_NCAPS)
return false;
return arch_test_bit(num, system_cpucaps);
@@ -465,8 +467,6 @@ static __always_inline bool cpus_have_cap(unsigned int num)
*/
static __always_inline bool __cpus_have_const_cap(int num)
{
- if (num >= ARM64_NCAPS)
- return false;
return alternative_has_cap_unlikely(num);
}
@@ -740,8 +740,7 @@ static __always_inline bool system_supports_fpsimd(void)
static inline bool system_uses_hw_pan(void)
{
- return IS_ENABLED(CONFIG_ARM64_PAN) &&
- cpus_have_const_cap(ARM64_HAS_PAN);
+ return cpus_have_const_cap(ARM64_HAS_PAN);
}
static inline bool system_uses_ttbr0_pan(void)
@@ -752,26 +751,22 @@ static inline bool system_uses_ttbr0_pan(void)
static __always_inline bool system_supports_sve(void)
{
- return IS_ENABLED(CONFIG_ARM64_SVE) &&
- cpus_have_const_cap(ARM64_SVE);
+ return cpus_have_const_cap(ARM64_SVE);
}
static __always_inline bool system_supports_sme(void)
{
- return IS_ENABLED(CONFIG_ARM64_SME) &&
- cpus_have_const_cap(ARM64_SME);
+ return cpus_have_const_cap(ARM64_SME);
}
static __always_inline bool system_supports_sme2(void)
{
- return IS_ENABLED(CONFIG_ARM64_SME) &&
- cpus_have_const_cap(ARM64_SME2);
+ return cpus_have_const_cap(ARM64_SME2);
}
static __always_inline bool system_supports_fa64(void)
{
- return IS_ENABLED(CONFIG_ARM64_SME) &&
- cpus_have_const_cap(ARM64_SME_FA64);
+ return cpus_have_const_cap(ARM64_SME_FA64);
}
static __always_inline bool system_supports_tpidr2(void)
@@ -781,20 +776,17 @@ static __always_inline bool system_supports_tpidr2(void)
static __always_inline bool system_supports_cnp(void)
{
- return IS_ENABLED(CONFIG_ARM64_CNP) &&
- cpus_have_const_cap(ARM64_HAS_CNP);
+ return cpus_have_const_cap(ARM64_HAS_CNP);
}
static inline bool system_supports_address_auth(void)
{
- return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
- cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+ return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
}
static inline bool system_supports_generic_auth(void)
{
- return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
- cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
+ return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
}
static inline bool system_has_full_ptr_auth(void)
@@ -804,14 +796,12 @@ static inline bool system_has_full_ptr_auth(void)
static __always_inline bool system_uses_irq_prio_masking(void)
{
- return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
- cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
+ return cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
}
static inline bool system_supports_mte(void)
{
- return IS_ENABLED(CONFIG_ARM64_MTE) &&
- cpus_have_const_cap(ARM64_MTE);
+ return cpus_have_const_cap(ARM64_MTE);
}
static inline bool system_has_prio_mask_debugging(void)
@@ -822,13 +812,12 @@ static inline bool system_has_prio_mask_debugging(void)
static inline bool system_supports_bti(void)
{
- return IS_ENABLED(CONFIG_ARM64_BTI) && cpus_have_const_cap(ARM64_BTI);
+ return cpus_have_const_cap(ARM64_BTI);
}
static inline bool system_supports_tlb_range(void)
{
- return IS_ENABLED(CONFIG_ARM64_TLB_RANGE) &&
- cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
+ return cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
}
int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
--
2.30.2
* [PATCH v2 05/38] arm64: Add cpus_have_final_boot_cap()
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (3 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 04/38] arm64: Add cpucap_is_possible() Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 06/38] arm64: Rework setup_cpu_features() Mark Rutland
` (33 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
The cpus_have_final_cap() function can be used to test a cpucap while
also verifying that we do not consume the cpucap until system
capabilities have been finalized. It would be helpful if we could do
likewise for boot cpucaps.
This patch adds a new cpus_have_final_boot_cap() helper which can be
used to test a cpucap while also verifying that boot capabilities have
been finalized. Users will be added in subsequent patches.
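As a rough illustration (the caller and cpucap below are placeholder
names, not taken from a later patch), usage looks like:

| 	void some_boot_time_init(void)
| 	{
| 		/* BUG()s if reached before boot cpucaps are finalized;
| 		 * afterwards this is a single patched branch. */
| 		if (cpus_have_final_boot_cap(ARM64_HAS_FOO))
| 			configure_foo();
| 	}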
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 577d9766e5abc..ba7c60c9ac695 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -438,6 +438,11 @@ unsigned long cpu_get_elf_hwcap2(void);
#define cpu_set_named_feature(name) cpu_set_feature(cpu_feature(name))
#define cpu_have_named_feature(name) cpu_have_feature(cpu_feature(name))
+static __always_inline bool boot_capabilities_finalized(void)
+{
+ return alternative_has_cap_likely(ARM64_ALWAYS_BOOT);
+}
+
static __always_inline bool system_capabilities_finalized(void)
{
return alternative_has_cap_likely(ARM64_ALWAYS_SYSTEM);
@@ -473,8 +478,26 @@ static __always_inline bool __cpus_have_const_cap(int num)
/*
* Test for a capability without a runtime check.
*
- * Before capabilities are finalized, this will BUG().
- * After capabilities are finalized, this is patched to avoid a runtime check.
+ * Before boot capabilities are finalized, this will BUG().
+ * After boot capabilities are finalized, this is patched to avoid a runtime
+ * check.
+ *
+ * @num must be a compile-time constant.
+ */
+static __always_inline bool cpus_have_final_boot_cap(int num)
+{
+ if (boot_capabilities_finalized())
+ return __cpus_have_const_cap(num);
+ else
+ BUG();
+}
+
+/*
+ * Test for a capability without a runtime check.
+ *
+ * Before system capabilities are finalized, this will BUG().
+ * After system capabilities are finalized, this is patched to avoid a runtime
+ * check.
*
* @num must be a compile-time constant.
*/
--
2.30.2
* [PATCH v2 06/38] arm64: Rework setup_cpu_features()
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (4 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 05/38] arm64: Add cpus_have_final_boot_cap() Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 07/38] arm64: Fixup user features at boot time Mark Rutland
` (32 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
Currently setup_cpu_features() handles a mixture of one-time kernel
feature setup (e.g. cpucaps) and one-time user feature setup (e.g. ELF
hwcaps). Subsequent patches will rework other one-time setup and expand
the logic currently in setup_cpu_features(), and in preparation for this
it would be helpful to split the kernel and user setup into separate
functions.
This patch splits setup_user_features() out of setup_cpu_features(),
with a few additional cleanups of note:
* setup_cpu_features() is renamed to setup_system_features() to make it
clear that it handles system-wide feature setup rather than cpu-local
feature setup.
* setup_system_capabilities() is folded into setup_system_features().
* Presence of TTBR0 pan is logged immediately after
update_cpu_capabilities(), so that this is guaranteed to appear
alongside all the other detected system cpucaps.
* The 'cwg' variable is removed as its value is only consumed once and
it's simpler to use cache_type_cwg() directly without assigning its
return value to a variable.
* The call to setup_user_features() is moved after alternatives are
patched, which will allow user feature setup code to depend on
alternative branches and allow for simplifications in subsequent
patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 4 ++-
arch/arm64/kernel/cpufeature.c | 43 ++++++++++++++---------------
arch/arm64/kernel/smp.c | 3 +-
3 files changed, 26 insertions(+), 24 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index ba7c60c9ac695..1233a8ff96a88 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -649,7 +649,9 @@ static inline bool id_aa64pfr1_mte(u64 pfr1)
return val >= ID_AA64PFR1_EL1_MTE_MTE2;
}
-void __init setup_cpu_features(void);
+void __init setup_system_features(void);
+void __init setup_user_features(void);
+
void check_local_cpu_capabilities(void);
u64 read_sanitised_ftr_reg(u32 id);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 444a73c2e6385..234cf3189bee0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3328,23 +3328,35 @@ unsigned long cpu_get_elf_hwcap2(void)
return elf_hwcap[1];
}
-static void __init setup_system_capabilities(void)
+void __init setup_system_features(void)
{
/*
- * We have finalised the system-wide safe feature
- * registers, finalise the capabilities that depend
- * on it. Also enable all the available capabilities,
- * that are not enabled already.
+ * The system-wide safe feature register values have been
+ * finalized. Finalize and log the available system capabilities.
*/
update_cpu_capabilities(SCOPE_SYSTEM);
+ if (system_uses_ttbr0_pan())
+ pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
+
+ /*
+ * Enable all the available capabilities which have not been enabled
+ * already.
+ */
enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
+
+ sve_setup();
+ sme_setup();
+
+ /*
+ * Check for sane CTR_EL0.CWG value.
+ */
+ if (!cache_type_cwg())
+ pr_warn("No Cache Writeback Granule information, assuming %d\n",
+ ARCH_DMA_MINALIGN);
}
-void __init setup_cpu_features(void)
+void __init setup_user_features(void)
{
- u32 cwg;
-
- setup_system_capabilities();
setup_elf_hwcaps(arm64_elf_hwcaps);
if (system_supports_32bit_el0()) {
@@ -3352,20 +3364,7 @@ void __init setup_cpu_features(void)
elf_hwcap_fixup();
}
- if (system_uses_ttbr0_pan())
- pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
-
- sve_setup();
- sme_setup();
minsigstksz_setup();
-
- /*
- * Check for sane CTR_EL0.CWG value.
- */
- cwg = cache_type_cwg();
- if (!cwg)
- pr_warn("No Cache Writeback Granule information, assuming %d\n",
- ARCH_DMA_MINALIGN);
}
static int enable_mismatched_32bit_el0(unsigned int cpu)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 960b98b43506d..e0cdf6820f9e9 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -431,9 +431,10 @@ static void __init hyp_mode_check(void)
void __init smp_cpus_done(unsigned int max_cpus)
{
pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
- setup_cpu_features();
+ setup_system_features();
hyp_mode_check();
apply_alternatives_all();
+ setup_user_features();
mark_linear_text_alias_ro();
}
--
2.30.2
* [PATCH v2 07/38] arm64: Fixup user features at boot time
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (5 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 06/38] arm64: Rework setup_cpu_features() Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 08/38] arm64: Split kpti_install_ng_mappings() Mark Rutland
` (31 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
For ARM64_WORKAROUND_2658417, we use a cpu_enable() callback to hide the
ID_AA64ISAR1_EL1.BF16 ID register field. This is a little awkward as
CPUs may attempt to apply the workaround concurrently, requiring that we
protect the bulk of the callback with a raw_spinlock, and requiring some
pointless work every time a CPU is subsequently hotplugged in.
This patch makes this a little simpler by handling the masking once at
boot time. A new user_feature_fixup() function is called at the start of
setup_user_features() to mask the feature, matching the style of
elf_hwcap_fixup(). The ARM64_WORKAROUND_2658417 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible.
Note that the ARM64_WORKAROUND_2658417 capability is matched with
ERRATA_MIDR_RANGE(), which implicitly gives the capability a
ARM64_CPUCAP_LOCAL_CPU_ERRATUM type, which forbids the late onlining of
a CPU with the erratum if the erratum was not present at boot time.
Therefore this patch doesn't change the behaviour for late onlining.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/kernel/cpu_errata.c | 17 -----------------
arch/arm64/kernel/cpufeature.c | 13 +++++++++++++
3 files changed, 15 insertions(+), 17 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 764ad4eef8591..07c9271b534df 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -40,6 +40,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_ARM64_BTI);
case ARM64_HAS_TLB_RANGE:
return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
+ case ARM64_WORKAROUND_2658417:
+ return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
}
return true;
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index be66e94a21bda..86aed329d42c8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -121,22 +121,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);
}
-static DEFINE_RAW_SPINLOCK(reg_user_mask_modification);
-static void __maybe_unused
-cpu_clear_bf16_from_user_emulation(const struct arm64_cpu_capabilities *__unused)
-{
- struct arm64_ftr_reg *regp;
-
- regp = get_arm64_ftr_reg(SYS_ID_AA64ISAR1_EL1);
- if (!regp)
- return;
-
- raw_spin_lock(®_user_mask_modification);
- if (regp->user_mask & ID_AA64ISAR1_EL1_BF16_MASK)
- regp->user_mask &= ~ID_AA64ISAR1_EL1_BF16_MASK;
- raw_spin_unlock(®_user_mask_modification);
-}
-
#define CAP_MIDR_RANGE(model, v_min, r_min, v_max, r_max) \
.matches = is_affected_midr_range, \
.midr_range = MIDR_RANGE(model, v_min, r_min, v_max, r_max)
@@ -727,7 +711,6 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
/* Cortex-A510 r0p0 - r1p1 */
ERRATA_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1),
MIDR_FIXED(MIDR_CPU_VAR_REV(1,1), BIT(25)),
- .cpu_enable = cpu_clear_bf16_from_user_emulation,
},
#endif
#ifdef CONFIG_AMPERE_ERRATUM_AC03_CPU_38
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 234cf3189bee0..46508399796f9 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2190,6 +2190,17 @@ static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
}
#endif /* CONFIG_ARM64_MTE */
+static void user_feature_fixup(void)
+{
+ if (cpus_have_cap(ARM64_WORKAROUND_2658417)) {
+ struct arm64_ftr_reg *regp;
+
+ regp = get_arm64_ftr_reg(SYS_ID_AA64ISAR1_EL1);
+ if (regp)
+ regp->user_mask &= ~ID_AA64ISAR1_EL1_BF16_MASK;
+ }
+}
+
static void elf_hwcap_fixup(void)
{
#ifdef CONFIG_ARM64_ERRATUM_1742098
@@ -3357,6 +3368,8 @@ void __init setup_system_features(void)
void __init setup_user_features(void)
{
+ user_feature_fixup();
+
setup_elf_hwcaps(arm64_elf_hwcaps);
if (system_supports_32bit_el0()) {
--
2.30.2
* [PATCH v2 08/38] arm64: Split kpti_install_ng_mappings()
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (6 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 07/38] arm64: Fixup user features at boot time Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 09/38] arm64: kvm: Use cpus_have_final_cap() explicitly Mark Rutland
` (30 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
The arm64_cpu_capabilities::cpu_enable callbacks are intended for
cpu-local feature enablement (e.g. poking system registers). These get
called for each online CPU when boot/system cpucaps get finalized and
enabled, and get called whenever a CPU is subsequently onlined.
For KPTI with the ARM64_UNMAP_KERNEL_AT_EL0 cpucap, we use the
kpti_install_ng_mappings() function as the cpu_enable callback. This
does a mixture of cpu-local configuration (setting VBAR_EL1 to the
appropriate trampoline vectors) and some global configuration (rewriting
the swapper page tables to use non-global mappings) that must happen at
most once.
This patch splits kpti_install_ng_mappings() into a cpu-local
cpu_enable_kpti() initialization function and a system-wide
kpti_install_ng_mappings() function. The cpu_enable_kpti() function is
responsible for selecting the necessary cpu-local vectors each time a
CPU is onlined, and the kpti_install_ng_mappings() function performs the
one-time rewrite of the translation tables to use non-global mappings.
Splitting the two makes the code a bit easier to follow and also allows
the page table rewriting code to be marked as __init such that it can be
freed after use.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 54 +++++++++++++++++++++-------------
1 file changed, 33 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 46508399796f9..5add7d06469d8 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1754,16 +1754,15 @@ void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
phys_addr_t size, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int), int flags);
-static phys_addr_t kpti_ng_temp_alloc;
+static phys_addr_t __initdata kpti_ng_temp_alloc;
-static phys_addr_t kpti_ng_pgd_alloc(int shift)
+static phys_addr_t __init kpti_ng_pgd_alloc(int shift)
{
kpti_ng_temp_alloc -= PAGE_SIZE;
return kpti_ng_temp_alloc;
}
-static void
-kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+static int __init __kpti_install_ng_mappings(void *__unused)
{
typedef void (kpti_remap_fn)(int, int, phys_addr_t, unsigned long);
extern kpti_remap_fn idmap_kpti_install_ng_mappings;
@@ -1776,20 +1775,6 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
pgd_t *kpti_ng_temp_pgd;
u64 alloc = 0;
- if (__this_cpu_read(this_cpu_vector) == vectors) {
- const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
-
- __this_cpu_write(this_cpu_vector, v);
- }
-
- /*
- * We don't need to rewrite the page-tables if either we've done
- * it already or we have KASLR enabled and therefore have not
- * created any global mappings at all.
- */
- if (arm64_use_ng_mappings)
- return;
-
remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
if (!cpu) {
@@ -1826,14 +1811,39 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
free_pages(alloc, order);
arm64_use_ng_mappings = true;
}
+
+ return 0;
+}
+
+static void __init kpti_install_ng_mappings(void)
+{
+ /*
+ * We don't need to rewrite the page-tables if either we've done
+ * it already or we have KASLR enabled and therefore have not
+ * created any global mappings at all.
+ */
+ if (arm64_use_ng_mappings)
+ return;
+
+ stop_machine(__kpti_install_ng_mappings, NULL, cpu_online_mask);
}
+
#else
-static void
-kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+static inline void kpti_install_ng_mappings(void)
{
}
#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+static void cpu_enable_kpti(struct arm64_cpu_capabilities const *cap)
+{
+ if (__this_cpu_read(this_cpu_vector) == vectors) {
+ const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);
+
+ __this_cpu_write(this_cpu_vector, v);
+ }
+
+}
+
static int __init parse_kpti(char *str)
{
bool enabled;
@@ -2362,7 +2372,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
.type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE,
- .cpu_enable = kpti_install_ng_mappings,
+ .cpu_enable = cpu_enable_kpti,
.matches = unmap_kernel_at_el0,
/*
* The ID feature fields below are used to indicate that
@@ -3355,6 +3365,8 @@ void __init setup_system_features(void)
*/
enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
+ kpti_install_ng_mappings();
+
sve_setup();
sme_setup();
--
2.30.2
* [PATCH v2 09/38] arm64: kvm: Use cpus_have_final_cap() explicitly
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (7 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 08/38] arm64: Split kpti_install_ng_mappings() Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME Mark Rutland
` (29 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
Much of the arm64 KVM code uses cpus_have_const_cap() to check for
cpucaps, but this is unnecessary and it would be preferable to use
cpus_have_final_cap().
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
KVM is initialized after cpucaps have been finalized and alternatives
have been patched. Since commit:
d86de40decaa14e6 ("arm64: cpufeature: upgrade hyp caps to final")
... use of cpus_have_const_cap() in hyp code is automatically converted
to use cpus_have_final_cap():
| static __always_inline bool cpus_have_const_cap(int num)
| {
| 	if (is_hyp_code())
| 		return cpus_have_final_cap(num);
| 	else if (system_capabilities_finalized())
| 		return __cpus_have_const_cap(num);
| 	else
| 		return cpus_have_cap(num);
| }
Thus, converting hyp code to use cpus_have_final_cap() directly will not
result in any functional change.
Non-hyp KVM code is also not executed until cpucaps have been finalized,
and it would be preferable to extend the same treatment to this code and
use cpus_have_final_cap() directly.
This patch converts instances of cpus_have_const_cap() in KVM-only code
over to cpus_have_final_cap(). As all of this code runs after cpucaps
have been finalized, there should be no functional change as a result of
this patch, but the redundant instructions generated by
cpus_have_const_cap() will be removed from the non-hyp KVM code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Oliver Upton <oliver.upton@linux.dev>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/kvm_emulate.h | 4 ++--
arch/arm64/include/asm/kvm_host.h | 2 +-
arch/arm64/include/asm/kvm_mmu.h | 2 +-
arch/arm64/kvm/arm.c | 10 +++++-----
arch/arm64/kvm/guest.c | 4 ++--
arch/arm64/kvm/hyp/pgtable.c | 2 +-
arch/arm64/kvm/mmu.c | 2 +-
arch/arm64/kvm/sys_regs.c | 2 +-
arch/arm64/kvm/vgic/vgic-v3.c | 2 +-
9 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3d6725ff0bf6d..cbd2f163a67d2 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -71,14 +71,14 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
if (has_vhe() || has_hvhe())
vcpu->arch.hcr_el2 |= HCR_E2H;
- if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
+ if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
/* route synchronous external abort exceptions to EL2 */
vcpu->arch.hcr_el2 |= HCR_TEA;
/* trap error record accesses */
vcpu->arch.hcr_el2 |= HCR_TERR;
}
- if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) {
+ if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) {
vcpu->arch.hcr_el2 |= HCR_FWB;
} else {
/*
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index af06ccb7ee343..e64d64e6ad449 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1052,7 +1052,7 @@ static inline void kvm_init_host_cpu_context(struct kvm_cpu_context *cpu_ctxt)
static inline bool kvm_system_needs_idmapped_vectors(void)
{
- return cpus_have_const_cap(ARM64_SPECTRE_V3A);
+ return cpus_have_final_cap(ARM64_SPECTRE_V3A);
}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 96a80e8f62263..27810667dec7d 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -218,7 +218,7 @@ static inline void __clean_dcache_guest_page(void *va, size_t size)
* faulting in pages. Furthermore, FWB implies IDC, so cleaning to
* PoU is not required either in this case.
*/
- if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+ if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
return;
kvm_flush_dcache_to_poc(va, size);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 4866b3f7b4ea3..4ea6c22250a51 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -284,7 +284,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = kvm_arm_pvtime_supported();
break;
case KVM_CAP_ARM_EL1_32BIT:
- r = cpus_have_const_cap(ARM64_HAS_32BIT_EL1);
+ r = cpus_have_final_cap(ARM64_HAS_32BIT_EL1);
break;
case KVM_CAP_GUEST_DEBUG_HW_BPS:
r = get_num_brps();
@@ -296,7 +296,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = kvm_arm_support_pmu_v3();
break;
case KVM_CAP_ARM_INJECT_SERROR_ESR:
- r = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);
+ r = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
break;
case KVM_CAP_ARM_VM_IPA_SIZE:
r = get_kvm_ipa_limit();
@@ -1207,7 +1207,7 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features))
return 0;
- if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
+ if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1))
return -EINVAL;
/* MTE is incompatible with AArch32 */
@@ -1777,7 +1777,7 @@ static void hyp_install_host_vector(void)
* Call initialization code, and switch to the full blown HYP code.
* If the cpucaps haven't been finalized yet, something has gone very
* wrong, and hyp will crash and burn when it uses any
- * cpus_have_const_cap() wrapper.
+ * cpus_have_*_cap() wrapper.
*/
BUG_ON(!system_capabilities_finalized());
params = this_cpu_ptr_nvhe_sym(kvm_init_params);
@@ -2310,7 +2310,7 @@ static int __init init_hyp_mode(void)
if (is_protected_kvm_enabled()) {
if (IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL) &&
- cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH))
+ cpus_have_final_cap(ARM64_HAS_ADDRESS_AUTH))
pkvm_hyp_init_ptrauth();
init_cpu_logical_map();
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 95f6945c44325..a83c1d56a2940 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -815,7 +815,7 @@ int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu,
struct kvm_vcpu_events *events)
{
events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE);
- events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN);
+ events->exception.serror_has_esr = cpus_have_final_cap(ARM64_HAS_RAS_EXTN);
if (events->exception.serror_pending && events->exception.serror_has_esr)
events->exception.serror_esr = vcpu_get_vsesr(vcpu);
@@ -837,7 +837,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
bool ext_dabt_pending = events->exception.ext_dabt_pending;
if (serror_pending && has_esr) {
- if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN))
+ if (!cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
return -EINVAL;
if (!((events->exception.serror_esr) & ~ESR_ELx_ISS_MASK))
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index f155b8c9e98c7..799d2c204bb8a 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -664,7 +664,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
static bool stage2_has_fwb(struct kvm_pgtable *pgt)
{
- if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+ if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
return false;
return !(pgt->flags & KVM_PGTABLE_S2_NOFWB);
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 482280fe22d7c..e6061fd174b0b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1578,7 +1578,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (device)
prot |= KVM_PGTABLE_PROT_DEVICE;
- else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
+ else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
prot |= KVM_PGTABLE_PROT_X;
/*
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e92ec810d4494..9318b6939b788 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -207,7 +207,7 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
* CPU left in the system, and certainly not from non-secure
* software).
*/
- if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
+ if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
kvm_set_way_flush(vcpu);
return true;
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 3dfc8b84e03e6..9465d3706ab9b 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -684,7 +684,7 @@ int vgic_v3_probe(const struct gic_kvm_info *info)
if (kvm_vgic_global_state.vcpu_base == 0)
kvm_info("disabling GICv2 emulation\n");
- if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_30115)) {
+ if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_30115)) {
group0_trap = true;
group1_trap = true;
}
--
2.30.2
* [PATCH v2 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (8 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 09/38] arm64: kvm: Use cpus_have_final_cap() explicitly Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 11/38] arm64: use build-time assertions for cpucap ordering Mark Rutland
` (28 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
When a CPU is onlined we first probe for supported features and
properties, and then we subsequently enable features that have been
detected. This is a little problematic for SVE and SME, as some
properties (e.g. vector lengths) cannot be probed while they are
disabled. Due to this, the code probing for SVE properties has to enable
SVE for EL1 prior to probing, and the code probing for SME properties
has to enable SME for EL1 prior to probing. We never disable SVE or SME
for EL1 after probing.
It would be a little nicer to transiently enable SVE and SME during
probing, leaving them both disabled unless explicitly enabled, as this
would make it much easier to catch unintentional usage (e.g. when they
are not present system-wide).
This patch reworks the SVE and SME feature probing code to only
transiently enable support at EL1, disabling after probing is complete.
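With this change, each probing site is bracketed along the following
lines (an illustrative sketch of the pattern used in the hunks below):
| unsigned long cpacr = cpacr_save_enable_kernel_sve();
|
| /* ... probe ZCR and the supported vector lengths ... */
|
| cpacr_restore(cpacr);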
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/fpsimd.h | 26 ++++++++++++++++++++++++++
arch/arm64/kernel/cpufeature.c | 24 ++++++++++++++++++++++--
arch/arm64/kernel/fpsimd.c | 7 ++-----
3 files changed, 50 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 8df46f186c64b..bd7bea92dae07 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -32,6 +32,32 @@
#define VFP_STATE_SIZE ((32 * 8) + 4)
#endif
+static inline unsigned long cpacr_save_enable_kernel_sve(void)
+{
+ unsigned long old = read_sysreg(cpacr_el1);
+ unsigned long set = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_ZEN_EL1EN;
+
+ write_sysreg(old | set, cpacr_el1);
+ isb();
+ return old;
+}
+
+static inline unsigned long cpacr_save_enable_kernel_sme(void)
+{
+ unsigned long old = read_sysreg(cpacr_el1);
+ unsigned long set = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_SMEN_EL1EN;
+
+ write_sysreg(old | set, cpacr_el1);
+ isb();
+ return old;
+}
+
+static inline void cpacr_restore(unsigned long cpacr)
+{
+ write_sysreg(cpacr, cpacr_el1);
+ isb();
+}
+
/*
* When we defined the maximum SVE vector length we defined the ABI so
* that the maximum vector length included all the reserved for future
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5add7d06469d8..e3e0d3d3bd6b7 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1040,13 +1040,19 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
if (IS_ENABLED(CONFIG_ARM64_SVE) &&
id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sve();
+
info->reg_zcr = read_zcr_features();
init_cpu_ftr_reg(SYS_ZCR_EL1, info->reg_zcr);
vec_init_vq_map(ARM64_VEC_SVE);
+
+ cpacr_restore(cpacr);
}
if (IS_ENABLED(CONFIG_ARM64_SME) &&
id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
info->reg_smcr = read_smcr_features();
/*
* We mask out SMPS since even if the hardware
@@ -1056,6 +1062,8 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
init_cpu_ftr_reg(SYS_SMCR_EL1, info->reg_smcr);
vec_init_vq_map(ARM64_VEC_SME);
+
+ cpacr_restore(cpacr);
}
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
@@ -1291,6 +1299,8 @@ void update_cpu_features(int cpu,
if (IS_ENABLED(CONFIG_ARM64_SVE) &&
id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sve();
+
info->reg_zcr = read_zcr_features();
taint |= check_update_ftr_reg(SYS_ZCR_EL1, cpu,
info->reg_zcr, boot->reg_zcr);
@@ -1298,10 +1308,14 @@ void update_cpu_features(int cpu,
/* Probe vector lengths */
if (!system_capabilities_finalized())
vec_update_vq_map(ARM64_VEC_SVE);
+
+ cpacr_restore(cpacr);
}
if (IS_ENABLED(CONFIG_ARM64_SME) &&
id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
info->reg_smcr = read_smcr_features();
/*
* We mask out SMPS since even if the hardware
@@ -1315,6 +1329,8 @@ void update_cpu_features(int cpu,
/* Probe vector lengths */
if (!system_capabilities_finalized())
vec_update_vq_map(ARM64_VEC_SME);
+
+ cpacr_restore(cpacr);
}
/*
@@ -3174,6 +3190,8 @@ static void verify_local_elf_hwcaps(void)
static void verify_sve_features(void)
{
+ unsigned long cpacr = cpacr_save_enable_kernel_sve();
+
u64 safe_zcr = read_sanitised_ftr_reg(SYS_ZCR_EL1);
u64 zcr = read_zcr_features();
@@ -3186,11 +3204,13 @@ static void verify_sve_features(void)
cpu_die_early();
}
- /* Add checks on other ZCR bits here if necessary */
+ cpacr_restore(cpacr);
}
static void verify_sme_features(void)
{
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
u64 safe_smcr = read_sanitised_ftr_reg(SYS_SMCR_EL1);
u64 smcr = read_smcr_features();
@@ -3203,7 +3223,7 @@ static void verify_sme_features(void)
cpu_die_early();
}
- /* Add checks on other SMCR bits here if necessary */
+ cpacr_restore(cpacr);
}
static void verify_hyp_capabilities(void)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 91e44ac7150f9..601b973f90ad2 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1174,7 +1174,7 @@ void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
* Read the pseudo-ZCR used by cpufeatures to identify the supported SVE
* vector length.
*
- * Use only if SVE is present.
+ * Use only if SVE is present and enabled.
* This function clobbers the SVE vector length.
*/
u64 read_zcr_features(void)
@@ -1183,7 +1183,6 @@ u64 read_zcr_features(void)
* Set the maximum possible VL, and write zeroes to all other
* bits to see if they stick.
*/
- sve_kernel_enable(NULL);
write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1);
/* Return LEN value that would be written to get the maximum VL */
@@ -1337,13 +1336,11 @@ void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
* Read the pseudo-SMCR used by cpufeatures to identify the supported
* vector length.
*
- * Use only if SME is present.
+ * Use only if SME is present and enabled.
* This function clobbers the SME vector length.
*/
u64 read_smcr_features(void)
{
- sme_kernel_enable(NULL);
-
/*
* Set the maximum possible VL.
*/
--
2.30.2
* [PATCH v2 11/38] arm64: use build-time assertions for cpucap ordering
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (9 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 11/38] arm64: Use " Mark Rutland
` (27 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
Both sme2_kernel_enable() and fa64_kernel_enable() need to run after
sme_kernel_enable(). This happens to be true today as ARM64_SME has a
lower index than either ARM64_SME2 or ARM64_SME_FA64, and both functions
have a comment to this effect.
It would be nicer to have a build-time assertion like we have for
can_use_gic_priorities() and has_gic_prio_relaxed_sync(), as that way
it will be harder to miss any potential breakage.
This patch replaces the comments with build-time assertions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/kernel/fpsimd.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 601b973f90ad2..48338cf8f90bd 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1310,23 +1310,21 @@ void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
isb();
}
-/*
- * This must be called after sme_kernel_enable(), we rely on the
- * feature table being sorted to ensure this.
- */
void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
{
+ /* This must be enabled after SME */
+ BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME);
+
/* Allow use of ZT0 */
write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK,
SYS_SMCR_EL1);
}
-/*
- * This must be called after sme_kernel_enable(), we rely on the
- * feature table being sorted to ensure this.
- */
void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
{
+ /* This must be enabled after SME */
+ BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME);
+
/* Allow use of FA64 */
write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK,
SYS_SMCR_EL1);
--
2.30.2
* [PATCH v2 11/38] arm64: Use build-time assertions for cpucap ordering
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (10 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 11/38] arm64: use build-time assertions for cpucap ordering Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 12/38] arm64: Rename SVE/SME cpu_enable functions Mark Rutland
` (26 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
Both sme2_kernel_enable() and fa64_kernel_enable() need to run after
sme_kernel_enable(). This happens to be true today as ARM64_SME has a
lower index than either ARM64_SME2 or ARM64_SME_FA64, and both functions
have a comment to this effect.
It would be nicer to have a build-time assertion like we have for
can_use_gic_priorities() and has_gic_prio_relaxed_sync(), as that way
it will be harder to miss any potential breakage.
This patch replaces the comments with build-time assertions.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/kernel/fpsimd.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 601b973f90ad2..48338cf8f90bd 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1310,23 +1310,21 @@ void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
isb();
}
-/*
- * This must be called after sme_kernel_enable(), we rely on the
- * feature table being sorted to ensure this.
- */
void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
{
+ /* This must be enabled after SME */
+ BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME);
+
/* Allow use of ZT0 */
write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_EZT0_MASK,
SYS_SMCR_EL1);
}
-/*
- * This must be called after sme_kernel_enable(), we rely on the
- * feature table being sorted to ensure this.
- */
void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
{
+ /* This must be enabled after SME */
+ BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME);
+
/* Allow use of FA64 */
write_sysreg_s(read_sysreg_s(SYS_SMCR_EL1) | SMCR_ELx_FA64_MASK,
SYS_SMCR_EL1);
--
2.30.2
* [PATCH v2 12/38] arm64: Rename SVE/SME cpu_enable functions
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (11 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 11/38] arm64: Use " Mark Rutland
@ 2023-10-05 9:49 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 13/38] arm64: Use a positive cpucap for FP/SIMD Mark Rutland
` (25 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:49 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
The arm64_cpu_capabilities::cpu_enable() callbacks for SVE, SME, SME2,
and FA64 are named with an unusual "${feature}_kernel_enable" pattern
rather than the much more common "cpu_enable_${feature}". Now that we
only use these as cpu_enable() callbacks, it would be nice to have them
match the usual scheme.
This patch renames the cpu_enable() callbacks to match this scheme. At
the same time, the comment above cpu_enable_sve() is removed for
consistency with the other cpu_enable() callbacks.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/fpsimd.h | 8 ++++----
arch/arm64/kernel/cpufeature.c | 8 ++++----
arch/arm64/kernel/fpsimd.c | 12 ++++--------
3 files changed, 12 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index bd7bea92dae07..0cbaa06b394a0 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -149,10 +149,10 @@ extern void sme_save_state(void *state, int zt);
extern void sme_load_state(void const *state, int zt);
struct arm64_cpu_capabilities;
-extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused);
-extern void sme_kernel_enable(const struct arm64_cpu_capabilities *__unused);
-extern void sme2_kernel_enable(const struct arm64_cpu_capabilities *__unused);
-extern void fa64_kernel_enable(const struct arm64_cpu_capabilities *__unused);
+extern void cpu_enable_sve(const struct arm64_cpu_capabilities *__unused);
+extern void cpu_enable_sme(const struct arm64_cpu_capabilities *__unused);
+extern void cpu_enable_sme2(const struct arm64_cpu_capabilities *__unused);
+extern void cpu_enable_fa64(const struct arm64_cpu_capabilities *__unused);
extern u64 read_zcr_features(void);
extern u64 read_smcr_features(void);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e3e0d3d3bd6b7..fb828f8c49e31 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2425,7 +2425,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.desc = "Scalable Vector Extension",
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SVE,
- .cpu_enable = sve_kernel_enable,
+ .cpu_enable = cpu_enable_sve,
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, SVE, IMP)
},
@@ -2678,7 +2678,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SME,
.matches = has_cpuid_feature,
- .cpu_enable = sme_kernel_enable,
+ .cpu_enable = cpu_enable_sme,
ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, IMP)
},
/* FA64 should be sorted after the base SME capability */
@@ -2687,7 +2687,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SME_FA64,
.matches = has_cpuid_feature,
- .cpu_enable = fa64_kernel_enable,
+ .cpu_enable = cpu_enable_fa64,
ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP)
},
{
@@ -2695,7 +2695,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.type = ARM64_CPUCAP_SYSTEM_FEATURE,
.capability = ARM64_SME2,
.matches = has_cpuid_feature,
- .cpu_enable = sme2_kernel_enable,
+ .cpu_enable = cpu_enable_sme2,
ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2)
},
#endif /* CONFIG_ARM64_SME */
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 48338cf8f90bd..45ea9cabbaa41 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1160,11 +1160,7 @@ static void __init sve_efi_setup(void)
panic("Cannot allocate percpu memory for EFI SVE save/restore");
}
-/*
- * Enable SVE for EL1.
- * Intended for use by the cpufeatures code during CPU boot.
- */
-void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
+void cpu_enable_sve(const struct arm64_cpu_capabilities *__always_unused p)
{
write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_ZEN_EL1EN, CPACR_EL1);
isb();
@@ -1295,7 +1291,7 @@ static void sme_free(struct task_struct *task)
task->thread.sme_state = NULL;
}
-void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
+void cpu_enable_sme(const struct arm64_cpu_capabilities *__always_unused p)
{
/* Set priority for all PEs to architecturally defined minimum */
write_sysreg_s(read_sysreg_s(SYS_SMPRI_EL1) & ~SMPRI_EL1_PRIORITY_MASK,
@@ -1310,7 +1306,7 @@ void sme_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
isb();
}
-void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
+void cpu_enable_sme2(const struct arm64_cpu_capabilities *__always_unused p)
{
/* This must be enabled after SME */
BUILD_BUG_ON(ARM64_SME2 <= ARM64_SME);
@@ -1320,7 +1316,7 @@ void sme2_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
SYS_SMCR_EL1);
}
-void fa64_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p)
+void cpu_enable_fa64(const struct arm64_cpu_capabilities *__always_unused p)
{
/* This must be enabled after SME */
BUILD_BUG_ON(ARM64_SME_FA64 <= ARM64_SME);
--
2.30.2
* [PATCH v2 13/38] arm64: Use a positive cpucap for FP/SIMD
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (12 preceding siblings ...)
2023-10-05 9:49 ` [PATCH v2 12/38] arm64: Rename SVE/SME cpu_enable functions Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 14/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_{ADDRESS,GENERIC}_AUTH Mark Rutland
` (24 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
Currently we have a negative cpucap which describes the *absence* of
FP/SIMD rather than *presence* of FP/SIMD. This largely works, but is
somewhat awkward relative to other cpucaps that describe the presence of
a feature, and it would be nicer to have a cpucap which describes the
presence of FP/SIMD:
* This will allow the cpucap to be treated as a standard
ARM64_CPUCAP_SYSTEM_FEATURE, which can be detected with the standard
has_cpuid_feature() function and ARM64_CPUID_FIELDS() description.
* This ensures that the cpucap will only transition from not-present to
present, reducing the risk of unintentional and/or unsafe usage of
FP/SIMD before cpucaps are finalized.
* This will allow using arm64_cpu_capabilities::cpu_enable() to enable
the use of FP/SIMD later, with FP/SIMD being disabled at boot time
otherwise. This will ensure that any unintentional and/or unsafe usage
of FP/SIMD prior to this is trapped, and will ensure that FP/SIMD is
never unintentionally enabled for userspace in mismatched big.LITTLE
systems.
This patch replaces the negative ARM64_HAS_NO_FPSIMD cpucap with a
positive ARM64_HAS_FPSIMD cpucap, making changes as described above.
Note that as FP/SIMD will now be trapped when not supported system-wide,
do_fpsimd_acc() must handle these traps in the same way as for SVE and
SME. The commentary in fpsimd_restore_current_state() is updated to
describe the new scheme.
No users of system_supports_fpsimd() need to know that FP/SIMD is
available prior to alternatives being patched, so this is updated to
use alternative_has_cap_likely() to check for the ARM64_HAS_FPSIMD
cpucap, without generating code to test the system_cpucaps bitmap.
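Concretely, the helper becomes a single patched branch, as in the hunk
below; note that before alternatives are patched the branch reports
FP/SIMD as absent, which is fine for all current callers:
| static __always_inline bool system_supports_fpsimd(void)
| {
| 	return alternative_has_cap_likely(ARM64_HAS_FPSIMD);
| }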
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 +-
arch/arm64/include/asm/fpsimd.h | 1 +
arch/arm64/kernel/cpufeature.c | 18 ++++--------
arch/arm64/kernel/fpsimd.c | 44 +++++++++++++++++++++++------
arch/arm64/mm/proc.S | 3 +-
arch/arm64/tools/cpucaps | 2 +-
6 files changed, 44 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1233a8ff96a88..a1698945916ed 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -760,7 +760,7 @@ static inline bool system_supports_mixed_endian(void)
static __always_inline bool system_supports_fpsimd(void)
{
- return !cpus_have_const_cap(ARM64_HAS_NO_FPSIMD);
+ return alternative_has_cap_likely(ARM64_HAS_FPSIMD);
}
static inline bool system_uses_hw_pan(void)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 0cbaa06b394a0..c43ae9c013ec4 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -149,6 +149,7 @@ extern void sme_save_state(void *state, int zt);
extern void sme_load_state(void const *state, int zt);
struct arm64_cpu_capabilities;
+extern void cpu_enable_fpsimd(const struct arm64_cpu_capabilities *__unused);
extern void cpu_enable_sve(const struct arm64_cpu_capabilities *__unused);
extern void cpu_enable_sme(const struct arm64_cpu_capabilities *__unused);
extern void cpu_enable_sme2(const struct arm64_cpu_capabilities *__unused);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index fb828f8c49e31..c493fa582a190 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1580,14 +1580,6 @@ static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry, int _
MIDR_CPU_VAR_REV(1, MIDR_REVISION_MASK));
}
-static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unused)
-{
- u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
-
- return cpuid_feature_extract_signed_field(pfr0,
- ID_AA64PFR0_EL1_FP_SHIFT) < 0;
-}
-
static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
int scope)
{
@@ -2398,11 +2390,11 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, CSV3, IMP)
},
{
- /* FP/SIMD is not implemented */
- .capability = ARM64_HAS_NO_FPSIMD,
- .type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE,
- .min_field_value = 0,
- .matches = has_no_fpsimd,
+ .capability = ARM64_HAS_FPSIMD,
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .matches = has_cpuid_feature,
+ .cpu_enable = cpu_enable_fpsimd,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, FP, IMP)
},
#ifdef CONFIG_ARM64_PMEM
{
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 45ea9cabbaa41..f1c9eadea200d 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1160,6 +1160,13 @@ static void __init sve_efi_setup(void)
panic("Cannot allocate percpu memory for EFI SVE save/restore");
}
+void cpu_enable_fpsimd(const struct arm64_cpu_capabilities *__always_unused p)
+{
+ unsigned long enable = CPACR_EL1_FPEN_EL1EN | CPACR_EL1_FPEN_EL0EN;
+ write_sysreg(read_sysreg(CPACR_EL1) | enable, CPACR_EL1);
+ isb();
+}
+
void cpu_enable_sve(const struct arm64_cpu_capabilities *__always_unused p)
{
write_sysreg(read_sysreg(CPACR_EL1) | CPACR_EL1_ZEN_EL1EN, CPACR_EL1);
@@ -1520,8 +1527,17 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
*/
void do_fpsimd_acc(unsigned long esr, struct pt_regs *regs)
{
- /* TODO: implement lazy context saving/restoring */
- WARN_ON(1);
+ /* Even if we chose not to use FPSIMD, the hardware could still trap: */
+ if (!system_supports_fpsimd()) {
+ force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
+ return;
+ }
+
+ /*
+ * When FPSIMD is enabled, we should never take a trap unless something
+ * has gone very wrong.
+ */
+ BUG();
}
/*
@@ -1762,13 +1778,23 @@ void fpsimd_bind_state_to_cpu(struct cpu_fp_state *state)
void fpsimd_restore_current_state(void)
{
/*
- * For the tasks that were created before we detected the absence of
- * FP/SIMD, the TIF_FOREIGN_FPSTATE could be set via fpsimd_thread_switch(),
- * e.g, init. This could be then inherited by the children processes.
- * If we later detect that the system doesn't support FP/SIMD,
- * we must clear the flag for all the tasks to indicate that the
- * FPSTATE is clean (as we can't have one) to avoid looping for ever in
- * do_notify_resume().
+ * TIF_FOREIGN_FPSTATE is set on the init task and copied by
+ * arch_dup_task_struct() regardless of whether FP/SIMD is detected.
+ * Thus user threads can have this set even when FP/SIMD hasn't been
+ * detected.
+ *
+ * When FP/SIMD is detected, begin_new_exec() will set
+ * TIF_FOREIGN_FPSTATE via flush_thread() -> fpsimd_flush_thread(),
+ * and fpsimd_thread_switch() will set TIF_FOREIGN_FPSTATE when
+ * switching tasks. We detect FP/SIMD before we exec the first user
+ * process, ensuring this has TIF_FOREIGN_FPSTATE set and
+ * do_notify_resume() will call fpsimd_restore_current_state() to
+ * install the user FP/SIMD context.
+ *
+ * When FP/SIMD is not detected, nothing else will clear or set
+ * TIF_FOREIGN_FPSTATE prior to the first return to userspace, and
+ * we must clear TIF_FOREIGN_FPSTATE to avoid do_notify_resume()
+ * looping forever calling fpsimd_restore_current_state().
*/
if (!system_supports_fpsimd()) {
clear_thread_flag(TIF_FOREIGN_FPSTATE);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 14fdf645edc88..f66c37a1610e1 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -405,8 +405,7 @@ SYM_FUNC_START(__cpu_setup)
tlbi vmalle1 // Invalidate local TLB
dsb nsh
- mov x1, #3 << 20
- msr cpacr_el1, x1 // Enable FP/ASIMD
+ msr cpacr_el1, xzr // Reset cpacr_el1
mov x1, #1 << 12 // Reset mdscr_el1 and disable
msr mdscr_el1, x1 // access to the DCC from EL0
isb // Unmask debug exceptions now,
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index c3f06fdef6099..1adf562c914d9 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -27,6 +27,7 @@ HAS_ECV_CNTPOFF
HAS_EPAN
HAS_EVT
HAS_FGT
+HAS_FPSIMD
HAS_GENERIC_AUTH
HAS_GENERIC_AUTH_ARCH_QARMA3
HAS_GENERIC_AUTH_ARCH_QARMA5
@@ -39,7 +40,6 @@ HAS_LDAPR
HAS_LSE_ATOMICS
HAS_MOPS
HAS_NESTED_VIRT
-HAS_NO_FPSIMD
HAS_NO_HW_PREFETCH
HAS_PAN
HAS_S1PIE
--
2.30.2
* [PATCH v2 14/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_{ADDRESS,GENERIC}_AUTH
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (13 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 13/38] arm64: Use a positive cpucap for FP/SIMD Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 15/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTL Mark Rutland
` (23 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_supports_address_auth() and system_supports_generic_auth() we
use cpus_have_const_cap() to check for ARM64_HAS_ADDRESS_AUTH and
ARM64_HAS_GENERIC_AUTH respectively, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_HAS_ADDRESS_AUTH cpucap is a boot cpu feature which is
detected and patched early on the boot CPU before any pointer
authentication keys are enabled via their respective SCTLR_ELx.EN* bits.
Nothing which uses system_supports_address_auth() is called before the
boot alternatives are patched. Thus it is safe for
system_supports_address_auth() to use cpus_have_final_boot_cap() to
check for ARM64_HAS_ADDRESS_AUTH.
The ARM64_HAS_GENERIC_AUTH cpucap is a system feature which is detected
on all CPUs, then finalized and patched under
setup_system_capabilities(). We use system_supports_generic_auth() in a
few places:
* The pac_generic_keys_get() and pac_generic_keys_set() functions are
only reachable from system calls once userspace is up and running. As
cpucaps are finalized long before userspace runs, these can safely use
alternative_has_cap_*() or cpus_have_final_cap().
* The ptrauth_prctl_reset_keys() function is only reachable from system
calls once userspace is up and running. As cpucaps are finalized long
before userspace runs, this can safely use alternative_has_cap_*() or
cpus_have_final_cap().
* The ptrauth_keys_install_user() function is used during
context-switch. This is called prior to alternatives being applied,
and so cannot use cpus_have_final_cap(), but as this only needs to
switch the APGA key for userspace tasks, it's safe to use
alternative_has_cap_*().
* The ptrauth_keys_init_user() function is used to initialize userspace
keys, and is only reachable after system cpucaps have been finalized
and patched. Thus this can safely use alternative_has_cap_*() or
cpus_have_final_cap().
* The system_has_full_ptr_auth() helper function is only used by KVM
code, which is only reachable after system cpucaps have been finalized
and patched. Thus this can safely use alternative_has_cap_*() or
cpus_have_final_cap().
This patch modifies system_supports_address_auth() to use
cpus_have_final_boot_cap() to check ARM64_HAS_ADDRESS_AUTH, and modifies
system_supports_generic_auth() to use alternative_has_cap_unlikely() to
check ARM64_HAS_GENERIC_AUTH. In either case this will avoid generating
code to test the system_cpucaps bitmap and should be better for all
subsequent calls at runtime. The use of cpus_have_final_boot_cap() will
make it easier to spot if code is changed such that these run before
the relevant cpucap is guaranteed to have been finalized.
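For reference, cpus_have_final_boot_cap() is roughly of the following
form (an illustrative sketch; boot_capabilities_finalized() stands for
whichever predicate checks that the boot cpucaps have been finalized),
so a caller moved before the boot cpucaps are finalized is caught
loudly:
| static __always_inline bool cpus_have_final_boot_cap(int num)
| {
| 	if (boot_capabilities_finalized())
| 		return alternative_has_cap_unlikely(num);
| 	else
| 		BUG();
| }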
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index a1698945916ed..dfdedbdcc1151 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -806,12 +806,12 @@ static __always_inline bool system_supports_cnp(void)
static inline bool system_supports_address_auth(void)
{
- return cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+ return cpus_have_final_boot_cap(ARM64_HAS_ADDRESS_AUTH);
}
static inline bool system_supports_generic_auth(void)
{
- return cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
+ return alternative_has_cap_unlikely(ARM64_HAS_GENERIC_AUTH);
}
static inline bool system_has_full_ptr_auth(void)
--
2.30.2
* [PATCH v2 15/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTL
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (14 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 14/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_{ADDRESS,GENERIC}_AUTH Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 16/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI Mark Rutland
` (22 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In __tlbi_level() we use cpus_have_const_cap() to check for
ARM64_HAS_ARMv8_4_TTL, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
In the window between detecting the ARM64_HAS_ARMv8_4_TTL cpucap and
patching alternative branches, we do not perform any TLB invalidation,
and even if we were to perform TLB invalidation here it would not be
functionally necessary to optimize this by using the TTL hint. Hence
there's no need to use cpus_have_const_cap(), and
alternative_has_cap_unlikely() is sufficient.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/tlbflush.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b149cf9f91bc9..53ed194626e1a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -105,7 +105,7 @@ static inline unsigned long get_trans_granule(void)
#define __tlbi_level(op, addr, level) do { \
u64 arg = addr; \
\
- if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) && \
+ if (alternative_has_cap_unlikely(ARM64_HAS_ARMv8_4_TTL) && \
level) { \
u64 ttl = level & 3; \
ttl |= get_trans_granule() << 2; \
--
2.30.2
* [PATCH v2 16/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (15 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 15/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTL Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 17/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC Mark Rutland
` (21 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_supports_bti() we use cpus_have_const_cap() to check for
ARM64_HAS_BTI, but this is not necessary and alternative_has_cap_*() or
cpus_have_final_*cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
When CONFIG_ARM64_BTI_KERNEL=y, the ARM64_HAS_BTI cpucap is a strict
boot cpu feature which is detected and patched early on the boot cpu.
All uses guarded by CONFIG_ARM64_BTI_KERNEL happen after the boot CPU
has detected ARM64_HAS_BTI and patched boot alternatives, and hence can
safely use alternative_has_cap_*() or cpus_have_final_boot_cap().
Regardless of CONFIG_ARM64_BTI_KERNEL, all other uses of ARM64_HAS_BTI
happen after system capabilities have been finalized and alternatives
have been patched. Hence these can safely use alternative_has_cap_*() or
cpus_have_final_cap().
This patch splits system_supports_bti() into system_supports_bti() and
system_supports_bti_kernel(), with the former handling where the cpucap
affects userspace functionality, and the latter handling where the
cpucap affects kernel functionality. The use of cpus_have_const_cap() is
replaced by cpus_have_final_cap() in system_supports_bti(), and
cpus_have_final_boot_cap() in system_supports_bti_kernel(). This will
avoid generating code to test the system_cpucaps bitmap and should be
better for all subsequent calls at runtime. The use of
cpus_have_final_cap() and cpus_have_final_boot_cap() will make it easier
to spot if code is changed such that these run before the ARM64_HAS_BTI
cpucap is guaranteed to have been finalized.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 8 +++++++-
arch/arm64/include/asm/pgtable-prot.h | 6 +-----
arch/arm64/kernel/efi.c | 3 +--
arch/arm64/kernel/vdso.c | 2 +-
arch/arm64/kvm/hyp/pgtable.c | 2 +-
5 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index dfdedbdcc1151..caa54ddf0bcdc 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -837,7 +837,13 @@ static inline bool system_has_prio_mask_debugging(void)
static inline bool system_supports_bti(void)
{
- return cpus_have_const_cap(ARM64_BTI);
+ return cpus_have_final_cap(ARM64_BTI);
+}
+
+static inline bool system_supports_bti_kernel(void)
+{
+ return IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
+ cpus_have_final_boot_cap(ARM64_BTI);
}
static inline bool system_supports_tlb_range(void)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index eed814b00a389..e9624f6326dde 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -75,11 +75,7 @@ extern bool arm64_use_ng_mappings;
* If we have userspace only BTI we don't want to mark kernel pages
* guarded even if the system does support BTI.
*/
-#ifdef CONFIG_ARM64_BTI_KERNEL
-#define PTE_MAYBE_GP (system_supports_bti() ? PTE_GP : 0)
-#else
-#define PTE_MAYBE_GP 0
-#endif
+#define PTE_MAYBE_GP (system_supports_bti_kernel() ? PTE_GP : 0)
#define PAGE_KERNEL __pgprot(_PAGE_KERNEL)
#define PAGE_KERNEL_RO __pgprot(_PAGE_KERNEL_RO)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 2b478ca356b00..3f8c9c143552f 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -113,8 +113,7 @@ static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data)
pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
if (md->attribute & EFI_MEMORY_XP)
pte = set_pte_bit(pte, __pgprot(PTE_PXN));
- else if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
- system_supports_bti() && spd->has_bti)
+ else if (system_supports_bti_kernel() && spd->has_bti)
pte = set_pte_bit(pte, __pgprot(PTE_GP));
set_pte(ptep, pte);
return 0;
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index d9e1355730ef5..5562daf38a22f 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -212,7 +212,7 @@ static int __setup_additional_pages(enum vdso_abi abi,
if (IS_ERR(ret))
goto up_fail;
- if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && system_supports_bti())
+ if (system_supports_bti_kernel())
gp_flags = VM_ARM64_BTI;
vdso_base += VVAR_NR_PAGES * PAGE_SIZE;
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 799d2c204bb8a..77fb330c7bf48 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -401,7 +401,7 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
if (device)
return -EINVAL;
- if (IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) && system_supports_bti())
+ if (system_supports_bti_kernel())
attr |= KVM_PTE_LEAF_ATTR_HI_S1_GP;
} else {
attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
--
2.30.2
* [PATCH v2 17/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (16 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 16/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 18/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNP Mark Rutland
` (20 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In icache_inval_all_pou() we use cpus_have_const_cap() to check for
ARM64_HAS_CACHE_DIC, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpus_have_const_cap() check in icache_inval_all_pou() is an
optimization to skip a redundant (but benign) IC IALLUIS + DSB ISH
sequence when all CPUs in the system have DIC. In the window between
detecting the ARM64_HAS_CACHE_DIC cpucap and patching alternative
branches there is only a single potential call to icache_inval_all_pou()
(in the alternatives patching itself), which there's no need to optimize
for at the expense of other callers.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. This also aligns better with the way we patch the assembly
cache maintenance sequences in arch/arm64/mm/cache.S.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cacheflush.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d115451ed263d..fefac75fa009f 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -132,7 +132,7 @@ void flush_dcache_folio(struct folio *);
static __always_inline void icache_inval_all_pou(void)
{
- if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
+ if (alternative_has_cap_unlikely(ARM64_HAS_CACHE_DIC))
return;
asm("ic ialluis");
--
2.30.2
* [PATCH v2 18/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNP
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (17 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 17/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 19/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT Mark Rutland
` (19 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_supports_cnp() we use cpus_have_const_cap() to check for
ARM64_HAS_CNP, but this is only necessary so that the cpu_enable_cnp()
callback can run prior to alternatives being patched; otherwise
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpu_enable_cnp() callback is run immediately after the ARM64_HAS_CNP
cpucap is detected system-wide under setup_system_capabilities(), prior
to alternatives being patched. During this window cpu_enable_cnp() uses
cpu_replace_ttbr1() to set the CNP bit for the swapper_pg_dir in TTBR1.
No other users of the ARM64_HAS_CNP cpucap need the up-to-date value
during this window:
* As KVM isn't initialized yet, kvm_get_vttbr() isn't reachable.
* As cpuidle isn't initialized yet, __cpu_suspend_exit() isn't
reachable.
* At this point all CPUs are using the swapper_pg_dir with a reserved
ASID in TTBR1, and the idmap_pg_dir in TTBR0, so neither
check_and_switch_context() nor cpu_do_switch_mm() need to do anything
special.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. To allow cpu_enable_cnp() to function prior to alternatives
being patched, cpu_replace_ttbr1() is split into cpu_replace_ttbr1() and
cpu_enable_swapper_cnp(), with the former only used for early TTBR1
replacement, and the latter used by both cpu_enable_cnp() and
__cpu_suspend_exit().
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 +-
arch/arm64/include/asm/mmu_context.h | 28 +++++++++++++++++-----------
arch/arm64/kernel/cpufeature.c | 2 +-
arch/arm64/kernel/suspend.c | 2 +-
4 files changed, 20 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index caa54ddf0bcdc..a91f940351fc3 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -801,7 +801,7 @@ static __always_inline bool system_supports_tpidr2(void)
static __always_inline bool system_supports_cnp(void)
{
- return cpus_have_const_cap(ARM64_HAS_CNP);
+ return alternative_has_cap_unlikely(ARM64_HAS_CNP);
}
static inline bool system_supports_address_auth(void)
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index a6fb325424e7a..9ce4200508b12 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -152,7 +152,7 @@ static inline void cpu_install_ttbr0(phys_addr_t ttbr0, unsigned long t0sz)
* Atomically replaces the active TTBR1_EL1 PGD with a new VA-compatible PGD,
* avoiding the possibility of conflicting TLB entries being allocated.
*/
-static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
+static inline void __cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap, bool cnp)
{
typedef void (ttbr_replace_func)(phys_addr_t);
extern ttbr_replace_func idmap_cpu_replace_ttbr1;
@@ -162,17 +162,8 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
/* phys_to_ttbr() zeros lower 2 bits of ttbr with 52-bit PA */
phys_addr_t ttbr1 = phys_to_ttbr(virt_to_phys(pgdp));
- if (system_supports_cnp() && !WARN_ON(pgdp != lm_alias(swapper_pg_dir))) {
- /*
- * cpu_replace_ttbr1() is used when there's a boot CPU
- * up (i.e. cpufeature framework is not up yet) and
- * latter only when we enable CNP via cpufeature's
- * enable() callback.
- * Also we rely on the system_cpucaps bit being set before
- * calling the enable() function.
- */
+ if (cnp)
ttbr1 |= TTBR_CNP_BIT;
- }
replace_phys = (void *)__pa_symbol(idmap_cpu_replace_ttbr1);
@@ -189,6 +180,21 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
cpu_uninstall_idmap();
}
+static inline void cpu_enable_swapper_cnp(void)
+{
+ __cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir, true);
+}
+
+static inline void cpu_replace_ttbr1(pgd_t *pgdp, pgd_t *idmap)
+{
+ /*
+ * Only for early TTBR1 replacement before cpucaps are finalized and
+ * before we've decided whether to use CNP.
+ */
+ WARN_ON(system_capabilities_finalized());
+ __cpu_replace_ttbr1(pgdp, idmap, false);
+}
+
/*
* It would be nice to return ASIDs back to the allocator, but unfortunately
* that introduces a race with a generation rollover where we could erroneously
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index c493fa582a190..e1582e50f5291 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3458,7 +3458,7 @@ subsys_initcall_sync(init_32bit_el0_mask);
static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
{
- cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+ cpu_enable_swapper_cnp();
}
/*
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 0fbdf5fe64d8d..c09fa99e0a44e 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -55,7 +55,7 @@ void notrace __cpu_suspend_exit(void)
/* Restore CnP bit in TTBR1_EL1 */
if (system_supports_cnp())
- cpu_replace_ttbr1(lm_alias(swapper_pg_dir), idmap_pg_dir);
+ cpu_enable_swapper_cnp();
/*
* PSTATE was not saved over suspend/resume, re-enable any detected
--
2.30.2
* [PATCH v2 19/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (18 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 18/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNP Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 20/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
` (18 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In __cpu_suspend_exit() we use cpus_have_const_cap() to check for
ARM64_HAS_DIT, but this is not necessary and cpus_have_final_cap() or
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_HAS_DIT cpucap is detected and patched (along with all other
cpucaps) before __cpu_suspend_exit() can run. We'll only use
__cpu_suspend_exit() as part of PSCI cpuidle or hibernation, and both of
these are initialized after system cpucaps are detected and patched: the
PSCI cpuidle driver is registered with a device_initcall, hibernation
restoration occurs in a late_initcall, and hibernation saving is driven
by userspace. Therefore it is not necessary to use cpus_have_const_cap(),
and using alternative_has_cap_*() or cpus_have_final_cap() is
sufficient.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. To clearly document the ordering relationship between
suspend/resume and alternatives patching, an explicit check for
system_capabilities_finalized() is added to cpu_suspend() along with a
comment block, which will make it easier to spot issues if code is
changed in future to allow these functions to be reached earlier.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/kernel/suspend.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index c09fa99e0a44e..eca4d04352118 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -61,7 +61,7 @@ void notrace __cpu_suspend_exit(void)
* PSTATE was not saved over suspend/resume, re-enable any detected
* features that might not have been set correctly.
*/
- if (cpus_have_const_cap(ARM64_HAS_DIT))
+ if (alternative_has_cap_unlikely(ARM64_HAS_DIT))
set_pstate_dit(1);
__uaccess_enable_hw_pan();
@@ -98,6 +98,15 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
struct sleep_stack_data state;
struct arm_cpuidle_irq_context context;
+ /*
+ * Some portions of CPU state (e.g. PSTATE.{PAN,DIT}) are initialized
+ * before alternatives are patched, but are only restored by
+ * __cpu_suspend_exit() after alternatives are patched. To avoid
+ * accidentally losing these bits we must not attempt to suspend until
+ * after alternatives have been patched.
+ */
+ WARN_ON(!system_capabilities_finalized());
+
/* Report any MTE async fault before going to suspend */
mte_suspend_enter();
--
2.30.2
* [PATCH v2 20/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (19 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 19/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 21/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN Mark Rutland
` (17 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_uses_irq_prio_masking() we use cpus_have_const_cap() to check
for ARM64_HAS_GIC_PRIO_MASKING, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on when their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
When CONFIG_ARM64_PSEUDO_NMI=y the ARM64_HAS_GIC_PRIO_MASKING cpucap is
a strict boot cpu feature which is detected and patched early on the
boot cpu, which both happen in smp_prepare_boot_cpu(). In the window
between the ARM64_HAS_GIC_PRIO_MASKING cpucap is detected and
alternatives are patched we don't run any code that depends upon the
ARM64_HAS_GIC_PRIO_MASKING cpucap:
* We leave DAIF.IF set until after boot alternatives are patched, and
interrupts are unmasked later in init_IRQ(), so we cannot reach
IRQ/FIQ entry code and will not use irqs_priority_unmasked().
* We don't call any code which uses arm_cpuidle_save_irq_context() and
arm_cpuidle_restore_irq_context() during this window.
* We don't call start_thread_common() during this window.
* The local_irq_*() code in <asm/irqflags.h> depends solely on an
alternative branch since commit:
a5f61cc636f48bdf ("arm64: irqflags: use alternative branches for pseudo-NMI logic")
... and hence will use the default (DAIF-only) masking behaviour until
alternatives are patched.
* Secondary CPUs are brought up later after alternatives are patched,
and alternatives are patched on the boot CPU immediately prior to
calling init_gic_priority_masking(), so we'll correctly initialize
interrupt masking regardless.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which avoids generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime. As this makes system_uses_irq_prio_masking() equivalent to
__irqflags_uses_pmr(), the latter is removed and replaced with the
former for consistency.
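For the equivalence to hold, the IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) term
dropped from __irqflags_uses_pmr() has to be absorbed by
cpucap_is_possible(), which alternative_has_cap_unlikely() consults first.
A sketch of the presumed case, following the pattern used for other cpucaps
elsewhere in this series:
| case ARM64_HAS_GIC_PRIO_MASKING:
| 	return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI);
With that in place the compiler can still discard the PMR paths entirely
when pseudo-NMI support is not configured.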
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 +-
arch/arm64/include/asm/irqflags.h | 20 +++++++-------------
2 files changed, 8 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index a91f940351fc3..c387ee4ee194e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -821,7 +821,7 @@ static inline bool system_has_full_ptr_auth(void)
static __always_inline bool system_uses_irq_prio_masking(void)
{
- return cpus_have_const_cap(ARM64_HAS_GIC_PRIO_MASKING);
+ return alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
}
static inline bool system_supports_mte(void)
diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
index 1f31ec146d161..0a7186a93882d 100644
--- a/arch/arm64/include/asm/irqflags.h
+++ b/arch/arm64/include/asm/irqflags.h
@@ -21,12 +21,6 @@
* exceptions should be unmasked.
*/
-static __always_inline bool __irqflags_uses_pmr(void)
-{
- return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) &&
- alternative_has_cap_unlikely(ARM64_HAS_GIC_PRIO_MASKING);
-}
-
static __always_inline void __daif_local_irq_enable(void)
{
barrier();
@@ -49,7 +43,7 @@ static __always_inline void __pmr_local_irq_enable(void)
static inline void arch_local_irq_enable(void)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
__pmr_local_irq_enable();
} else {
__daif_local_irq_enable();
@@ -77,7 +71,7 @@ static __always_inline void __pmr_local_irq_disable(void)
static inline void arch_local_irq_disable(void)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
__pmr_local_irq_disable();
} else {
__daif_local_irq_disable();
@@ -99,7 +93,7 @@ static __always_inline unsigned long __pmr_local_save_flags(void)
*/
static inline unsigned long arch_local_save_flags(void)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
return __pmr_local_save_flags();
} else {
return __daif_local_save_flags();
@@ -118,7 +112,7 @@ static __always_inline bool __pmr_irqs_disabled_flags(unsigned long flags)
static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
return __pmr_irqs_disabled_flags(flags);
} else {
return __daif_irqs_disabled_flags(flags);
@@ -137,7 +131,7 @@ static __always_inline bool __pmr_irqs_disabled(void)
static inline bool arch_irqs_disabled(void)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
return __pmr_irqs_disabled();
} else {
return __daif_irqs_disabled();
@@ -169,7 +163,7 @@ static __always_inline unsigned long __pmr_local_irq_save(void)
static inline unsigned long arch_local_irq_save(void)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
return __pmr_local_irq_save();
} else {
return __daif_local_irq_save();
@@ -196,7 +190,7 @@ static __always_inline void __pmr_local_irq_restore(unsigned long flags)
*/
static inline void arch_local_irq_restore(unsigned long flags)
{
- if (__irqflags_uses_pmr()) {
+ if (system_uses_irq_prio_masking()) {
__pmr_local_irq_restore(flags);
} else {
__daif_local_irq_restore(flags);
--
2.30.2
* [PATCH v2 21/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (20 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 20/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 22/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN Mark Rutland
` (16 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_uses_hw_pan() we use cpus_have_const_cap() to check for
ARM64_HAS_PAN, but this is only necessary so that the
system_uses_ttbr0_pan() check in setup_system_features() can run prior to
alternatives being patched, and otherwise this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_HAS_PAN cpucap is used by system_uses_hw_pan() and
system_uses_ttbr0_pan() depending on whether CONFIG_ARM64_SW_TTBR0_PAN
is selected, and:
* We only use system_uses_hw_pan() directly in __sdei_handler(), which
isn't reachable until after alternatives have been patched, and for
this it is safe to use alternative_has_cap_*().
* We use system_uses_ttbr0_pan() in a few places:
- In check_and_switch_context() and cpu_uninstall_idmap(), which will
defer installing a translation table into TTBR0 when the
ARM64_HAS_PAN cpucap is not detected.
Prior to patching alternatives, all CPUs will be using init_mm with
the reserved ttbr0 translation tables installed in TTBR0, so these can
safely use alternative_has_cap_*().
- In update_saved_ttbr0(), which will only save the active TTBR0 into
a per-thread variable when the ARM64_HAS_PAN cpucap is not detected.
Prior to patching alternatives, all CPUs will be using init_mm with
the reserved ttbr0 translation tables installed in TTBR0, so these can
safely use alternative_has_cap_*().
- In efi_set_pgd(), which will handle check_and_switch_context()
deferring the installation of TTBR0 when TTBR0 PAN is detected.
The EFI runtime services are not initialized until after
alternatives have been patched, and so this can safely use
alternative_has_cap_*() or cpus_have_final_cap().
- In uaccess_ttbr0_disable() and uaccess_ttbr0_enable(), where we'll
avoid installing/uninstalling a translation table in TTBR0 when
ARM64_HAS_PAN is detected.
Prior to patching alternatives we will not perform any uaccess and
will not call uaccess_ttbr0_disable() or uaccess_ttbr0_enable(), and
so these can safely use alternative_has_cap_*() or
cpus_have_final_cap().
- In is_el1_permission_fault() where we will consider a translation
fault on a TTBR0 address to be a permission fault when ARM64_HAS_PAN
is not detected *and* we have set the PAN bit in the SPSR (which
tells us that in the interrupted context, TTBR0 pointed at the
reserved zero ttbr).
In the window between detecting system cpucaps and patching
alternatives we should not perform any accesses to TTBR0 addresses,
and no userspace translation tables exist until after patching
alternatives. Thus it is safe for this to use alternative_has_cap*().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
So that the check for TTBR0 PAN in setup_system_features() can run prior to
alternatives being patched, the call to system_uses_ttbr0_pan() is
replaced with an explicit check of the ARM64_HAS_PAN bit in the
system_cpucaps bitmap.
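For reference, system_uses_ttbr0_pan() is (roughly) defined as:
| static inline bool system_uses_ttbr0_pan(void)
| {
| 	return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
| 	       !system_uses_hw_pan();
| }
Since system_uses_hw_pan() now returns false until alternatives are
patched, the open-coded IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
!cpus_have_cap(ARM64_HAS_PAN) check in the hunk below preserves the old
behaviour for the boot-time message.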
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 +-
arch/arm64/kernel/cpufeature.c | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c387ee4ee194e..c581687b23a2e 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -765,7 +765,7 @@ static __always_inline bool system_supports_fpsimd(void)
static inline bool system_uses_hw_pan(void)
{
- return cpus_have_const_cap(ARM64_HAS_PAN);
+ return alternative_has_cap_unlikely(ARM64_HAS_PAN);
}
static inline bool system_uses_ttbr0_pan(void)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e1582e50f5291..9ab7e19b71762 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3368,7 +3368,8 @@ void __init setup_system_features(void)
* finalized. Finalize and log the available system capabilities.
*/
update_cpu_capabilities(SCOPE_SYSTEM);
- if (system_uses_ttbr0_pan())
+ if (IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
+ !cpus_have_cap(ARM64_HAS_PAN))
pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
/*
--
2.30.2
* [PATCH v2 22/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (21 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 21/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 23/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG Mark Rutland
` (15 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
We use cpus_have_const_cap() to check for ARM64_HAS_EPAN but this is not
necessary and alternative_has_cap() or cpus_have_cap() would be
preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_HAS_EPAN cpucap is used to affect two things:
1) The permission bits used for userspace executable mappings, which are
chosen by adjust_protection_map(), which is an arch_initcall. This is
called after the ARM64_HAS_EPAN cpucap has been detected and
alternatives have been patched, and before any userspace translation
tables exist.
2) The handling of faults taken from (user or kernel) accesses to
userspace executable mappings in do_page_fault(). Userspace
translation tables are created after adjust_protection_map() is
called, and hence after the ARM64_HAS_EPAN cpucap has been detected
and alternatives have been patched.
Neither of these run until after ARM64_HAS_EPAN cpucap has been detected
and alternatives have been patched, and hence there's no need to use
cpus_have_const_cap(). Since adjust_protection_map() is only executed
once at boot time it would be best for it to use cpus_have_cap(), and
since do_page_fault() is executed frequently it would be best for it to
use alternative_has_cap_unlikely().
This patch replaces the uses of cpus_have_const_cap() with
cpus_have_cap() and alternative_has_cap_unlikely(), which will avoid
generating redundant code, and should be better for all subsequent calls
at runtime. The ARM64_HAS_EPAN cpucap is added to cpucap_is_possible()
so that code can be elided entirely when this is not possible.
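As a sketch of the intended elision: with CONFIG_ARM64_EPAN=n,
cpucap_is_possible(ARM64_HAS_EPAN) is a compile-time false, so the fault
path below reduces to an unconditional assignment with no runtime test or
alternative slot:
| /* CONFIG_ARM64_EPAN=n: alternative_has_cap_unlikely() folds to false */
| if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
| 	vm_flags |= VM_EXEC;	/* always taken */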
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/mm/fault.c | 2 +-
arch/arm64/mm/mmap.c | 2 +-
3 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 07c9271b534df..af9550147dd08 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -21,6 +21,8 @@ cpucap_is_possible(const unsigned int cap)
switch (cap) {
case ARM64_HAS_PAN:
return IS_ENABLED(CONFIG_ARM64_PAN);
+ case ARM64_HAS_EPAN:
+ return IS_ENABLED(CONFIG_ARM64_EPAN);
case ARM64_SVE:
return IS_ENABLED(CONFIG_ARM64_SVE);
case ARM64_SME:
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 2e5d1e238af95..460d799e12966 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -571,7 +571,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
/* Write implies read */
vm_flags |= VM_WRITE;
/* If EPAN is absent then exec implies read */
- if (!cpus_have_const_cap(ARM64_HAS_EPAN))
+ if (!alternative_has_cap_unlikely(ARM64_HAS_EPAN))
vm_flags |= VM_EXEC;
}
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 8f5b7ce857ed4..645fe60d000f1 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -68,7 +68,7 @@ static int __init adjust_protection_map(void)
* With Enhanced PAN we can honour the execute-only permissions as
* there is no PAN override with such mappings.
*/
- if (cpus_have_const_cap(ARM64_HAS_EPAN)) {
+ if (cpus_have_cap(ARM64_HAS_EPAN)) {
protection_map[VM_EXEC] = PAGE_EXECONLY;
protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
}
--
2.30.2
* [PATCH v2 23/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (22 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 22/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 24/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT Mark Rutland
` (14 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In __cpu_has_rng() we use cpus_have_const_cap() to check for
ARM64_HAS_RNG, but this is not necessary and alternative_has_cap_*()
would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
In the window between detecting the ARM64_HAS_RNG cpucap and patching
alternative branches, nothing which calls __cpu_has_rng() can run, and
hence it's not necessary to use cpus_have_const_cap().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/archrandom.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/archrandom.h b/arch/arm64/include/asm/archrandom.h
index b0abc64f86b03..ecdb3cfcd0f8c 100644
--- a/arch/arm64/include/asm/archrandom.h
+++ b/arch/arm64/include/asm/archrandom.h
@@ -63,7 +63,7 @@ static __always_inline bool __cpu_has_rng(void)
{
if (unlikely(!system_capabilities_finalized() && !preemptible()))
return this_cpu_has_cap(ARM64_HAS_RNG);
- return cpus_have_const_cap(ARM64_HAS_RNG);
+ return alternative_has_cap_unlikely(ARM64_HAS_RNG);
}
static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t max_longs)
--
2.30.2
* [PATCH v2 24/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (23 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 23/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 25/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE Mark Rutland
` (13 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In __delay() we use cpus_have_const_cap() to check for ARM64_HAS_WFXT,
but this is not necessary and alternative_has_cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpus_have_const_cap() check in __delay() is an optimization to use
WFIT and WFET in preference to busy-polling the counter and/or using
regular WFE and relying upon the architected timer event stream. It is
not necessary to apply this optimization in the window between detecting
the ARM64_HAS_WFXT cpucap and patching alternatives.
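For context, the WFxT-based wait is roughly of the following shape (wfet()
is an assumed helper name here; the hunk below shows the real context):
| if (alternative_has_cap_unlikely(ARM64_HAS_WFXT)) {
| 	u64 end = start + cycles;
|
| 	/* wait with a timeout of 'end', re-checking the counter in
| 	 * case of early or spurious wakeups */
| 	while ((get_cycles() - start) < cycles)
| 		wfet(end);
| }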
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/lib/delay.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/lib/delay.c b/arch/arm64/lib/delay.c
index 5b7890139bc2f..cb2062e7e2340 100644
--- a/arch/arm64/lib/delay.c
+++ b/arch/arm64/lib/delay.c
@@ -27,7 +27,7 @@ void __delay(unsigned long cycles)
{
cycles_t start = get_cycles();
- if (cpus_have_const_cap(ARM64_HAS_WFXT)) {
+ if (alternative_has_cap_unlikely(ARM64_HAS_WFXT)) {
u64 end = start + cycles;
/*
--
2.30.2
* [PATCH v2 25/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (24 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 24/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 26/38] arm64: Avoid cpus_have_const_cap() for ARM64_MTE Mark Rutland
` (12 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
We use cpus_have_const_cap() to check for ARM64_HAS_TLB_RANGE, but this
is not necessary and alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
In the window between detecting the ARM64_HAS_TLB_RANGE cpucap and
patching alternative branches, we do not perform any TLB invalidation,
and even if we were to perform TLB invalidation here it would not be
functionally necessary to optimize this by using range invalidation.
Hence there's no need to use cpus_have_const_cap(), and
alternative_has_cap_unlikely() is sufficient.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c581687b23a2e..bceb007f9ea3a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -848,7 +848,7 @@ static inline bool system_supports_bti_kernel(void)
static inline bool system_supports_tlb_range(void)
{
- return cpus_have_const_cap(ARM64_HAS_TLB_RANGE);
+ return alternative_has_cap_unlikely(ARM64_HAS_TLB_RANGE);
}
int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
--
2.30.2
* [PATCH v2 26/38] arm64: Avoid cpus_have_const_cap() for ARM64_MTE
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (25 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 25/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 27/38] arm64: Avoid cpus_have_const_cap() for ARM64_SSBS Mark Rutland
` (11 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_supports_mte() we use cpus_have_const_cap() to check for
ARM64_MTE, but this is not necessary and cpus_have_final_boot_cap()
would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_MTE cpucap is a boot cpu feature which is detected and patched
early on the boot CPU under smp_prepare_boot_cpu(). In the window
between detecting the ARM64_MTE cpucap and patching alternatives,
nothing depends on the ARM64_MTE cpucap:
* The kasan_hw_tags_enabled() helper depends upon the kasan_flag_enabled
static key, which is initialized later in kasan_init_hw_tags() after
alternatives have been applied.
* No KVM code is called during this window, and KVM is not initialized
until after system cpucaps have been detected and patched. KVM code
can safely use cpus_have_final_cap() or alternative_has_cap_*().
* We don't context-switch prior to patching boot alternatives, and thus
mte_thread_switch() is not reachable during this window. Thus, we can
safely use cpus_have_final_boot_cap() or alternative_has_cap_*() in
the context-switch code.
* IRQ and FIQ are masked during this window, and we can only take SError
and Debug exceptions. SError exceptions are fatal at this point in
time, and we do not expect to take Debug exceptions, thus:
- It's fine to leave TCO set for exceptions taken during this window,
and mte_disable_tco_entry() doesn't need to do anything.
- We don't need to detect and report asynchronous tag check faults
during this window, and neither mte_check_tfsr_entry() nor
mte_check_tfsr_exit() need to do anything.
Since we want to report any SErrors taken during this window, these
cannot safely use cpus_have_final_boot_cap() or cpus_have_final_cap(),
but these can safely use alternative_has_cap_*().
* The __set_pte_at() function is not used during this window. It is
possible for this to be used on kernel mappings prior to boot cpucaps
being finalized, so this cannot safely use cpus_have_final_boot_cap()
or cpus_have_final_cap(), but this can safely use
alternative_has_cap_*().
* No userspace translation tables have been created yet, and swap has
not been initialized yet. Thus swapping is not possible and none of
the following are called:
- arch_thp_swp_supported()
- arch_prepare_to_swap()
- arch_swap_invalidate_page()
- arch_swap_invalidate_area()
- arch_swap_restore()
These can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The elfcore functions are only reachable after userspace is brought
up, which happens after system cpucaps have been detected and patched.
Thus the elfcore code can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* Hibernation is only possible after userspace is brought up, which
happens after system cpucaps have been detected and patched. Thus the
hibernate code can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The set_tagged_addr_ctrl() function is only reachable after userspace
is brought up, which happens after system cpucaps have been detected
and patched. Thus this can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The copy_user_highpage() and copy_highpage() functions are not used
during this window, and can safely use alternative_has_cap_*().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which avoids generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index bceb007f9ea3a..386b339d5367d 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -826,7 +826,7 @@ static __always_inline bool system_uses_irq_prio_masking(void)
static inline bool system_supports_mte(void)
{
- return cpus_have_const_cap(ARM64_MTE);
+ return alternative_has_cap_unlikely(ARM64_MTE);
}
static inline bool system_has_prio_mask_debugging(void)
--
2.30.2
* [PATCH v2 27/38] arm64: Avoid cpus_have_const_cap() for ARM64_SSBS
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (26 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 26/38] arm64: Avoid cpus_have_const_cap() for ARM64_MTE Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 28/38] arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2 Mark Rutland
` (10 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In ssbs_thread_switch() we use cpus_have_const_cap() to check for
ARM64_SSBS, but this is not necessary and alternative_has_cap_*() would
be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpus_have_const_cap() check in ssbs_thread_switch() is an
optimization to avoid the overhead of
spectre_v4_enable_task_mitigation() where all CPUs implement SSBS and
naturally preserve the SSBS bit in SPSR_ELx. In the window between
detecting the ARM64_SSBS cpucap system-wide and patching alternative branches
it is benign to continue to call spectre_v4_enable_task_mitigation().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 0fcc4eb1a7abc..657ea273c0f9b 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -454,7 +454,7 @@ static void ssbs_thread_switch(struct task_struct *next)
* If all CPUs implement the SSBS extension, then we just need to
* context-switch the PSTATE field.
*/
- if (cpus_have_const_cap(ARM64_SSBS))
+ if (alternative_has_cap_unlikely(ARM64_SSBS))
return;
spectre_v4_enable_task_mitigation(next);
--
2.30.2
* [PATCH v2 28/38] arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (27 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 27/38] arm64: Avoid cpus_have_const_cap() for ARM64_SSBS Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 29/38] arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64} Mark Rutland
` (9 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In arm64_apply_bp_hardening() we use cpus_have_const_cap() to check for
ARM64_SPECTRE_V2, but this is not necessary and alternative_has_cap_*()
would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpus_have_const_cap() check in arm64_apply_bp_hardening() is
intended to avoid the overhead of looking up and invoking a per-cpu
function pointer when no branch predictor hardening is required. The
arm64_apply_bp_hardening() function itself is called in two distinct
flows:
1) When handling certain exceptions taken from EL0, where the PC could
be a TTBR1 address and hence might have trained a branch predictor.
As cpucaps are detected and alternatives are patched long before it
is possible to execute userspace, it is not necessary to use
cpus_have_const_cap() for these cases, and cpus_have_final_cap() or
alternative_has_cap() would be preferable.
2) When switching between tasks in check_and_switch_context().
This can be called before cpucaps are detected and alternatives are
patched, but this is long before the kernel mounts filesystems or
accepts any input. At this stage the kernel hasn't loaded any secrets
and there is no potential for hostile branch predictor training. Once
cpucaps have been finalized and alternatives have been patched,
switching tasks will invalidate any prior predictions. Hence it is
not necessary to use cpus_have_const_cap() for this case.
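The indirection being guarded is, roughly (based on the context visible in
the hunk below):
| d = this_cpu_ptr(&bp_hardening_data);
| if (d->fn)
| 	d->fn();
i.e. the cpucap check exists purely to skip this per-cpu lookup and
indirect call on systems that need no branch predictor hardening, so a
patched alternative branch is sufficient.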
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/spectre.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
index 9cc501450486d..06c357d83b138 100644
--- a/arch/arm64/include/asm/spectre.h
+++ b/arch/arm64/include/asm/spectre.h
@@ -73,7 +73,7 @@ static __always_inline void arm64_apply_bp_hardening(void)
{
struct bp_hardening_data *d;
- if (!cpus_have_const_cap(ARM64_SPECTRE_V2))
+ if (!alternative_has_cap_unlikely(ARM64_SPECTRE_V2))
return;
d = this_cpu_ptr(&bp_hardening_data);
--
2.30.2
* [PATCH v2 29/38] arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64}
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (28 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 28/38] arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 30/38] arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0 Mark Rutland
` (8 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In system_supports_{sve,sme,sme2,fa64}() we use cpus_have_const_cap() to
check for the relevant cpucaps, but this is only necessary so that
sve_setup() and sme_setup() can run prior to alternatives being patched,
and otherwise alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
All of system_supports_{sve,sme,sme2,fa64}() will return false prior to
system cpucaps being detected. In the window between system cpucaps being
detected and patching alternatives, sve_setup() and sme_setup() (which
are gated on system_supports_sve() and system_supports_sme()) need to
run to initialize the SVE and SME properties, but
all other users of system_supports_{sve,sme,sme2,fa64}() don't depend on
the relevant cpucap becoming true until alternatives are patched:
* No KVM code runs until after alternatives are patched, and so this can
safely use cpus_have_final_cap() or alternative_has_cap_*().
* The cpuid_cpu_online() callback in arch/arm64/kernel/cpuinfo.c is
registered later from cpuinfo_regs_init() as a device_initcall, and so
this can safely use cpus_have_final_cap() or alternative_has_cap_*().
* The entry, signal, and ptrace code isn't reachable until userspace has
run, and so this can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* Currently perf_reg_validate() will un-reserve the PERF_REG_ARM64_VG
pseudo-register before alternatives are patched, and before
sve_setup() has run. If a sampling event is created early enough, this
would allow perf_ext_reg_value() to sample (the as-yet uninitialized)
thread_struct::vl[] prior to alternatives being patched.
It would be preferable to defer this until alternatives are patched,
and this can safely use alternative_has_cap_*().
* The context-switch code will run during this window as part of
stop_machine() used during alternatives_patch_all(), and potentially
for other work if other kernel threads are created early. No threads
require the use of SVE/SME/SME2/FA64 prior to alternatives being
patched, and it would be preferable for the related context-switch
logic to take effect after alternatives are patched so that this is
guaranteed to see a consistent system-wide state (e.g. anything
initialized by sve_setup() and sme_setup()).
This can safely use alternative_has_cap_*().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The sve_setup() and sme_setup() functions are modified to
use cpus_have_cap() directly so that they can observe the cpucaps being
set prior to alternatives being patched.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 8 ++++----
arch/arm64/kernel/fpsimd.c | 4 ++--
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 386b339d5367d..e1f011f9ea20a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -776,22 +776,22 @@ static inline bool system_uses_ttbr0_pan(void)
static __always_inline bool system_supports_sve(void)
{
- return cpus_have_const_cap(ARM64_SVE);
+ return alternative_has_cap_unlikely(ARM64_SVE);
}
static __always_inline bool system_supports_sme(void)
{
- return cpus_have_const_cap(ARM64_SME);
+ return alternative_has_cap_unlikely(ARM64_SME);
}
static __always_inline bool system_supports_sme2(void)
{
- return cpus_have_const_cap(ARM64_SME2);
+ return alternative_has_cap_unlikely(ARM64_SME2);
}
static __always_inline bool system_supports_fa64(void)
{
- return cpus_have_const_cap(ARM64_SME_FA64);
+ return alternative_has_cap_unlikely(ARM64_SME_FA64);
}
static __always_inline bool system_supports_tpidr2(void)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index f1c9eadea200d..3e3c524f715bf 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1199,7 +1199,7 @@ void __init sve_setup(void)
DECLARE_BITMAP(tmp_map, SVE_VQ_MAX);
unsigned long b;
- if (!system_supports_sve())
+ if (!cpus_have_cap(ARM64_SVE))
return;
/*
@@ -1358,7 +1358,7 @@ void __init sme_setup(void)
u64 smcr;
int min_bit;
- if (!system_supports_sme())
+ if (!cpus_have_cap(ARM64_SME))
return;
/*
--
2.30.2
* [PATCH v2 30/38] arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (29 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 29/38] arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64} Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 31/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419 Mark Rutland
` (7 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In arm64_kernel_unmapped_at_el0() we use cpus_have_const_cap() to check
for ARM64_UNMAP_KERNEL_AT_EL0, but this is only necessary so that
arm64_get_bp_hardening_vector() and this_cpu_set_vectors() can run prior
to alternatives being patched. Otherwise this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_UNMAP_KERNEL_AT_EL0 cpucap is a system-wide feature that is
detected and patched before any translation tables are created for
userspace. In the window between detecting the ARM64_UNMAP_KERNEL_AT_EL0
cpucap and patching alternatives, most users of
arm64_kernel_unmapped_at_el0() do not need to know that the cpucap has
been detected:
* As KVM is initialized after cpucaps are finalized, no use of
arm64_kernel_unmapped_at_el0() in the KVM code is reachable during
this window.
* The arm64_mm_context_get() function in arch/arm64/mm/context.c is only
called after the SMMU driver is brought up after alternatives have
been patched. Thus this can safely use cpus_have_final_cap() or
alternative_has_cap_*().
Similarly the asids_update_limit() function is called after
alternatives have been patched as an arch_initcall, and this can
safely use cpus_have_final_cap() or alternative_has_cap_*().
Similarly we do not expect an ASID rollover to occur between cpucaps
being detected and patching alternatives. Thus
set_reserved_asid_bits() can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The __tlbi_user() and __tlbi_user_level() macros are not used during
this window, and only need to invalidate additional entries once
userspace translation tables have been active on a CPU. Thus these can
safely use alternative_has_cap_*().
* The xen_kernel_unmapped_at_usr() function is not used during this
window as it is only used in a late_initcall. Thus this can safely use
cpus_have_final_cap() or alternative_has_cap_*().
* The arm64_get_meltdown_state() function is not used during this
window. It is only used by cpu_show_meltdown() and KVM code, both
of which are only used after cpucaps have been finalized. Thus this
can safely use cpus_have_final_cap() or alternative_has_cap_*().
* The tls_thread_switch() function uses arm64_kernel_unmapped_at_el0()
as an optimization to avoid zeroing tpidrro_el0 when KPTI is enabled,
since the register will be trampled by the KPTI trampoline anyway. It
doesn't matter if
this continues to zero the register during the window between
detecting the cpucap and patching alternatives, so this can safely use
alternative_has_cap_*().
* The sdei_arch_get_entry_point() and do_sdei_event() functions aren't
reachable at this time as the SDEI driver is registered later by
acpi_init() -> acpi_ghes_init() -> sdei_init(), where acpi_init is a
subsys_initcall. Thus these can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The uses under drivers/ aren't reachable at this time as the drivers
are registered later:
- TRBE is registered via module_init()
- SMMUv3 is registered via module_driver()
- SPE is registered via module_init()
* The arm64_get_bp_hardening_vector() and this_cpu_set_vectors()
functions need to run on boot CPUs prior to patching alternatives.
As these are only called during the onlining of a CPU, it's fine to
perform a system_cpucaps bitmap test using cpus_have_cap().
This patch modifies arm64_get_bp_hardening_vector() and
this_cpu_set_vectors() to use cpus_have_cap(), and replaces the
remaining use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The ARM64_UNMAP_KERNEL_AT_EL0 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible.
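As an aside on the elision: once a cpucap has a cpucap_is_possible()
entry, the cap-checking helpers fold to a compile-time false whenever
the corresponding Kconfig option is disabled, so callers need no
ifdeffery of their own. A sketch of the resulting pattern (illustrative
only; kpti_only_work() is a hypothetical placeholder):
| if (arm64_kernel_unmapped_at_el0()) {
|         /*
|          * With CONFIG_UNMAP_KERNEL_AT_EL0=n this condition is a
|          * compile-time false and the block is elided entirely;
|          * otherwise it compiles to a single patchable branch.
|          */
|         kpti_only_work();       /* hypothetical KPTI-only path */
| }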
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/include/asm/mmu.h | 2 +-
arch/arm64/include/asm/vectors.h | 2 +-
arch/arm64/kernel/proton-pack.c | 2 +-
4 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index af9550147dd08..5e3ec59fb48bb 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -42,6 +42,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_ARM64_BTI);
case ARM64_HAS_TLB_RANGE:
return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
+ case ARM64_UNMAP_KERNEL_AT_EL0:
+ return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
case ARM64_WORKAROUND_2658417:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
}
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 94b68850cb9f0..2fcf51231d6eb 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -57,7 +57,7 @@ typedef struct {
static inline bool arm64_kernel_unmapped_at_el0(void)
{
- return cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
+ return alternative_has_cap_unlikely(ARM64_UNMAP_KERNEL_AT_EL0);
}
extern void arm64_memblock_init(void);
diff --git a/arch/arm64/include/asm/vectors.h b/arch/arm64/include/asm/vectors.h
index bc9a2145f4194..b815d8f2c0dcd 100644
--- a/arch/arm64/include/asm/vectors.h
+++ b/arch/arm64/include/asm/vectors.h
@@ -62,7 +62,7 @@ DECLARE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector);
static inline const char *
arm64_get_bp_hardening_vector(enum arm64_bp_harden_el1_vectors slot)
{
- if (arm64_kernel_unmapped_at_el0())
+ if (cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0))
return (char *)(TRAMP_VALIAS + SZ_2K * slot);
WARN_ON_ONCE(slot == EL1_VECTOR_KPTI);
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 05f40c4e18fda..6268a13a1d589 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -972,7 +972,7 @@ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
* When KPTI is in use, the vectors are switched when exiting to
* user-space.
*/
- if (arm64_kernel_unmapped_at_el0())
+ if (cpus_have_cap(ARM64_UNMAP_KERNEL_AT_EL0))
return;
write_sysreg(v, vbar_el1);
--
2.30.2
* [PATCH v2 31/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (30 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 30/38] arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 32/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419 Mark Rutland
` (6 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In count_plts() and is_forbidden_offset_for_adrp() we use
cpus_have_const_cap() to check for ARM64_WORKAROUND_843419, but this is
not necessary and cpus_have_final_cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
It's not possible to load a module in the window between detecting the
ARM64_WORKAROUND_843419 cpucap and patching alternatives. The module VA
range limits are initialized much later in module_init_limits() which is
a subsys_initcall, and module loading cannot happen before this. Hence
it's not necessary for count_plts() or is_forbidden_offset_for_adrp() to
use cpus_have_const_cap().
This patch replaces the use of cpus_have_const_cap() with
cpus_have_final_cap() which will avoid generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime. Using cpus_have_final_cap() clearly documents that we do not
expect this code to run before cpucaps are finalized, and will make it
easier to spot issues if code is changed in future to allow modules to
be loaded earlier. The ARM64_WORKAROUND_843419 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is not
possible, and redundant IS_ENABLED() checks are removed.
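To make the cpus_have_final_cap() point concrete: the helper BUG()s if
it is reached before cpucaps are finalized, so any future change which
loads modules earlier would fail loudly rather than silently consuming a
stale value. A simplified sketch (matching the definition retained at
the end of this series):
| static __always_inline bool cpus_have_final_cap(int num)
| {
|         if (system_capabilities_finalized())
|                 return alternative_has_cap_unlikely(num);
|         else
|                 BUG();  /* flags use before cpucaps are finalized */
| }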
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/include/asm/module.h | 3 +--
arch/arm64/kernel/module-plts.c | 7 +++----
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 5e3ec59fb48bb..68b0a658e0828 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -44,6 +44,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_ARM64_TLB_RANGE);
case ARM64_UNMAP_KERNEL_AT_EL0:
return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
+ case ARM64_WORKAROUND_843419:
+ return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419);
case ARM64_WORKAROUND_2658417:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
}
diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index bfa6638b4c930..79550b22ba19c 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -44,8 +44,7 @@ struct plt_entry {
static inline bool is_forbidden_offset_for_adrp(void *place)
{
- return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) &&
- cpus_have_const_cap(ARM64_WORKAROUND_843419) &&
+ return cpus_have_final_cap(ARM64_WORKAROUND_843419) &&
((u64)place & 0xfff) >= 0xff8;
}
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index bd69a4e7cd605..399692c414f02 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -203,8 +203,7 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
break;
case R_AARCH64_ADR_PREL_PG_HI21_NC:
case R_AARCH64_ADR_PREL_PG_HI21:
- if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) ||
- !cpus_have_const_cap(ARM64_WORKAROUND_843419))
+ if (!cpus_have_final_cap(ARM64_WORKAROUND_843419))
break;
/*
@@ -239,13 +238,13 @@ static unsigned int count_plts(Elf64_Sym *syms, Elf64_Rela *rela, int num,
}
}
- if (IS_ENABLED(CONFIG_ARM64_ERRATUM_843419) &&
- cpus_have_const_cap(ARM64_WORKAROUND_843419))
+ if (cpus_have_final_cap(ARM64_WORKAROUND_843419)) {
/*
* Add some slack so we can skip PLT slots that may trigger
* the erratum due to the placement of the ADRP instruction.
*/
ret += DIV_ROUND_UP(ret, (SZ_4K / sizeof(struct plt_entry)));
+ }
return ret;
}
--
2.30.2
* [PATCH v2 32/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (31 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 31/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 33/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098 Mark Rutland
` (5 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
We use cpus_have_const_cap() to check for ARM64_WORKAROUND_1542419 but
this is not necessary and cpus_have_final_cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_WORKAROUND_1542419 cpucap is detected and patched before any
userspace code can run, and both __do_compat_cache_op() and
ctr_read_handler() are only reachable from exceptions taken from
userspace. Thus it is not necessary for either to use
cpus_have_const_cap(), and cpus_have_final_cap() is equivalent.
This patch replaces the use of cpus_have_const_cap() with
cpus_have_final_cap(), which will avoid generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime. Using cpus_have_final_cap() clearly documents that we do not
expect this code to run before cpucaps are finalized, and will make it
easier to spot issues if code is changed in future to allow these
functions to be reached earlier.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/kernel/sys_compat.c | 2 +-
arch/arm64/kernel/traps.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index df14336c3a29c..4a609e9b65de0 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -31,7 +31,7 @@ __do_compat_cache_op(unsigned long start, unsigned long end)
if (fatal_signal_pending(current))
return 0;
- if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
+ if (cpus_have_final_cap(ARM64_WORKAROUND_1542419)) {
/*
* The workaround requires an inner-shareable tlbi.
* We pick the reserved-ASID to minimise the impact.
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 8b70759cdbb90..9eba6cdd70382 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -631,7 +631,7 @@ static void ctr_read_handler(unsigned long esr, struct pt_regs *regs)
int rt = ESR_ELx_SYS64_ISS_RT(esr);
unsigned long val = arm64_ftr_reg_user_value(&arm64_ftr_reg_ctrel0);
- if (cpus_have_const_cap(ARM64_WORKAROUND_1542419)) {
+ if (cpus_have_final_cap(ARM64_WORKAROUND_1542419)) {
/* Hide DIC so that we can trap the unnecessary maintenance...*/
val &= ~BIT(CTR_EL0_DIC_SHIFT);
--
2.30.2
* [PATCH v2 33/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (32 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 32/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 34/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198 Mark Rutland
` (4 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In elf_hwcap_fixup() we use cpus_have_const_cap() to check for
ARM64_WORKAROUND_1742098, but this is not necessary and cpus_have_cap()
would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_WORKAROUND_1742098 cpucap is detected and patched before
elf_hwcap_fixup() can run, and hence it is not necessary to use
cpus_have_const_cap(). We run elf_hwcap_fixup() at most twice: once
after finalizing system cpucaps, and potentially once more after
detecting mismatched CPUs which support AArch32 at EL0. Due to this,
it's not necessary to optimize for many calls to elf_hwcap_fixup(), and
it's fine to use cpus_have_cap().
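For reference, cpus_have_cap() is little more than a bitmap test; a
simplified sketch (omitting the bounds and possibility checks in the
real definition) is:
| static __always_inline bool cpus_have_cap(unsigned int num)
| {
|         return arch_test_bit(num, system_cpucaps);
| }
This is cheap enough for a path which runs at most twice during boot.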
This patch replaces the use of cpus_have_const_cap() with
cpus_have_cap(), which will only generate the bitmap test and avoid
generating an alternative sequence, resulting in slightly simpler and
smaller code being generated. The ARM64_WORKAROUND_1742098 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is not
possible without requiring ifdeffery or IS_ENABLED() checks at each usage.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/kernel/cpufeature.c | 4 +---
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 68b0a658e0828..5613d9e813b20 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -46,6 +46,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
case ARM64_WORKAROUND_843419:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419);
+ case ARM64_WORKAROUND_1742098:
+ return IS_ENABLED(CONFIG_ARM64_ERRATUM_1742098);
case ARM64_WORKAROUND_2658417:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
}
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ab7e19b71762..a65c6f00deec5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2221,10 +2221,8 @@ static void user_feature_fixup(void)
static void elf_hwcap_fixup(void)
{
-#ifdef CONFIG_ARM64_ERRATUM_1742098
- if (cpus_have_const_cap(ARM64_WORKAROUND_1742098))
+ if (cpus_have_cap(ARM64_WORKAROUND_1742098))
compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES;
-#endif /* ARM64_ERRATUM_1742098 */
}
#ifdef CONFIG_KVM
--
2.30.2
* [PATCH v2 34/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (33 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 33/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 35/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154 Mark Rutland
` (3 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
We use cpus_have_const_cap() to check for ARM64_WORKAROUND_2645198 but
this is not necessary and alternative_has_cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_WORKAROUND_2645198 cpucap is detected and patched before any
userspace translation tables exist, and the workaround is only necessary
when manipulating userspace translation tables which are in use. Thus it
is not necessary to use cpus_have_const_cap(), and alternative_has_cap()
is equivalent.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The ARM64_WORKAROUND_2645198 cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible, and redundant IS_ENABLED() checks are removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/mm/hugetlbpage.c | 3 +--
arch/arm64/mm/mmu.c | 3 +--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 5613d9e813b20..c5b67a64613e9 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -48,6 +48,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_ARM64_ERRATUM_843419);
case ARM64_WORKAROUND_1742098:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_1742098);
+ case ARM64_WORKAROUND_2645198:
+ return IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198);
case ARM64_WORKAROUND_2658417:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
}
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 9c52718ea7509..e9fc56e4f98c0 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -555,8 +555,7 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
{
- if (IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198) &&
- cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
/*
* Break-before-make (BBM) is required for all user space mappings
* when the permission changes from executable to non-executable
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 47781bec61719..15f6347d23b69 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1469,8 +1469,7 @@ early_initcall(prevent_bootmem_remove_init);
pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
{
- if (IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198) &&
- cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
/*
* Break-before-make (BBM) is required for all user space mappings
* when the permission changes from executable to non-executable
--
2.30.2
* [PATCH v2 35/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (34 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 34/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 36/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP Mark Rutland
` (2 subsequent siblings)
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In gic_read_iar() we use cpus_have_const_cap() to check for
ARM64_WORKAROUND_CAVIUM_23154 but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_WORKAROUND_CAVIUM_23154 cpucap is detected and patched early
on the boot CPU before the GICv3 driver is initialized and hence before
gic_read_iar() is ever called. Thus it is not necessary to use
cpus_have_const_cap(), and alternative_has_cap() is equivalent.
In addition, arm64's gic_read_iar() lives in irq-gic-v3.c purely for
historical reasons. It was originally added prior to 32-bit arm support
in commit:
6d4e11c5e2e8cd54 ("irqchip/gicv3: Workaround for Cavium ThunderX erratum 23154")
When support for 32-bit arm was added, 32-bit arm's gic_read_iar()
implementation was placed in <asm/arch_gicv3.h>, but the arm64 version
was kept within irq-gic-v3.c as it depended on a static key local to
irq-gic-v3.c and it was easier to add ifdeffery, which is what we did in
commit:
7936e914f7b0827c ("irqchip/gic-v3: Refactor the arm64 specific parts")
Subsequently the static key was replaced with a cpucap in commit:
a4023f682739439b ("arm64: Add hypervisor safe helper for checking constant capabilities")
Since that commit there has been no need to keep arm64's gic_read_iar()
in irq-gic-v3.c.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. For consistency, move the arm64-specific gic_read_iar()
implementation over to arm64's <asm/arch_gicv3.h>. The
ARM64_WORKAROUND_CAVIUM_23154 cpucap is added to cpucap_is_possible() so
that code can be elided entirely when this is not possible.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/arch_gicv3.h | 8 ++++++++
arch/arm64/include/asm/cpucaps.h | 2 ++
drivers/irqchip/irq-gic-v3.c | 11 -----------
3 files changed, 10 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 01281a5336cf8..5f172611654b1 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -79,6 +79,14 @@ static inline u64 gic_read_iar_cavium_thunderx(void)
return 0x3ff;
}
+static u64 __maybe_unused gic_read_iar(void)
+{
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_CAVIUM_23154))
+ return gic_read_iar_cavium_thunderx();
+ else
+ return gic_read_iar_common();
+}
+
static inline void gic_write_ctlr(u32 val)
{
write_sysreg_s(val, SYS_ICC_CTLR_EL1);
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index c5b67a64613e9..34b6428f08bad 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -52,6 +52,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2645198);
case ARM64_WORKAROUND_2658417:
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
+ case ARM64_WORKAROUND_CAVIUM_23154:
+ return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
}
return true;
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index eedfa8e9f0772..6d6c01f8f7b47 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -270,17 +270,6 @@ static void gic_redist_wait_for_rwp(void)
gic_do_wait_for_rwp(gic_data_rdist_rd_base(), GICR_CTLR_RWP);
}
-#ifdef CONFIG_ARM64
-
-static u64 __maybe_unused gic_read_iar(void)
-{
- if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_23154))
- return gic_read_iar_cavium_thunderx();
- else
- return gic_read_iar_common();
-}
-#endif
-
static void gic_enable_redist(bool enable)
{
void __iomem *rbase;
--
2.30.2
* [PATCH v2 36/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (35 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 35/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154 Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 37/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI Mark Rutland
2023-10-05 9:50 ` [PATCH v2 38/38] arm64: Remove cpus_have_const_cap() Mark Rutland
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In has_useable_cnp() we use cpus_have_const_cap() to check for
ARM64_WORKAROUND_NVIDIA_CARMEL_CNP, but this is not necessary and
cpus_have_cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
We use has_useable_cnp() to determine whether we have the system-wide
ARM64_HAS_CNP cpucap. Due to the structure of the cpufeature code, we
call has_useable_cnp() in two distinct cases:
1) When finalizing system capabilities, setup_system_capabilities() will
call has_useable_cnp() with SCOPE_SYSTEM to determine whether all
CPUs have the feature. This is called after we've detected any local
cpucaps including ARM64_WORKAROUND_NVIDIA_CARMEL_CNP, but prior to
patching alternatives.
If the ARM64_WORKAROUND_NVIDIA_CARMEL_CNP was detected, we will not
detect ARM64_HAS_CNP.
2) After finalizing system capabilities, verify_local_cpu_capabilities()
will call has_useable_cnp() with SCOPE_LOCAL_CPU to verify that CPUs
have CNP if we previously detected it.
Note that if ARM64_WORKAROUND_NVIDIA_CARMEL_CNP was detected, we will
not have detected ARM64_HAS_CNP.
For case 1 we must check the system_cpucaps bitmap as this occurs prior
to patching the alternatives. For case 2 we'll only call
has_useable_cnp() once per subsequent onlining of a CPU, and as this
isn't a fast path it's not necessary to optimize for this case.
This patch replaces the use of cpus_have_const_cap() with
cpus_have_cap(), which will only generate the bitmap test and avoid
generating an alternative sequence, resulting in slightly simpler and
smaller code being generated. The ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
cpucap is added to cpucap_is_possible() so that code can be elided
entirely when this is not possible.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/kernel/cpufeature.c | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 34b6428f08bad..7ddb79c235c27 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -54,6 +54,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_ARM64_ERRATUM_2658417);
case ARM64_WORKAROUND_CAVIUM_23154:
return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
+ case ARM64_WORKAROUND_NVIDIA_CARMEL_CNP:
+ return IS_ENABLED(CONFIG_NVIDIA_CARMEL_CNP_ERRATUM);
}
return true;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index a65c6f00deec5..d1509082b6df5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1629,7 +1629,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
if (is_kdump_kernel())
return false;
- if (cpus_have_const_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
+ if (cpus_have_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
return false;
return has_cpuid_feature(entry, scope);
--
2.30.2
* [PATCH v2 37/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (36 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 36/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
2023-10-05 9:50 ` [PATCH v2 38/38] arm64: Remove cpus_have_const_cap() Mark Rutland
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
In arch_tlbbatch_should_defer() we use cpus_have_const_cap() to check
for ARM64_WORKAROUND_REPEAT_TLBI, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpus_have_const_cap() check in arch_tlbbatch_should_defer() is an
optimization to avoid some redundant work when the
ARM64_WORKAROUND_REPEAT_TLBI cpucap is detected and forces the immediate
use of TLBI + DSB ISH. In the window between detecting the
ARM64_WORKAROUND_REPEAT_TLBI cpucap and patching alternatives this is
not a big concern and there's no need to optimize this window at the
expense of subsequent usage at runtime.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The ARM64_WORKAROUND_REPEAT_TLBI cpucap is added to
cpucap_is_possible() so that code can be elided entirely when this is
not possible without requiring ifdeffery or IS_ENABLED() checks at each
usage.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpucaps.h | 2 ++
arch/arm64/include/asm/tlbflush.h | 5 ++---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 7ddb79c235c27..270680e2b5c4a 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -56,6 +56,8 @@ cpucap_is_possible(const unsigned int cap)
return IS_ENABLED(CONFIG_CAVIUM_ERRATUM_23154);
case ARM64_WORKAROUND_NVIDIA_CARMEL_CNP:
return IS_ENABLED(CONFIG_NVIDIA_CARMEL_CNP_ERRATUM);
+ case ARM64_WORKAROUND_REPEAT_TLBI:
+ return IS_ENABLED(CONFIG_ARM64_WORKAROUND_REPEAT_TLBI);
}
return true;
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 53ed194626e1a..7aa476a52180a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -284,16 +284,15 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
{
-#ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI
/*
* TLB flush deferral is not required on systems which are affected by
* ARM64_WORKAROUND_REPEAT_TLBI, as __tlbi()/__tlbi_user() implementation
* will have two consecutive TLBI instructions with a dsb(ish) in between
* defeating the purpose (i.e save overall 'dsb ish' cost).
*/
- if (unlikely(cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI)))
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_REPEAT_TLBI))
return false;
-#endif
+
return true;
}
--
2.30.2
* [PATCH v2 38/38] arm64: Remove cpus_have_const_cap()
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
` (37 preceding siblings ...)
2023-10-05 9:50 ` [PATCH v2 37/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI Mark Rutland
@ 2023-10-05 9:50 ` Mark Rutland
38 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 9:50 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko,
mark.rutland, maz, oliver.upton, pcc, sstabellini, suzuki.poulose,
tglx, vladimir.murzin, will
There are no longer any users of cpus_have_const_cap(), and therefore it
can be removed.
Remove cpus_have_const_cap(). At the same time, remove
__cpus_have_const_cap(), as this is a trivial wrapper of
alternative_has_cap_unlikely(), which can be used directly instead.
The comment for __system_matches_cap() is updated to no longer refer to
cpus_have_const_cap(). As we have a number of ways to check the cpucaps,
the specific suggestions are removed.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cpufeature.h | 38 ++---------------------------
arch/arm64/kernel/cpufeature.c | 1 -
2 files changed, 2 insertions(+), 37 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e1f011f9ea20a..396af9b9c857a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -462,19 +462,6 @@ static __always_inline bool cpus_have_cap(unsigned int num)
return arch_test_bit(num, system_cpucaps);
}
-/*
- * Test for a capability without a runtime check.
- *
- * Before capabilities are finalized, this returns false.
- * After capabilities are finalized, this is patched to avoid a runtime check.
- *
- * @num must be a compile-time constant.
- */
-static __always_inline bool __cpus_have_const_cap(int num)
-{
- return alternative_has_cap_unlikely(num);
-}
-
/*
* Test for a capability without a runtime check.
*
@@ -487,7 +474,7 @@ static __always_inline bool __cpus_have_const_cap(int num)
static __always_inline bool cpus_have_final_boot_cap(int num)
{
if (boot_capabilities_finalized())
- return __cpus_have_const_cap(num);
+ return alternative_has_cap_unlikely(num);
else
BUG();
}
@@ -504,32 +491,11 @@ static __always_inline bool cpus_have_final_boot_cap(int num)
static __always_inline bool cpus_have_final_cap(int num)
{
if (system_capabilities_finalized())
- return __cpus_have_const_cap(num);
+ return alternative_has_cap_unlikely(num);
else
BUG();
}
-/*
- * Test for a capability, possibly with a runtime check for non-hyp code.
- *
- * For hyp code, this behaves the same as cpus_have_final_cap().
- *
- * For non-hyp code:
- * Before capabilities are finalized, this behaves as cpus_have_cap().
- * After capabilities are finalized, this is patched to avoid a runtime check.
- *
- * @num must be a compile-time constant.
- */
-static __always_inline bool cpus_have_const_cap(int num)
-{
- if (is_hyp_code())
- return cpus_have_final_cap(num);
- else if (system_capabilities_finalized())
- return __cpus_have_const_cap(num);
- else
- return cpus_have_cap(num);
-}
-
static inline int __attribute_const__
cpuid_feature_extract_signed_field_width(u64 features, int field, int width)
{
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d1509082b6df5..cdc4476c9b8e6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3320,7 +3320,6 @@ EXPORT_SYMBOL_GPL(this_cpu_has_cap);
* This helper function is used in a narrow window when,
* - The system wide safe registers are set with all the SMP CPUs and,
* - The SYSTEM_FEATURE system_cpucaps may not have been set.
- * In all other cases cpus_have_{const_}cap() should be used.
*/
static bool __maybe_unused __system_matches_cap(unsigned int n)
{
--
2.30.2
* Re: [PATCH v2 03/38] arm64: Factor out cpucap definitions
2023-10-05 9:49 ` [PATCH v2 03/38] arm64: Factor out cpucap definitions Mark Rutland
@ 2023-10-05 12:29 ` Mark Rutland
0 siblings, 0 replies; 41+ messages in thread
From: Mark Rutland @ 2023-10-05 12:29 UTC (permalink / raw)
To: linux-arm-kernel
Cc: ardb, bertrand.marquis, boris.ostrovsky, broonie, catalin.marinas,
daniel.lezcano, james.morse, jgross, kristina.martsenko, maz,
oliver.upton, pcc, sstabellini, suzuki.poulose, tglx,
vladimir.murzin, will
On Thu, Oct 05, 2023 at 10:49:49AM +0100, Mark Rutland wrote:
> For clarity it would be nice to factor cpucap manipulation out of
> <asm/cpufeature.h>, and the obvious place would be <asm/cpucap.h>, but
> this will clash somewhat with <generated/asm/cpucaps.h>.
>
> Rename <generated/asm/cpucaps.h> to <generated/asm/cpucap-defs.h>,
> matching what we do for <generated/asm/sysreg-defs.h>, and introduce a
> new <asm/cpucaps.h> which includes the generated header.
>
> Subsequent patches will fill out <asm/cpucaps.h>.
>
> There should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
> arch/arm64/include/asm/cpucaps.h | 8 ++++++++
> arch/arm64/tools/Makefile | 4 ++--
> arch/arm64/tools/gen-cpucaps.awk | 6 +++---
> 3 files changed, 13 insertions(+), 5 deletions(-)
> create mode 100644 arch/arm64/include/asm/cpucaps.h
I've just realised that I forgot to update arch/arm64/include/asm/Kbuild, and
as-is this will cause cpucap-defs.h to be removed and regenerated on every
subsequent build, which isn't desirable.
I'll fold in the below for v3 to fix that:
| diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
| index 5c8ee5a541d20..4b6d2d52053e4 100644
| --- a/arch/arm64/include/asm/Kbuild
| +++ b/arch/arm64/include/asm/Kbuild
| @@ -6,5 +6,5 @@ generic-y += qspinlock.h
| generic-y += parport.h
| generic-y += user.h
|
| -generated-y += cpucaps.h
| +generated-y += cpucap-defs.h
| generated-y += sysreg-defs.h
Mark.
>
> diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
> new file mode 100644
> index 0000000000000..7333b5bbf4488
> --- /dev/null
> +++ b/arch/arm64/include/asm/cpucaps.h
> @@ -0,0 +1,8 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#ifndef __ASM_CPUCAPS_H
> +#define __ASM_CPUCAPS_H
> +
> +#include <asm/cpucap-defs.h>
> +
> +#endif /* __ASM_CPUCAPS_H */
> diff --git a/arch/arm64/tools/Makefile b/arch/arm64/tools/Makefile
> index 07a93ab21a62b..fa2251d9762d9 100644
> --- a/arch/arm64/tools/Makefile
> +++ b/arch/arm64/tools/Makefile
> @@ -3,7 +3,7 @@
> gen := arch/$(ARCH)/include/generated
> kapi := $(gen)/asm
>
> -kapi-hdrs-y := $(kapi)/cpucaps.h $(kapi)/sysreg-defs.h
> +kapi-hdrs-y := $(kapi)/cpucap-defs.h $(kapi)/sysreg-defs.h
>
> targets += $(addprefix ../../../, $(kapi-hdrs-y))
>
> @@ -17,7 +17,7 @@ quiet_cmd_gen_cpucaps = GEN $@
> quiet_cmd_gen_sysreg = GEN $@
> cmd_gen_sysreg = mkdir -p $(dir $@); $(AWK) -f $(real-prereqs) > $@
>
> -$(kapi)/cpucaps.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
> +$(kapi)/cpucap-defs.h: $(src)/gen-cpucaps.awk $(src)/cpucaps FORCE
> $(call if_changed,gen_cpucaps)
>
> $(kapi)/sysreg-defs.h: $(src)/gen-sysreg.awk $(src)/sysreg FORCE
> diff --git a/arch/arm64/tools/gen-cpucaps.awk b/arch/arm64/tools/gen-cpucaps.awk
> index 8525980379d71..2f4f61a0af17e 100755
> --- a/arch/arm64/tools/gen-cpucaps.awk
> +++ b/arch/arm64/tools/gen-cpucaps.awk
> @@ -15,8 +15,8 @@ function fatal(msg) {
> /^#/ { next }
>
> BEGIN {
> - print "#ifndef __ASM_CPUCAPS_H"
> - print "#define __ASM_CPUCAPS_H"
> + print "#ifndef __ASM_CPUCAP_DEFS_H"
> + print "#define __ASM_CPUCAP_DEFS_H"
> print ""
> print "/* Generated file - do not edit */"
> cap_num = 0
> @@ -31,7 +31,7 @@ BEGIN {
> END {
> printf("#define ARM64_NCAPS\t\t\t\t\t%d\n", cap_num)
> print ""
> - print "#endif /* __ASM_CPUCAPS_H */"
> + print "#endif /* __ASM_CPUCAP_DEFS_H */"
> }
>
> # Any lines not handled by previous rules are unexpected
> --
> 2.30.2
>
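As a quick end-to-end check of the rename (again only a sketch; the path is
the one generated by the updated tools Makefile, and the guard lines are the
ones printed by the updated gen-cpucaps.awk):
| $ head -n 2 arch/arm64/include/generated/asm/cpucap-defs.h
| #ifndef __ASM_CPUCAP_DEFS_H
| #define __ASM_CPUCAP_DEFS_H
| $ tail -n 1 arch/arm64/include/generated/asm/cpucap-defs.h
| #endif /* __ASM_CPUCAP_DEFS_H */
Everything that includes <asm/cpucaps.h> picks these definitions up via the
new wrapper, so no other includes need to change.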
end of thread, other threads: [~2023-10-05 12:30 UTC | newest]
Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
2023-10-05 9:49 [PATCH v2 00/38] arm64: Remove cpus_have_const_cap() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 01/38] clocksource/drivers/arm_arch_timer: Initialize evtstrm after finalizing cpucaps Mark Rutland
2023-10-05 9:49 ` [PATCH v2 02/38] arm64/arm: xen: enlighten: Fix KPTI checks Mark Rutland
2023-10-05 9:49 ` [PATCH v2 03/38] arm64: Factor out cpucap definitions Mark Rutland
2023-10-05 12:29 ` Mark Rutland
2023-10-05 9:49 ` [PATCH v2 04/38] arm64: Add cpucap_is_possible() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 05/38] arm64: Add cpus_have_final_boot_cap() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 06/38] arm64: Rework setup_cpu_features() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 07/38] arm64: Fixup user features at boot time Mark Rutland
2023-10-05 9:49 ` [PATCH v2 08/38] arm64: Split kpti_install_ng_mappings() Mark Rutland
2023-10-05 9:49 ` [PATCH v2 09/38] arm64: kvm: Use cpus_have_final_cap() explicitly Mark Rutland
2023-10-05 9:49 ` [PATCH v2 10/38] arm64: Explicitly save/restore CPACR when probing SVE and SME Mark Rutland
2023-10-05 9:49 ` [PATCH v2 11/38] arm64: use build-time assertions for cpucap ordering Mark Rutland
2023-10-05 9:49 ` [PATCH v2 11/38] arm64: Use build-time assertions for cpucap ordering Mark Rutland
2023-10-05 9:49 ` [PATCH v2 12/38] arm64: Rename SVE/SME cpu_enable functions Mark Rutland
2023-10-05 9:50 ` [PATCH v2 13/38] arm64: Use a positive cpucap for FP/SIMD Mark Rutland
2023-10-05 9:50 ` [PATCH v2 14/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_{ADDRESS,GENERIC}_AUTH Mark Rutland
2023-10-05 9:50 ` [PATCH v2 15/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_ARMv8_4_TTL Mark Rutland
2023-10-05 9:50 ` [PATCH v2 16/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_BTI Mark Rutland
2023-10-05 9:50 ` [PATCH v2 17/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CACHE_DIC Mark Rutland
2023-10-05 9:50 ` [PATCH v2 18/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_CNP Mark Rutland
2023-10-05 9:50 ` [PATCH v2 19/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT Mark Rutland
2023-10-05 9:50 ` [PATCH v2 20/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING Mark Rutland
2023-10-05 9:50 ` [PATCH v2 21/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN Mark Rutland
2023-10-05 9:50 ` [PATCH v2 22/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN Mark Rutland
2023-10-05 9:50 ` [PATCH v2 23/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG Mark Rutland
2023-10-05 9:50 ` [PATCH v2 24/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT Mark Rutland
2023-10-05 9:50 ` [PATCH v2 25/38] arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE Mark Rutland
2023-10-05 9:50 ` [PATCH v2 26/38] arm64: Avoid cpus_have_const_cap() for ARM64_MTE Mark Rutland
2023-10-05 9:50 ` [PATCH v2 27/38] arm64: Avoid cpus_have_const_cap() for ARM64_SSBS Mark Rutland
2023-10-05 9:50 ` [PATCH v2 28/38] arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 29/38] arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64} Mark Rutland
2023-10-05 9:50 ` [PATCH v2 30/38] arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 31/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 32/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 33/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 34/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 35/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154 Mark Rutland
2023-10-05 9:50 ` [PATCH v2 36/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP Mark Rutland
2023-10-05 9:50 ` [PATCH v2 37/38] arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI Mark Rutland
2023-10-05 9:50 ` [PATCH v2 38/38] arm64: Remove cpus_have_const_cap() Mark Rutland