* [PATCH v2 0/7] arm64: Make EFI calls preemptible
@ 2025-09-05 13:30 Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 1/7] efi: Add missing static initializer for efi_mm::cpus_allowed_lock Ard Biesheuvel
` (8 more replies)
0 siblings, 9 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
The arm64 port permits the use of the baseline FP/SIMD register file in
kernel mode, and no longer requires preemption to be disabled. Now that
the EFI spec is being clarified to state that EFI runtime services may
only use baseline FP/SIMD, the fact that EFI code may use FP/SIMD
registers (while executing at the same privilege level as the kernel) is
no longer a reason to disable preemption when invoking them.
This means that the only remaining reason for disabling preemption is
the fact that the active mm is swapped out and replaced with efi_mm in a
way that is hidden from the scheduler, and so scheduling is not
supported currently. However, given that virtually all (*) EFI runtime
calls are made from the efi_rts_wq workqueue, the efi_mm can simply be
loaded into the workqueue worker kthread while the call is in progress,
and this does not require preemption to be disabled.
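As a rough sketch of that idea (not the actual patch: efi_call_example() and
its calling convention are made up for illustration; kthread_use_mm(),
kthread_unuse_mm(), migrate_disable()/migrate_enable() and efi_mm are the
existing kernel symbols this relies on), the worker-side handling amounts to:

#include <linux/efi.h>
#include <linux/kthread.h>
#include <linux/preempt.h>

static efi_status_t efi_call_example(efi_status_t (*svc)(void))
{
	efi_status_t status;

	/* Stay on the same CPU if the firmware call gets preempted */
	migrate_disable();

	/* Adopt efi_mm in a way that is visible to the scheduler */
	kthread_use_mm(&efi_mm);

	status = svc();

	kthread_unuse_mm(&efi_mm);
	migrate_enable();

	return status;
}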
Note that this is only a partial solution in terms of RT guarantees,
given that the runtime services execute at the same privilege level as
the kernel, and can therefore disable interrupts (and therefore
preemption) directly. But it should prevent scheduling latency spikes
for EFI calls that simply take a long time to run to completion.
Changes since v1/RFC:
- Disable uaccess for SWPAN before updating the preserved TTBR0 value
- Document why disabling migration is needed
- Rebase onto v6.17-rc1
(*) only efi_reset_system() and EFI pstore invoke EFI runtime services
without going through the workqueue, and the latter only when saving
a kernel oops log to the EFI varstore
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Ard Biesheuvel (7):
efi: Add missing static initializer for efi_mm::cpus_allowed_lock
efi/runtime: Return success/failure from arch_efi_call_virt_setup()
efi/runtime: Deal with arch_efi_call_virt_setup() returning failure
arm64/fpsimd: Don't warn when EFI execution context is preemptible
arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state
arm64/efi: Move uaccess en/disable out of efi_set_pgd()
arm64/efi: Call EFI runtime services without disabling preemption
arch/arm/include/asm/efi.h | 2 +-
arch/arm64/include/asm/efi.h | 15 ++----
arch/arm64/kernel/efi.c | 57 +++++++++++++++++---
arch/arm64/kernel/fpsimd.c | 4 +-
arch/loongarch/include/asm/efi.h | 2 +-
arch/riscv/include/asm/efi.h | 2 +-
arch/x86/include/asm/efi.h | 2 +-
arch/x86/platform/efi/efi_32.c | 3 +-
arch/x86/platform/efi/efi_64.c | 3 +-
arch/x86/platform/uv/bios_uv.c | 3 +-
drivers/firmware/efi/efi.c | 3 ++
drivers/firmware/efi/riscv-runtime.c | 3 +-
drivers/firmware/efi/runtime-wrappers.c | 20 ++++---
include/linux/efi.h | 8 +--
14 files changed, 89 insertions(+), 38 deletions(-)
base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 1/7] efi: Add missing static initializer for efi_mm::cpus_allowed_lock
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 2/7] efi/runtime: Return success/failure from arch_efi_call_virt_setup() Ard Biesheuvel
` (7 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra, stable
From: Ard Biesheuvel <ardb@kernel.org>
Initialize the cpus_allowed_lock struct member of efi_mm.
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
drivers/firmware/efi/efi.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index 1ce428e2ac8a..fc407d891348 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -74,6 +74,9 @@ struct mm_struct efi_mm = {
.page_table_lock = __SPIN_LOCK_UNLOCKED(efi_mm.page_table_lock),
.mmlist = LIST_HEAD_INIT(efi_mm.mmlist),
.cpu_bitmap = { [BITS_TO_LONGS(NR_CPUS)] = 0},
+#ifdef CONFIG_SCHED_MM_CID
+ .cpus_allowed_lock = __RAW_SPIN_LOCK_UNLOCKED(efi_mm.cpus_allowed_lock),
+#endif
};
struct workqueue_struct *efi_rts_wq;
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 2/7] efi/runtime: Return success/failure from arch_efi_call_virt_setup()
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 1/7] efi: Add missing static initializer for efi_mm::cpus_allowed_lock Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 3/7] efi/runtime: Deal with arch_efi_call_virt_setup() returning failure Ard Biesheuvel
` (6 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
Permit the arch glue to signal failure from arch_efi_call_virt_setup().
This makes it possible to use sleeping locks in the call wrappers, which
in turn will allow EFI runtime services to be invoked without disabling
preemption.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm/include/asm/efi.h | 2 +-
arch/arm64/include/asm/efi.h | 2 +-
arch/arm64/kernel/efi.c | 3 ++-
arch/loongarch/include/asm/efi.h | 2 +-
arch/riscv/include/asm/efi.h | 2 +-
arch/x86/include/asm/efi.h | 2 +-
arch/x86/platform/efi/efi_32.c | 3 ++-
arch/x86/platform/efi/efi_64.c | 3 ++-
drivers/firmware/efi/riscv-runtime.c | 3 ++-
9 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/arch/arm/include/asm/efi.h b/arch/arm/include/asm/efi.h
index e408399d5f0e..0809a69bb579 100644
--- a/arch/arm/include/asm/efi.h
+++ b/arch/arm/include/asm/efi.h
@@ -23,7 +23,7 @@ void arm_efi_init(void);
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md, bool);
-#define arch_efi_call_virt_setup() efi_virtmap_load()
+#define arch_efi_call_virt_setup() (efi_virtmap_load(), true)
#define arch_efi_call_virt_teardown() efi_virtmap_unload()
#ifdef CONFIG_CPU_TTBR0_PAN
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index bcd5622aa096..decf87777f57 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -37,7 +37,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md,
extern u64 *efi_rt_stack_top;
efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
-void arch_efi_call_virt_setup(void);
+bool arch_efi_call_virt_setup(void);
void arch_efi_call_virt_teardown(void);
/*
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 6c371b158b99..9b03f3d77a25 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -167,11 +167,12 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
static DEFINE_RAW_SPINLOCK(efi_rt_lock);
-void arch_efi_call_virt_setup(void)
+bool arch_efi_call_virt_setup(void)
{
efi_virtmap_load();
raw_spin_lock(&efi_rt_lock);
__efi_fpsimd_begin();
+ return true;
}
void arch_efi_call_virt_teardown(void)
diff --git a/arch/loongarch/include/asm/efi.h b/arch/loongarch/include/asm/efi.h
index eddc8e79b3fa..84cf2151123f 100644
--- a/arch/loongarch/include/asm/efi.h
+++ b/arch/loongarch/include/asm/efi.h
@@ -14,7 +14,7 @@ void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
#define ARCH_EFI_IRQ_FLAGS_MASK 0x00000004 /* Bit 2: CSR.CRMD.IE */
-#define arch_efi_call_virt_setup()
+#define arch_efi_call_virt_setup() true
#define arch_efi_call_virt_teardown()
#define EFI_ALLOC_ALIGN SZ_64K
diff --git a/arch/riscv/include/asm/efi.h b/arch/riscv/include/asm/efi.h
index 46a355913b27..a7b4d719e7be 100644
--- a/arch/riscv/include/asm/efi.h
+++ b/arch/riscv/include/asm/efi.h
@@ -40,7 +40,7 @@ static inline unsigned long efi_get_kimg_min_align(void)
#define EFI_KIMG_PREFERRED_ADDRESS efi_get_kimg_min_align()
-void arch_efi_call_virt_setup(void);
+bool arch_efi_call_virt_setup(void);
void arch_efi_call_virt_teardown(void);
unsigned long stext_offset(void);
diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index f227a70ac91f..879c8402e024 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -140,7 +140,7 @@ extern void efi_delete_dummy_variable(void);
extern void efi_crash_gracefully_on_page_fault(unsigned long phys_addr);
extern void efi_free_boot_services(void);
-void arch_efi_call_virt_setup(void);
+bool arch_efi_call_virt_setup(void);
void arch_efi_call_virt_teardown(void);
extern u64 efi_setup;
diff --git a/arch/x86/platform/efi/efi_32.c b/arch/x86/platform/efi/efi_32.c
index b2cc7b4552a1..215f16ce84ab 100644
--- a/arch/x86/platform/efi/efi_32.c
+++ b/arch/x86/platform/efi/efi_32.c
@@ -141,10 +141,11 @@ void __init efi_runtime_update_mappings(void)
}
}
-void arch_efi_call_virt_setup(void)
+bool arch_efi_call_virt_setup(void)
{
efi_fpu_begin();
firmware_restrict_branch_speculation_start();
+ return true;
}
void arch_efi_call_virt_teardown(void)
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index b4409df2105a..d4b1e70f41fa 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -443,12 +443,13 @@ static void efi_leave_mm(void)
unuse_temporary_mm(efi_prev_mm);
}
-void arch_efi_call_virt_setup(void)
+bool arch_efi_call_virt_setup(void)
{
efi_sync_low_kernel_mappings();
efi_fpu_begin();
firmware_restrict_branch_speculation_start();
efi_enter_mm();
+ return true;
}
void arch_efi_call_virt_teardown(void)
diff --git a/drivers/firmware/efi/riscv-runtime.c b/drivers/firmware/efi/riscv-runtime.c
index fa71cd898120..07e04b8f982a 100644
--- a/drivers/firmware/efi/riscv-runtime.c
+++ b/drivers/firmware/efi/riscv-runtime.c
@@ -142,10 +142,11 @@ static void efi_virtmap_unload(void)
preempt_enable();
}
-void arch_efi_call_virt_setup(void)
+bool arch_efi_call_virt_setup(void)
{
sync_kernel_mappings(efi_mm.pgd);
efi_virtmap_load();
+ return true;
}
void arch_efi_call_virt_teardown(void)
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 3/7] efi/runtime: Deal with arch_efi_call_virt_setup() returning failure
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 1/7] efi: Add missing static initializer for efi_mm::cpus_allowed_lock Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 2/7] efi/runtime: Return success/failure from arch_efi_call_virt_setup() Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 4/7] arm64/fpsimd: Don't warn when EFI execution context is preemptible Ard Biesheuvel
` (5 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
Deal with arch_efi_call_virt_setup() returning failure by giving up and
returning an appropriate error code to the caller.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/x86/platform/uv/bios_uv.c | 3 ++-
drivers/firmware/efi/runtime-wrappers.c | 20 +++++++++++++-------
include/linux/efi.h | 8 ++++----
3 files changed, 19 insertions(+), 12 deletions(-)
diff --git a/arch/x86/platform/uv/bios_uv.c b/arch/x86/platform/uv/bios_uv.c
index bf31af3d32d6..a442bbe5b1c2 100644
--- a/arch/x86/platform/uv/bios_uv.c
+++ b/arch/x86/platform/uv/bios_uv.c
@@ -32,7 +32,8 @@ static s64 __uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3,
*/
return BIOS_STATUS_UNIMPLEMENTED;
- ret = efi_call_virt_pointer(tab, function, (u64)which, a1, a2, a3, a4, a5);
+ ret = efi_call_virt_pointer(tab, function, BIOS_STATUS_UNIMPLEMENTED,
+ (u64)which, a1, a2, a3, a4, a5);
return ret;
}
diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
index 708b777857d3..82a27b414485 100644
--- a/drivers/firmware/efi/runtime-wrappers.c
+++ b/drivers/firmware/efi/runtime-wrappers.c
@@ -219,7 +219,10 @@ static void __nocfi efi_call_rts(struct work_struct *work)
efi_status_t status = EFI_NOT_FOUND;
unsigned long flags;
- arch_efi_call_virt_setup();
+ if (!arch_efi_call_virt_setup()) {
+ status = EFI_NOT_READY;
+ goto out;
+ }
flags = efi_call_virt_save_flags();
switch (efi_rts_work.efi_rts_id) {
@@ -308,6 +311,7 @@ static void __nocfi efi_call_rts(struct work_struct *work)
efi_call_virt_check_flags(flags, efi_rts_work.caller);
arch_efi_call_virt_teardown();
+out:
efi_rts_work.status = status;
complete(&efi_rts_work.efi_rts_comp);
}
@@ -444,8 +448,8 @@ virt_efi_set_variable_nb(efi_char16_t *name, efi_guid_t *vendor, u32 attr,
if (down_trylock(&efi_runtime_lock))
return EFI_NOT_READY;
- status = efi_call_virt_pointer(efi.runtime, set_variable, name, vendor,
- attr, data_size, data);
+ status = efi_call_virt_pointer(efi.runtime, set_variable, EFI_NOT_READY,
+ name, vendor, attr, data_size, data);
up(&efi_runtime_lock);
return status;
}
@@ -481,9 +485,9 @@ virt_efi_query_variable_info_nb(u32 attr, u64 *storage_space,
if (down_trylock(&efi_runtime_lock))
return EFI_NOT_READY;
- status = efi_call_virt_pointer(efi.runtime, query_variable_info, attr,
- storage_space, remaining_space,
- max_variable_size);
+ status = efi_call_virt_pointer(efi.runtime, query_variable_info,
+ EFI_NOT_READY, attr, storage_space,
+ remaining_space, max_variable_size);
up(&efi_runtime_lock);
return status;
}
@@ -509,12 +513,14 @@ virt_efi_reset_system(int reset_type, efi_status_t status,
return;
}
- arch_efi_call_virt_setup();
+ if (!arch_efi_call_virt_setup())
+ goto out;
efi_rts_work.efi_rts_id = EFI_RESET_SYSTEM;
arch_efi_call_virt(efi.runtime, reset_system, reset_type, status,
data_size, data);
arch_efi_call_virt_teardown();
+out:
up(&efi_runtime_lock);
}
diff --git a/include/linux/efi.h b/include/linux/efi.h
index a98cc39e7aaa..325d892e559b 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -1181,19 +1181,19 @@ static inline void efi_check_for_embedded_firmwares(void) { }
* Restores the usual kernel environment once the call has returned.
*/
-#define efi_call_virt_pointer(p, f, args...) \
+#define efi_call_virt_pointer(p, f, busy, args...) \
({ \
- typeof((p)->f(args)) __s; \
+ typeof((p)->f(args)) __s = (busy); \
unsigned long __flags; \
\
- arch_efi_call_virt_setup(); \
+ if (!arch_efi_call_virt_setup()) goto __out; \
\
__flags = efi_call_virt_save_flags(); \
__s = arch_efi_call_virt(p, f, args); \
efi_call_virt_check_flags(__flags, NULL); \
\
arch_efi_call_virt_teardown(); \
- \
+__out: \
__s; \
})
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 4/7] arm64/fpsimd: Don't warn when EFI execution context is preemptible
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
` (2 preceding siblings ...)
2025-09-05 13:30 ` [PATCH v2 3/7] efi/runtime: Deal with arch_efi_call_virt_setup() returning failure Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state Ard Biesheuvel
` (4 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
Kernel mode FP/SIMD no longer requires preemption to be disabled, so
only warn about FP/SIMD use from preemptible context when the fallback
path is taken, i.e., in the cases where kernel mode NEON is not
allowed.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/fpsimd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index c37f02d7194e..d26a02ea2bb9 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1933,11 +1933,11 @@ void __efi_fpsimd_begin(void)
if (!system_supports_fpsimd())
return;
- WARN_ON(preemptible());
-
if (may_use_simd()) {
kernel_neon_begin();
} else {
+ WARN_ON(preemptible());
+
/*
* If !efi_sve_state, SVE can't be in use yet and doesn't need
* preserving:
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
` (3 preceding siblings ...)
2025-09-05 13:30 ` [PATCH v2 4/7] arm64/fpsimd: Don't warn when EFI execution context is preemptible Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 13:44 ` Peter Zijlstra
2025-09-05 13:30 ` [PATCH v2 6/7] arm64/efi: Move uaccess en/disable out of efi_set_pgd() Ard Biesheuvel
` (3 subsequent siblings)
8 siblings, 1 reply; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
Replace the spinlock in the arm64 glue code with a semaphore, so that
the CPU can be preempted while running the EFI runtime service.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/efi.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 9b03f3d77a25..8b999c07c7d1 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -165,12 +165,19 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
return s;
}
-static DEFINE_RAW_SPINLOCK(efi_rt_lock);
+static DEFINE_SEMAPHORE(efi_rt_lock, 1);
bool arch_efi_call_virt_setup(void)
{
+ /*
+ * This might be called from a non-sleepable context so try to take the
+ * lock but don't block on it. This should never occur in practice, as
+ * all EFI runtime calls are serialized under the efi_runtime_lock.
+ */
+ if (WARN_ON(down_trylock(&efi_rt_lock)))
+ return false;
+
efi_virtmap_load();
- raw_spin_lock(&efi_rt_lock);
__efi_fpsimd_begin();
return true;
}
@@ -178,8 +185,8 @@ bool arch_efi_call_virt_setup(void)
void arch_efi_call_virt_teardown(void)
{
__efi_fpsimd_end();
- raw_spin_unlock(&efi_rt_lock);
efi_virtmap_unload();
+ up(&efi_rt_lock);
}
asmlinkage u64 *efi_rt_stack_top __ro_after_init;
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 6/7] arm64/efi: Move uaccess en/disable out of efi_set_pgd()
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
` (4 preceding siblings ...)
2025-09-05 13:30 ` [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 7/7] arm64/efi: Call EFI runtime services without disabling preemption Ard Biesheuvel
` (2 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
efi_set_pgd() will no longer be called when invoking EFI runtime
services via the efi_rts_wq work queue, but the uaccess enable/disable
calls are still needed when PAN is emulated using TTBR0 switching. So
move them into the callers.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/efi.h | 13 +++----------
arch/arm64/kernel/efi.c | 18 ++++++++++++++++++
2 files changed, 21 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index decf87777f57..09650b2e15af 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -126,21 +126,14 @@ static inline void efi_set_pgd(struct mm_struct *mm)
if (mm != current->active_mm) {
/*
* Update the current thread's saved ttbr0 since it is
- * restored as part of a return from exception. Enable
- * access to the valid TTBR0_EL1 and invoke the errata
- * workaround directly since there is no return from
- * exception when invoking the EFI run-time services.
+ * restored as part of a return from exception.
*/
update_saved_ttbr0(current, mm);
- uaccess_ttbr0_enable();
- post_ttbr_update_workaround();
} else {
/*
- * Defer the switch to the current thread's TTBR0_EL1
- * until uaccess_enable(). Restore the current
- * thread's saved ttbr0 corresponding to its active_mm
+ * Restore the current thread's saved ttbr0
+ * corresponding to its active_mm
*/
- uaccess_ttbr0_disable();
update_saved_ttbr0(current, current->active_mm);
}
}
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index 8b999c07c7d1..ece046bcf0db 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -178,6 +178,15 @@ bool arch_efi_call_virt_setup(void)
return false;
efi_virtmap_load();
+
+ /*
+ * Enable access to the valid TTBR0_EL1 and invoke the errata
+ * workaround directly since there is no return from exception when
+ * invoking the EFI run-time services.
+ */
+ uaccess_ttbr0_enable();
+ post_ttbr_update_workaround();
+
__efi_fpsimd_begin();
return true;
}
@@ -185,6 +194,15 @@ bool arch_efi_call_virt_setup(void)
void arch_efi_call_virt_teardown(void)
{
__efi_fpsimd_end();
+
+ /*
+ * Defer the switch to the current thread's TTBR0_EL1 until
+ * uaccess_enable(). Do so before efi_virtmap_unload() updates the
+ * saved TTBR0 value, so the userland page tables are not activated
+ * inadvertently over the back of an exception.
+ */
+ uaccess_ttbr0_disable();
+
efi_virtmap_unload();
up(&efi_rt_lock);
}
--
2.51.0.355.g5224444f11-goog
* [PATCH v2 7/7] arm64/efi: Call EFI runtime services without disabling preemption
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
` (5 preceding siblings ...)
2025-09-05 13:30 ` [PATCH v2 6/7] arm64/efi: Move uaccess en/disable out of efi_set_pgd() Ard Biesheuvel
@ 2025-09-05 13:30 ` Ard Biesheuvel
2025-09-05 15:45 ` [PATCH v2 0/7] arm64: Make EFI calls preemptible Yeoreum Yun
2025-09-15 8:52 ` Sebastian Andrzej Siewior
8 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:30 UTC (permalink / raw)
To: linux-efi
Cc: linux-kernel, linux-arm-kernel, Ard Biesheuvel, Will Deacon,
Mark Rutland, Sebastian Andrzej Siewior, Peter Zijlstra
From: Ard Biesheuvel <ardb@kernel.org>
The only remaining reason why EFI runtime services are invoked with
preemption disabled is the fact that the mm is swapped out behind the
back of the context switching code.
The kernel no longer disables preemption in kernel_neon_begin().
Furthermore, the EFI spec is being clarified to explicitly state that
only baseline FP/SIMD is permitted in EFI runtime service
implementations, and so the existing kernel mode NEON context switching
code is sufficient to preserve and restore the execution context of an
in-progress EFI runtime service call.
Most EFI calls are made from the efi_rts_wq, which is serviced by a
kthread. As kthreads never return to user space, they usually don't have
an mm, and so we can use the existing infrastructure to swap in the
efi_mm while the EFI call is in progress. This is visible to the
scheduler, which will therefore reactivate the selected mm when
switching out the kthread and back in again.
Given that the EFI spec explicitly permits runtime services to be called
with interrupts enabled, firmware code is already required to tolerate
interruptions. So rather than disable preemption, disable only migration
so that EFI runtime services are less likely to cause scheduling delays.
To avoid potential issues where runtime services are interrupted while
polling the secure firmware for async completions, keep migration
disabled so that a runtime service invocation does not resume on a
different CPU from the one it was started on.
Note, though, that the firmware executes at the same privilege level as
the kernel, and is therefore able to disable interrupts altogether.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/efi.c | 23 ++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index ece046bcf0db..cf62980006ea 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -10,6 +10,7 @@
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/kmemleak.h>
+#include <linux/kthread.h>
#include <linux/screen_info.h>
#include <linux/vmalloc.h>
@@ -177,7 +178,19 @@ bool arch_efi_call_virt_setup(void)
if (WARN_ON(down_trylock(&efi_rt_lock)))
return false;
- efi_virtmap_load();
+ if (preemptible() && (current->flags & PF_KTHREAD)) {
+ /*
+ * Disable migration to ensure that a preempted EFI runtime
+ * service call will be resumed on the same CPU. This avoids
+ * potential issues with EFI runtime calls that are preempted
+ * while polling for an asynchronous completion of a secure
+ * firmware call, which may not permit the CPU to change.
+ */
+ migrate_disable();
+ kthread_use_mm(&efi_mm);
+ } else {
+ efi_virtmap_load();
+ }
/*
* Enable access to the valid TTBR0_EL1 and invoke the errata
@@ -203,7 +216,13 @@ void arch_efi_call_virt_teardown(void)
*/
uaccess_ttbr0_disable();
- efi_virtmap_unload();
+ if (preemptible() && (current->flags & PF_KTHREAD)) {
+ kthread_unuse_mm(&efi_mm);
+ migrate_enable();
+ } else {
+ efi_virtmap_unload();
+ }
+
up(&efi_rt_lock);
}
--
2.51.0.355.g5224444f11-goog
* Re: [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state
2025-09-05 13:30 ` [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state Ard Biesheuvel
@ 2025-09-05 13:44 ` Peter Zijlstra
2025-09-05 13:54 ` Ard Biesheuvel
0 siblings, 1 reply; 14+ messages in thread
From: Peter Zijlstra @ 2025-09-05 13:44 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-efi, linux-kernel, linux-arm-kernel, Ard Biesheuvel,
Will Deacon, Mark Rutland, Sebastian Andrzej Siewior
On Fri, Sep 05, 2025 at 03:30:41PM +0200, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> Replace the spinlock in the arm64 glue code with a semaphore, so that
> the CPU can be preempted while running the EFI runtime service.
Gotta ask, why a semaphore and not a mutex?
* Re: [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state
2025-09-05 13:44 ` Peter Zijlstra
@ 2025-09-05 13:54 ` Ard Biesheuvel
2025-09-08 15:37 ` Peter Zijlstra
0 siblings, 1 reply; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-05 13:54 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Ard Biesheuvel, linux-efi, linux-kernel, linux-arm-kernel,
Will Deacon, Mark Rutland, Sebastian Andrzej Siewior
On Fri, 5 Sept 2025 at 15:44, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Sep 05, 2025 at 03:30:41PM +0200, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > Replace the spinlock in the arm64 glue code with a semaphore, so that
> > the CPU can be preempted while running the EFI runtime service.
>
> Gotta ask, why a semaphore and not a mutex?
Because mutex_trylock() is not permitted in interrupt context.
* Re: [PATCH v2 0/7] arm64: Make EFI calls preemptible
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
` (6 preceding siblings ...)
2025-09-05 13:30 ` [PATCH v2 7/7] arm64/efi: Call EFI runtime services without disabling preemption Ard Biesheuvel
@ 2025-09-05 15:45 ` Yeoreum Yun
2025-09-15 8:52 ` Sebastian Andrzej Siewior
8 siblings, 0 replies; 14+ messages in thread
From: Yeoreum Yun @ 2025-09-05 15:45 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-efi, linux-kernel, linux-arm-kernel, Ard Biesheuvel,
Will Deacon, Mark Rutland, Sebastian Andrzej Siewior,
Peter Zijlstra
This series looks good to me.
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
> From: Ard Biesheuvel <ardb@kernel.org>
>
> The arm64 port permits the use of the baseline FP/SIMD register file in
> kernel mode, and no longer requires preemption to be disabled. Now that
> the EFI spec is being clarified to state that EFI runtime services may
> only use baseline FP/SIMD, the fact that EFI code may use FP/SIMD
> registers (while executing at the same privilege level as the kernel) is
> no longer a reason to disable preemption when invoking them.
>
> This means that the only remaining reason for disabling preemption is
> the fact that the active mm is swapped out and replaced with efi_mm in a
> way that is hidden from the scheduler, and so scheduling is not
> supported currently. However, given that virtually all (*) EFI runtime
> calls are made from the efi_rts_wq workqueue, the efi_mm can simply be
> loaded into the workqueue worker kthread while the call is in progress,
> and this does not require preemption to be disabled.
>
> Note that this is only a partial solution in terms of RT guarantees,
> given that the runtime services execute at the same privilege level as
> the kernel, and can therefore disable interrupts (and therefore
> preemption) directly. But it should prevent scheduling latency spikes
> for EFI calls that simply take a long time to run to completion.
>
> Changes since v1/RFC:
> - Disable uaccess for SWPAN before updating the preserved TTBR0 value
> - Document why disabling migration is needed
> - Rebase onto v6.17-rc1
>
> (*) only efi_reset_system() and EFI pstore invoke EFI runtime services
> without going through the workqueue, and the latter only when saving
> a kernel oops log to the EFI varstore
>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
>
> Ard Biesheuvel (7):
> efi: Add missing static initializer for efi_mm::cpus_allowed_lock
> efi/runtime: Return success/failure from arch_efi_call_virt_setup()
> efi/runtime: Deal with arch_efi_call_virt_setup() returning failure
> arm64/fpsimd: Don't warn when EFI execution context is preemptible
> arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state
> arm64/efi: Move uaccess en/disable out of efi_set_pgd()
> arm64/efi: Call EFI runtime services without disabling preemption
>
> arch/arm/include/asm/efi.h | 2 +-
> arch/arm64/include/asm/efi.h | 15 ++----
> arch/arm64/kernel/efi.c | 57 +++++++++++++++++---
> arch/arm64/kernel/fpsimd.c | 4 +-
> arch/loongarch/include/asm/efi.h | 2 +-
> arch/riscv/include/asm/efi.h | 2 +-
> arch/x86/include/asm/efi.h | 2 +-
> arch/x86/platform/efi/efi_32.c | 3 +-
> arch/x86/platform/efi/efi_64.c | 3 +-
> arch/x86/platform/uv/bios_uv.c | 3 +-
> drivers/firmware/efi/efi.c | 3 ++
> drivers/firmware/efi/riscv-runtime.c | 3 +-
> drivers/firmware/efi/runtime-wrappers.c | 20 ++++---
> include/linux/efi.h | 8 +--
> 14 files changed, 89 insertions(+), 38 deletions(-)
>
>
> base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
> --
> 2.51.0.355.g5224444f11-goog
>
>
--
Sincerely,
Yeoreum Yun
* Re: [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state
2025-09-05 13:54 ` Ard Biesheuvel
@ 2025-09-08 15:37 ` Peter Zijlstra
0 siblings, 0 replies; 14+ messages in thread
From: Peter Zijlstra @ 2025-09-08 15:37 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Ard Biesheuvel, linux-efi, linux-kernel, linux-arm-kernel,
Will Deacon, Mark Rutland, Sebastian Andrzej Siewior
On Fri, Sep 05, 2025 at 03:54:55PM +0200, Ard Biesheuvel wrote:
> On Fri, 5 Sept 2025 at 15:44, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Fri, Sep 05, 2025 at 03:30:41PM +0200, Ard Biesheuvel wrote:
> > > From: Ard Biesheuvel <ardb@kernel.org>
> > >
> > > Replace the spinlock in the arm64 glue code with a semaphore, so that
> > > the CPU can be preempted while running the EFI runtime service.
> >
> > Gotta ask, why a semaphore and not a mutex?
>
> Because mutex_trylock() is not permitted in interrupt context.
Ah, true. Might make for a good comment near there.
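For illustration, such a comment next to the lock definition might read
(wording purely a sketch):

/*
 * This is a semaphore rather than a mutex because
 * arch_efi_call_virt_setup() may be reached from non-sleepable contexts
 * (e.g., efi_reset_system() or the EFI pstore path), where only a
 * trylock is attempted, and mutex_trylock() is not permitted in
 * interrupt context whereas down_trylock() is.
 */
static DEFINE_SEMAPHORE(efi_rt_lock, 1);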
* Re: [PATCH v2 0/7] arm64: Make EFI calls preemptible
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
` (7 preceding siblings ...)
2025-09-05 15:45 ` [PATCH v2 0/7] arm64: Make EFI calls preemptible Yeoreum Yun
@ 2025-09-15 8:52 ` Sebastian Andrzej Siewior
2025-09-15 9:05 ` Ard Biesheuvel
8 siblings, 1 reply; 14+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-09-15 8:52 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-efi, linux-kernel, linux-arm-kernel, Ard Biesheuvel,
Will Deacon, Mark Rutland, Peter Zijlstra
On 2025-09-05 15:30:36 [+0200], Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
…
> Note that this is only a partial solution in terms of RT guarantees,
> given that the runtime services execute at the same privilege level as
> the kernel, and can therefore disable interrupts (and therefore
> preemption) directly. But it should prevent scheduling latency spikes
> for EFI calls that simply take a long time to run to completion.
That sounds nice. There is no feature flag that could tell if a specific
EFI-call (or any) will disable interrupts, right? But if the source code
is available, you could check.
Sebastian
* Re: [PATCH v2 0/7] arm64: Make EFI calls preemptible
2025-09-15 8:52 ` Sebastian Andrzej Siewior
@ 2025-09-15 9:05 ` Ard Biesheuvel
0 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2025-09-15 9:05 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: Ard Biesheuvel, linux-efi, linux-kernel, linux-arm-kernel,
Will Deacon, Mark Rutland, Peter Zijlstra
Hi,
Thanks for taking a look.
On Mon, 15 Sept 2025 at 10:52, Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2025-09-05 15:30:36 [+0200], Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> …
> > Note that this is only a partial solution in terms of RT guarantees,
> > given that the runtime services execute at the same privilege level as
> > the kernel, and can therefore disable interrupts (and therefore
> > preemption) directly. But it should prevent scheduling latency spikes
> > for EFI calls that simply take a long time to run to completion.
>
> That sounds nice. There is no feature flag that could tell if a specific
> EFI-call (or any) will disable interrupts, right?
Sadly, no. At runtime, the EFI APIs that manage this at a higher level
of abstraction are no longer available, and so the only available
option for firmware to ensure that code runs uninterrupted is to mask
interrupts at the CPU side. Everything else (timers, interrupt
controllers) is owned by the OS at this point, so runtime firmware
cannot touch it (even if it wanted to - it has no idea where
memory-mapped peripherals live in the OS's memory map that it runs
under).
It would be nice if we could sandbox this in a VM but that is not
straight-forward.
> But if the source code
> is available, you could check.
>
Even though much of the code is based on the public reference
implementation, the tweaks that require playing with interrupt
masking/unmasking are often part of the downstream, closed source
forks.
Thread overview: 14+ messages
2025-09-05 13:30 [PATCH v2 0/7] arm64: Make EFI calls preemptible Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 1/7] efi: Add missing static initializer for efi_mm::cpus_allowed_lock Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 2/7] efi/runtime: Return success/failure from arch_efi_call_virt_setup() Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 3/7] efi/runtime: Deal with arch_efi_call_virt_setup() returning failure Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 4/7] arm64/fpsimd: Don't warn when EFI execution context is preemptible Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 5/7] arm64/efi: Use a semaphore to protect the EFI stack and FP/SIMD state Ard Biesheuvel
2025-09-05 13:44 ` Peter Zijlstra
2025-09-05 13:54 ` Ard Biesheuvel
2025-09-08 15:37 ` Peter Zijlstra
2025-09-05 13:30 ` [PATCH v2 6/7] arm64/efi: Move uaccess en/disable out of efi_set_pgd() Ard Biesheuvel
2025-09-05 13:30 ` [PATCH v2 7/7] arm64/efi: Call EFI runtime services without disabling preemption Ard Biesheuvel
2025-09-05 15:45 ` [PATCH v2 0/7] arm64: Make EFI calls preemptible Yeoreum Yun
2025-09-15 8:52 ` Sebastian Andrzej Siewior
2025-09-15 9:05 ` Ard Biesheuvel