* [PATCH 01/10] KVM: arm64: Fix clobbered ELR in sync abort
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
@ 2024-03-14 20:23 ` Pierre-Clément Tosi
2024-03-17 12:26 ` Marc Zyngier
2024-03-14 20:23 ` [PATCH 02/10] KVM: arm64: Fix __pkvm_init_switch_pgd C signature Pierre-Clément Tosi
` (9 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:23 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Andrew Scull
When the hypervisor receives an SError or synchronous exception (EL2h)
while running with the __kvm_hyp_vector and if ELR_EL2 doesn't point to
an extable entry, it panics indirectly by overwriting ELR with the
address of a panic handler so that the asm routine it returns to can
ERET into the handler. This is done (instead of a simple function call)
to ensure that the panic handler runs with the SPSel that was in use
when the exception was triggered, necessary to support features such as
the shadow call stack.
However, this clobbers ELR_EL2 for the handler itself. As a result,
hyp_panic(), when retrieving what it believes to be the PC where the
exception happened, actually ends up reading the address of the panic
handler that called it! This results in an erroneous and confusing panic
message where the source of any synchronous exception (e.g. BUG() or
kCFI) appears to be __guest_exit_panic, making it hard to locate the
actual BRK instruction.
Therefore, store the original ELR_EL2 in a per-CPU struct and point the
sysreg to a routine that first restores it to its previous value before
running __guest_exit_panic.
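As an illustration of that flow, here is a minimal userspace sketch
(hypothetical names and addresses, not the actual hyp code) of the "find an
extable fixup or divert to a restoring panic trampoline" logic:

#include <stdint.h>
#include <stdio.h>

struct extable_entry { uintptr_t insn, fixup; };

static struct extable_entry table[] = { { 0x1000, 0x2000 } };
static uintptr_t saved_elr;	/* models the ELR_EL2 saved in kvm_hyp_ctxt */

/* Returns the address the exception handler should ERET to. */
static uintptr_t fixup_or_panic(uintptr_t elr, uintptr_t panic_trampoline)
{
	for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		if (table[i].insn == elr)
			return table[i].fixup;

	saved_elr = elr;		/* keep the real fault PC for hyp_panic() */
	return panic_trampoline;	/* the trampoline restores ELR, then panics */
}

int main(void)
{
	printf("fixup: %#lx\n", (unsigned long)fixup_or_panic(0x1000, 0x9000));
	printf("panic trampoline: %#lx (saved ELR: %#lx)\n",
	       (unsigned long)fixup_or_panic(0x1234, 0x9000),
	       (unsigned long)saved_elr);
	return 0;
}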
Fixes: 7db21530479f ("KVM: arm64: Restore hyp when panicking in guest context")
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kvm/hyp/entry.S | 9 +++++++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 6 ++++--
3 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 5a7dbbe0ce63..e62353168a57 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -128,6 +128,7 @@ int main(void)
DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs));
+ DEFINE(CPU_ELR_EL2, offsetof(struct kvm_cpu_context, sys_regs[ELR_EL2]));
DEFINE(CPU_RGSR_EL1, offsetof(struct kvm_cpu_context, sys_regs[RGSR_EL1]));
DEFINE(CPU_GCR_EL1, offsetof(struct kvm_cpu_context, sys_regs[GCR_EL1]));
DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index f3aa7738b477..9cdf46da3051 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,6 +83,15 @@ alternative_else_nop_endif
eret
sb
+SYM_INNER_LABEL(__guest_exit_panic_with_restored_elr, SYM_L_GLOBAL)
+ // x0-x29,lr: hyp regs
+
+ stp x0, x1, [sp, #-16]!
+ adr_this_cpu x0, kvm_hyp_ctxt, x1
+ ldr x0, [x0, #CPU_ELR_EL2]
+ msr elr_el2, x0
+ ldp x0, x1, [sp], #16
+
SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
// x2-x29,lr: vcpu regs
// vcpu x0-x1 on the stack
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a038320cdb08..6a8dc8d3c193 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -747,7 +747,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __kvm_unexpected_el2_exception(void)
{
- extern char __guest_exit_panic[];
+ extern char __guest_exit_panic_with_restored_elr[];
unsigned long addr, fixup;
struct kvm_exception_table_entry *entry, *end;
unsigned long elr_el2 = read_sysreg(elr_el2);
@@ -769,7 +769,9 @@ static inline void __kvm_unexpected_el2_exception(void)
}
/* Trigger a panic after restoring the hyp context. */
- write_sysreg(__guest_exit_panic, elr_el2);
+ write_sysreg(__guest_exit_panic_with_restored_elr, elr_el2);
+
+ this_cpu_ptr(&kvm_hyp_ctxt)->sys_regs[ELR_EL2] = elr_el2;
}
#endif /* __ARM64_KVM_HYP_SWITCH_H__ */
--
Pierre
* Re: [PATCH 01/10] KVM: arm64: Fix clobbered ELR in sync abort
2024-03-14 20:23 ` [PATCH 01/10] KVM: arm64: Fix clobbered ELR in sync abort Pierre-Clément Tosi
@ 2024-03-17 12:26 ` Marc Zyngier
0 siblings, 0 replies; 19+ messages in thread
From: Marc Zyngier @ 2024-03-17 12:26 UTC (permalink / raw)
To: Pierre-Clément Tosi
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Andrew Scull
On Thu, 14 Mar 2024 20:23:22 +0000,
Pierre-Clément Tosi <ptosi@google.com> wrote:
>
> When the hypervisor receives a SError or synchronous exception (EL2h)
nit: since this also affects SError, $subject seems wrong.
> while running with the __kvm_hyp_vector and if ELR_EL2 doesn't point to
> an extable entry, it panics indirectly by overwriting ELR with the
> address of a panic handler in order for the asm routine it returns to to
> ERET into the handler. This is done (instead of a simple function call)
> to ensure that the panic handler runs with the SPSel that was in use
> when the exception was triggered, necessary to support features such as
> the shadow call stack.
I don't understand this. Who changes SPsel? At any given point, SP_EL0
is that of either the host or the guest, but never the hypervisor's.
If something was touching SP_EL0 behind our back, it would result in
corruption (see 6e977984f6d8 for a bit of rationale about it).
>
> However, this clobbers ELR_EL2 for the handler itself. As a result,
> hyp_panic(), when retrieving what it believes to be the PC where the
> exception happened, actually ends up reading the address of the panic
> handler that called it! This results in an erroneous and confusing panic
> message where the source of any synchronous exception (e.g. BUG() or
> kCFI) appears to be __guest_exit_panic, making it hard to locate the
> actual BRK instruction.
>
> Therefore, store the original ELR_EL2 in a per-CPU struct and point the
Can you elaborate on *which* per-CPU structure you are using?
> sysreg to a routine that first restores it to its previous value before
> running __guest_exit_panic.
>
> Fixes: 7db21530479f ("KVM: arm64: Restore hyp when panicking in guest context")
> Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
> ---
> arch/arm64/kernel/asm-offsets.c | 1 +
> arch/arm64/kvm/hyp/entry.S | 9 +++++++++
> arch/arm64/kvm/hyp/include/hyp/switch.h | 6 ++++--
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
> index 5a7dbbe0ce63..e62353168a57 100644
> --- a/arch/arm64/kernel/asm-offsets.c
> +++ b/arch/arm64/kernel/asm-offsets.c
> @@ -128,6 +128,7 @@ int main(void)
> DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
> DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
> DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs));
> + DEFINE(CPU_ELR_EL2, offsetof(struct kvm_cpu_context, sys_regs[ELR_EL2]));
> DEFINE(CPU_RGSR_EL1, offsetof(struct kvm_cpu_context, sys_regs[RGSR_EL1]));
> DEFINE(CPU_GCR_EL1, offsetof(struct kvm_cpu_context, sys_regs[GCR_EL1]));
> DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));
> diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
> index f3aa7738b477..9cdf46da3051 100644
> --- a/arch/arm64/kvm/hyp/entry.S
> +++ b/arch/arm64/kvm/hyp/entry.S
> @@ -83,6 +83,15 @@ alternative_else_nop_endif
> eret
> sb
>
> +SYM_INNER_LABEL(__guest_exit_panic_with_restored_elr, SYM_L_GLOBAL)
The name of the function isn't great. Crucially, 'with_restored_elr'
implies that ELR is already restored, while it is the hunk below that
does the 'restore' part.
Something like '__guest_exit_restore_elr_and_panic' seems more
appropriate.
> + // x0-x29,lr: hyp regs
> +
> + stp x0, x1, [sp, #-16]!
> + adr_this_cpu x0, kvm_hyp_ctxt, x1
> + ldr x0, [x0, #CPU_ELR_EL2]
> + msr elr_el2, x0
> + ldp x0, x1, [sp], #16
> +
> SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
> // x2-x29,lr: vcpu regs
> // vcpu x0-x1 on the stack
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index a038320cdb08..6a8dc8d3c193 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -747,7 +747,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
>
> static inline void __kvm_unexpected_el2_exception(void)
> {
> - extern char __guest_exit_panic[];
> + extern char __guest_exit_panic_with_restored_elr[];
> unsigned long addr, fixup;
> struct kvm_exception_table_entry *entry, *end;
> unsigned long elr_el2 = read_sysreg(elr_el2);
> @@ -769,7 +769,9 @@ static inline void __kvm_unexpected_el2_exception(void)
> }
>
> /* Trigger a panic after restoring the hyp context. */
> - write_sysreg(__guest_exit_panic, elr_el2);
> + write_sysreg(__guest_exit_panic_with_restored_elr, elr_el2);
> +
> + this_cpu_ptr(&kvm_hyp_ctxt)->sys_regs[ELR_EL2] = elr_el2;
nit: placing the saving of ELR_EL2 before overriding the sysreg makes
the whole thing a bit more readable, especially given the poor choice
of 'elr_el2' as a variable (visually clashing with 'elr_el2' as a
system register).
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* [PATCH 02/10] KVM: arm64: Fix __pkvm_init_switch_pgd C signature
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
2024-03-14 20:23 ` [PATCH 01/10] KVM: arm64: Fix clobbered ELR in sync abort Pierre-Clément Tosi
@ 2024-03-14 20:23 ` Pierre-Clément Tosi
2024-03-14 20:23 ` [PATCH 03/10] KVM: arm64: Pass pointer to __pkvm_init_switch_pgd Pierre-Clément Tosi
` (8 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:23 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Will Deacon, Quentin Perret
Update the function declaration to match the asm implementation.
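The hunk below also switches the local function pointer to a typeof()-derived
type. A standalone sketch of that idiom (hypothetical stand-in names for
__pkvm_init_switch_pgd and __pkvm_init_finalise, for illustration only):

#include <stdio.h>

void switch_pgd(unsigned long params_pa, void (*finalize_fn)(void));

static void finalize(void)
{
	puts("finalized");
}

void switch_pgd(unsigned long params_pa, void (*finalize_fn)(void))
{
	(void)params_pa;
	finalize_fn();
}

int main(void)
{
	/*
	 * typeof() derives the pointer type from the declaration, so any later
	 * change to the signature is caught by the compiler at this call site.
	 */
	typeof(switch_pgd) *fn = switch_pgd;

	fn(0, finalize);
	return 0;
}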
Fixes: f320bc742bc2 ("KVM: arm64: Prepare the creation of s1 mappings at EL2")
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/include/asm/kvm_hyp.h | 3 +--
arch/arm64/kvm/hyp/nvhe/setup.c | 2 +-
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 145ce73fc16c..7a0082496d4a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -123,8 +123,7 @@ void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
#endif
#ifdef __KVM_NVHE_HYPERVISOR__
-void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
- phys_addr_t pgd, void *sp, void *cont_fn);
+void __pkvm_init_switch_pgd(phys_addr_t params, void (*finalize_fn)(void));
int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
unsigned long *per_cpu_base, u32 hyp_va_bits);
void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index bc58d1b515af..bcaeb0fafd2d 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -316,7 +316,7 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
{
struct kvm_nvhe_init_params *params;
void *virt = hyp_phys_to_virt(phys);
- void (*fn)(phys_addr_t params_pa, void *finalize_fn_va);
+ typeof(__pkvm_init_switch_pgd) *fn;
int ret;
BUG_ON(kvm_check_pvm_sysreg_table());
--
Pierre
* [PATCH 03/10] KVM: arm64: Pass pointer to __pkvm_init_switch_pgd
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
2024-03-14 20:23 ` [PATCH 01/10] KVM: arm64: Fix clobbered ELR in sync abort Pierre-Clément Tosi
2024-03-14 20:23 ` [PATCH 02/10] KVM: arm64: Fix __pkvm_init_switch_pgd C signature Pierre-Clément Tosi
@ 2024-03-14 20:23 ` Pierre-Clément Tosi
2024-03-14 20:24 ` [PATCH 04/10] KVM: arm64: nVHE: Simplify __guest_exit_panic path Pierre-Clément Tosi
` (7 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:23 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Will Deacon, Quentin Perret
Make the function take a VA pointer, instead of a phys_addr_t, to fully
take advantage of the high-level C language and its type checker.
Perform all accesses to the kvm_nvhe_init_params before disabling the
MMU, removing the need to access it using physical addresses, which was
the reason for taking a phys_addr_t.
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/include/asm/kvm_hyp.h | 3 ++-
arch/arm64/kvm/hyp/nvhe/hyp-init.S | 8 +++++---
arch/arm64/kvm/hyp/nvhe/setup.c | 4 +---
3 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 7a0082496d4a..32fb866d1229 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -123,7 +123,8 @@ void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
#endif
#ifdef __KVM_NVHE_HYPERVISOR__
-void __pkvm_init_switch_pgd(phys_addr_t params, void (*finalize_fn)(void));
+void __pkvm_init_switch_pgd(struct kvm_nvhe_init_params *params,
+ void (*finalize_fn)(void));
int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
unsigned long *per_cpu_base, u32 hyp_va_bits);
void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 2994878d68ea..8958dd761837 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -266,6 +266,10 @@ alternative_else_nop_endif
SYM_CODE_END(__kvm_handle_stub_hvc)
SYM_FUNC_START(__pkvm_init_switch_pgd)
+ /* Load the inputs from the VA pointer before turning the MMU off */
+ ldr x5, [x0, #NVHE_INIT_PGD_PA]
+ ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
+
/* Turn the MMU off */
pre_disable_mmu_workaround
mrs x2, sctlr_el2
@@ -276,15 +280,13 @@ SYM_FUNC_START(__pkvm_init_switch_pgd)
tlbi alle2
/* Install the new pgtables */
- ldr x3, [x0, #NVHE_INIT_PGD_PA]
- phys_to_ttbr x4, x3
+ phys_to_ttbr x4, x5
alternative_if ARM64_HAS_CNP
orr x4, x4, #TTBR_CNP_BIT
alternative_else_nop_endif
msr ttbr0_el2, x4
/* Set the new stack pointer */
- ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
mov sp, x0
/* And turn the MMU back on! */
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index bcaeb0fafd2d..45b83f3ed012 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -314,7 +314,6 @@ void __noreturn __pkvm_init_finalise(void)
int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
unsigned long *per_cpu_base, u32 hyp_va_bits)
{
- struct kvm_nvhe_init_params *params;
void *virt = hyp_phys_to_virt(phys);
typeof(__pkvm_init_switch_pgd) *fn;
int ret;
@@ -338,9 +337,8 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
update_nvhe_init_params();
/* Jump in the idmap page to switch to the new page-tables */
- params = this_cpu_ptr(&kvm_init_params);
fn = (typeof(fn))__hyp_pa(__pkvm_init_switch_pgd);
- fn(__hyp_pa(params), __pkvm_init_finalise);
+ fn(this_cpu_ptr(&kvm_init_params), __pkvm_init_finalise);
unreachable();
}
--
Pierre
* [PATCH 04/10] KVM: arm64: nVHE: Simplify __guest_exit_panic path
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (2 preceding siblings ...)
2024-03-14 20:23 ` [PATCH 03/10] KVM: arm64: Pass pointer to __pkvm_init_switch_pgd Pierre-Clément Tosi
@ 2024-03-14 20:24 ` Pierre-Clément Tosi
2024-03-14 20:24 ` [PATCH 05/10] KVM: arm64: nVHE: Add EL2 sync exception handler Pierre-Clément Tosi
` (6 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:24 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Andrew Scull
Immediately jump to __guest_exit_panic when taking an invalid EL2
exception with the nVHE host vector table instead of first duplicating
the vCPU context check that __guest_exit_panic will also perform.
Fix the wrong (probably bitrotten) __guest_exit_panic ABI doc to reflect
how it is used by VHE and (now) nVHE, and rename the routine to
__hyp_panic to better reflect that it might not exit through the guest
but will always (directly or indirectly) end up executing hyp_panic().
Use CPU_LR_OFFSET to clarify that the routine returns to hyp_panic().
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/kvm/hyp/entry.S | 14 +++++++++-----
arch/arm64/kvm/hyp/hyp-entry.S | 2 +-
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++--
arch/arm64/kvm/hyp/nvhe/host.S | 8 +-------
4 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 9cdf46da3051..ac8aa8571b2f 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,7 +83,7 @@ alternative_else_nop_endif
eret
sb
-SYM_INNER_LABEL(__guest_exit_panic_with_restored_elr, SYM_L_GLOBAL)
+SYM_INNER_LABEL(__hyp_panic_with_restored_elr, SYM_L_GLOBAL)
// x0-x29,lr: hyp regs
stp x0, x1, [sp, #-16]!
@@ -92,13 +92,15 @@ SYM_INNER_LABEL(__guest_exit_panic_with_restored_elr, SYM_L_GLOBAL)
msr elr_el2, x0
ldp x0, x1, [sp], #16
-SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
- // x2-x29,lr: vcpu regs
- // vcpu x0-x1 on the stack
+SYM_INNER_LABEL(__hyp_panic, SYM_L_GLOBAL)
+ // x0-x29,lr: vcpu regs
+
+ stp x0, x1, [sp, #-16]!
// If the hyp context is loaded, go straight to hyp_panic
get_loaded_vcpu x0, x1
cbnz x0, 1f
+ ldp x0, x1, [sp], #16
b hyp_panic
1:
@@ -110,10 +112,12 @@ SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
// accurate if the guest had been completely restored.
adr_this_cpu x0, kvm_hyp_ctxt, x1
adr_l x1, hyp_panic
- str x1, [x0, #CPU_XREG_OFFSET(30)]
+ str x1, [x0, #CPU_LR_OFFSET]
get_vcpu_ptr x1, x0
+ // Keep x0-x1 on the stack for __guest_exit
+
SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
// x0: return code
// x1: vcpu
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 03f97d71984c..7e65ef738ec9 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -122,7 +122,7 @@ el2_error:
eret
sb
-.macro invalid_vector label, target = __guest_exit_panic
+.macro invalid_vector label, target = __hyp_panic
.align 2
SYM_CODE_START_LOCAL(\label)
b \target
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 6a8dc8d3c193..0dc721ced358 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -747,7 +747,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __kvm_unexpected_el2_exception(void)
{
- extern char __guest_exit_panic_with_restored_elr[];
+ extern char __hyp_panic_with_restored_elr[];
unsigned long addr, fixup;
struct kvm_exception_table_entry *entry, *end;
unsigned long elr_el2 = read_sysreg(elr_el2);
@@ -769,7 +769,7 @@ static inline void __kvm_unexpected_el2_exception(void)
}
/* Trigger a panic after restoring the hyp context. */
- write_sysreg(__guest_exit_panic_with_restored_elr, elr_el2);
+ write_sysreg(__hyp_panic_with_restored_elr, elr_el2);
this_cpu_ptr(&kvm_hyp_ctxt)->sys_regs[ELR_EL2] = elr_el2;
}
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 7693a6757cd7..27c989c4976d 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -196,19 +196,13 @@ SYM_FUNC_END(__host_hvc)
tbz x0, #PAGE_SHIFT, .L__hyp_sp_overflow\@
sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0
sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp
-
/* If a guest is loaded, panic out of it. */
- stp x0, x1, [sp, #-16]!
- get_loaded_vcpu x0, x1
- cbnz x0, __guest_exit_panic
- add sp, sp, #16
-
/*
* The panic may not be clean if the exception is taken before the host
* context has been saved by __host_exit or after the hyp context has
* been partially clobbered by __host_enter.
*/
- b hyp_panic
+ b __hyp_panic
.L__hyp_sp_overflow\@:
/* Switch to the overflow stack */
--
Pierre
* [PATCH 05/10] KVM: arm64: nVHE: Add EL2 sync exception handler
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (3 preceding siblings ...)
2024-03-14 20:24 ` [PATCH 04/10] KVM: arm64: nVHE: Simplify __guest_exit_panic path Pierre-Clément Tosi
@ 2024-03-14 20:24 ` Pierre-Clément Tosi
2024-03-17 11:42 ` Marc Zyngier
2024-03-14 20:24 ` [PATCH 06/10] KVM: arm64: nVHE: gen-hyprel: Skip R_AARCH64_ABS32 Pierre-Clément Tosi
` (5 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:24 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier
Introduce handlers for EL2{t,h} synchronous exceptions distinct from
handlers for other "invalid" exceptions when running with the nVHE host
vector. This will allow a future patch to handle CFI (synchronous)
errors without affecting other classes of exceptions.
Remove superfluous SP overflow check from the non-synchronous handlers.
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/kvm/hyp/nvhe/host.S | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 27c989c4976d..1b9111c2b480 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -183,7 +183,7 @@ SYM_FUNC_END(__host_hvc)
.endif
.endm
-.macro invalid_host_el2_vect
+.macro host_el2_sync_vect
.align 7
/*
@@ -221,6 +221,11 @@ SYM_FUNC_END(__host_hvc)
b __hyp_do_panic
.endm
+.macro invalid_host_el2_vect
+ .align 7
+ b __hyp_panic
+.endm
+
/*
* The host vector does not use an ESB instruction in order to avoid consuming
* SErrors that should only be consumed by the host. Guest entry is deferred by
@@ -233,12 +238,12 @@ SYM_FUNC_END(__host_hvc)
*/
.align 11
SYM_CODE_START(__kvm_hyp_host_vector)
- invalid_host_el2_vect // Synchronous EL2t
+ host_el2_sync_vect // Synchronous EL2t
invalid_host_el2_vect // IRQ EL2t
invalid_host_el2_vect // FIQ EL2t
invalid_host_el2_vect // Error EL2t
- invalid_host_el2_vect // Synchronous EL2h
+ host_el2_sync_vect // Synchronous EL2h
invalid_host_el2_vect // IRQ EL2h
invalid_host_el2_vect // FIQ EL2h
invalid_host_el2_vect // Error EL2h
--
Pierre
* Re: [PATCH 05/10] KVM: arm64: nVHE: Add EL2 sync exception handler
2024-03-14 20:24 ` [PATCH 05/10] KVM: arm64: nVHE: Add EL2 sync exception handler Pierre-Clément Tosi
@ 2024-03-17 11:42 ` Marc Zyngier
2024-04-10 14:44 ` Pierre-Clément Tosi
0 siblings, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2024-03-17 11:42 UTC (permalink / raw)
To: Pierre-Clément Tosi
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Thu, 14 Mar 2024 20:24:31 +0000,
Pierre-Clément Tosi <ptosi@google.com> wrote:
>
> Introduce handlers for EL2{t,h} synchronous exceptions distinct from
> handlers for other "invalid" exceptions when running with the nVHE host
> vector. This will allow a future patch to handle CFI (synchronous)
> errors without affecting other classes of exceptions.
>
> Remove superfluous SP overflow check from the non-synchronous
> handlers.
Why are they superfluous? Because we are panic'ing? Detecting a stack
overflow is pretty valuable in any circumstances.
>
> Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
> ---
> arch/arm64/kvm/hyp/nvhe/host.S | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index 27c989c4976d..1b9111c2b480 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -183,7 +183,7 @@ SYM_FUNC_END(__host_hvc)
> .endif
> .endm
>
> -.macro invalid_host_el2_vect
> +.macro host_el2_sync_vect
> .align 7
>
> /*
> @@ -221,6 +221,11 @@ SYM_FUNC_END(__host_hvc)
> b __hyp_do_panic
> .endm
>
> +.macro invalid_host_el2_vect
> + .align 7
> + b __hyp_panic
> +.endm
> +
> /*
> * The host vector does not use an ESB instruction in order to avoid consuming
> * SErrors that should only be consumed by the host. Guest entry is deferred by
> @@ -233,12 +238,12 @@ SYM_FUNC_END(__host_hvc)
> */
> .align 11
> SYM_CODE_START(__kvm_hyp_host_vector)
> - invalid_host_el2_vect // Synchronous EL2t
> + host_el2_sync_vect // Synchronous EL2t
The real question is: under which circumstances would running with
SP_EL0 be valid? I cannot see good reason for it.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 05/10] KVM: arm64: nVHE: Add EL2 sync exception handler
2024-03-17 11:42 ` Marc Zyngier
@ 2024-04-10 14:44 ` Pierre-Clément Tosi
0 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-04-10 14:44 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
Hi Marc,
On Sun, Mar 17, 2024 at 11:42:44AM +0000, Marc Zyngier wrote:
> On Thu, 14 Mar 2024 20:24:31 +0000,
> Pierre-Clément Tosi <ptosi@google.com> wrote:
> >
> > Remove superfluous SP overflow check from the non-synchronous
> > handlers.
>
> Why are they superfluous? Because we are panic'ing? Detecting a stack
> overflow is pretty valuable in any circumstances.
I've reverted to keeping these in v2.
However, the rationale was based on the assumption that the stack overflows into
an invalid mapping so that accessing it post-overflow triggers a page fault. If
that is correct, can't handlers of non-synchronous exceptions just blindly
access SP and rely on the synchronous exception handler to catch any overflow
(and somehow handle it or panic, this isn't really relevant)?
In particular, note that passing those checks doesn't guarantee that the SP
won't actually overflow while the handler is running (as most push to the
stack). In that case, they'll end up in the synchronous handler anyway, right?
So, given that the checks seem (to me) to happen at completely arbitrary points
in time (due to the nature of exceptions), it is therefore not clear how they
make the code more robust than not having them?
But I'm probably missing something?
--
Pierre
* [PATCH 06/10] KVM: arm64: nVHE: gen-hyprel: Skip R_AARCH64_ABS32
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (4 preceding siblings ...)
2024-03-14 20:24 ` [PATCH 05/10] KVM: arm64: nVHE: Add EL2 sync exception handler Pierre-Clément Tosi
@ 2024-03-14 20:24 ` Pierre-Clément Tosi
2024-03-14 20:25 ` [PATCH 07/10] KVM: arm64: VHE: Mark __hyp_call_panic __noreturn Pierre-Clément Tosi
` (4 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:24 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, David Brazdil
Ignore R_AARCH64_ABS32 relocations, instead of panicking, when emitting
the relocation table of the hypervisor. The toolchain might produce them
when generating function calls with KCFI, to allow type ID resolution
across compilation units (between the call-site check and the callee's
prefixed u32) at link time. They are therefore not needed in the final
(runtime) relocation table.
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/kvm/hyp/nvhe/gen-hyprel.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c b/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
index 6bc88a756cb7..7b046d97b301 100644
--- a/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
+++ b/arch/arm64/kvm/hyp/nvhe/gen-hyprel.c
@@ -50,6 +50,9 @@
#ifndef R_AARCH64_ABS64
#define R_AARCH64_ABS64 257
#endif
+#ifndef R_AARCH64_ABS32
+#define R_AARCH64_ABS32 258
+#endif
#ifndef R_AARCH64_PREL64
#define R_AARCH64_PREL64 260
#endif
@@ -383,6 +386,9 @@ static void emit_rela_section(Elf64_Shdr *sh_rela)
case R_AARCH64_ABS64:
emit_rela_abs64(rela, sh_orig_name);
break;
+ /* Allow 32-bit absolute relocation, for KCFI type hashes. */
+ case R_AARCH64_ABS32:
+ break;
/* Allow position-relative data relocations. */
case R_AARCH64_PREL64:
case R_AARCH64_PREL32:
--
Pierre
* [PATCH 07/10] KVM: arm64: VHE: Mark __hyp_call_panic __noreturn
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (5 preceding siblings ...)
2024-03-14 20:24 ` [PATCH 06/10] KVM: arm64: nVHE: gen-hyprel: Skip R_AARCH64_ABS32 Pierre-Clément Tosi
@ 2024-03-14 20:25 ` Pierre-Clément Tosi
2024-03-14 20:25 ` [PATCH 08/10] arm64: Move esr_comment() to <asm/esr.h> Pierre-Clément Tosi
` (3 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:25 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier
Given that the sole purpose of __hyp_call_panic() is to call panic(), a
__noreturn function, give it the __noreturn attribute, removing the need
for its caller to use unreachable().
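For illustration, a trivial standalone example of that effect, using the
GCC/Clang attribute directly (hypothetical names, not the hyp code):

#include <stdio.h>
#include <stdlib.h>

/* A noreturn callee: callers no longer need an unreachable() hint after it. */
static __attribute__((noreturn)) void call_panic(const char *why)
{
	fprintf(stderr, "panic: %s\n", why);
	exit(1);
}

static int caller(int bad)
{
	if (bad)
		call_panic("bad state");	/* no unreachable() needed here */
	return 0;
}

int main(void)
{
	return caller(0);
}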
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/kvm/hyp/vhe/switch.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..9db04a286398 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -301,7 +301,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
return ret;
}
-static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
+static void __noreturn __hyp_call_panic(u64 spsr, u64 elr, u64 par)
{
struct kvm_cpu_context *host_ctxt;
struct kvm_vcpu *vcpu;
@@ -326,7 +326,6 @@ void __noreturn hyp_panic(void)
u64 par = read_sysreg_par();
__hyp_call_panic(spsr, elr, par);
- unreachable();
}
asmlinkage void kvm_unexpected_el2_exception(void)
--
Pierre
* [PATCH 08/10] arm64: Move esr_comment() to <asm/esr.h>
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (6 preceding siblings ...)
2024-03-14 20:25 ` [PATCH 07/10] KVM: arm64: VHE: Mark __hyp_call_panic __noreturn Pierre-Clément Tosi
@ 2024-03-14 20:25 ` Pierre-Clément Tosi
2024-03-17 12:50 ` Marc Zyngier
2024-03-14 20:25 ` [PATCH 09/10] KVM: arm64: nVHE: Support CONFIG_CFI_CLANG at EL2 Pierre-Clément Tosi
` (2 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:25 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Catalin Marinas, Will Deacon
As it is already defined twice and is about to be needed for CFI error
detection, move esr_comment() to a header so that it can be reused.
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/include/asm/esr.h | 5 +++++
arch/arm64/kernel/debug-monitors.c | 4 +---
arch/arm64/kernel/traps.c | 2 --
arch/arm64/kvm/handle_exit.c | 2 +-
4 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 353fe08546cf..b0c23e7d6595 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -385,6 +385,11 @@
#ifndef __ASSEMBLY__
#include <asm/types.h>
+static inline unsigned long esr_comment(unsigned long esr)
+{
+ return esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
+}
+
static inline bool esr_is_data_abort(unsigned long esr)
{
const unsigned long ec = ESR_ELx_EC(esr);
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 64f2ecbdfe5c..647134ffa9b9 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -312,9 +312,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned long esr)
* entirely not preemptible, and we can use rcu list safely here.
*/
list_for_each_entry_rcu(hook, list, node) {
- unsigned long comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
-
- if ((comment & ~hook->mask) == hook->imm)
+ if ((esr_comment(esr) & ~hook->mask) == hook->imm)
fn = hook->fn;
}
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 215e6d7f2df8..56317ca48519 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -1105,8 +1105,6 @@ static struct break_hook ubsan_break_hook = {
};
#endif
-#define esr_comment(esr) ((esr) & ESR_ELx_BRK64_ISS_COMMENT_MASK)
-
/*
* Initial handler for AArch64 BRK exceptions
* This handler only used until debug_traps_init().
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 617ae6dea5d5..ffa67ac6656c 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -395,7 +395,7 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
if (mode != PSR_MODE_EL2t && mode != PSR_MODE_EL2h) {
kvm_err("Invalid host exception to nVHE hyp!\n");
} else if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
- (esr & ESR_ELx_BRK64_ISS_COMMENT_MASK) == BUG_BRK_IMM) {
+ esr_comment(esr) == BUG_BRK_IMM) {
const char *file = NULL;
unsigned int line = 0;
--
Pierre
* Re: [PATCH 08/10] arm64: Move esr_comment() to <asm/esr.h>
2024-03-14 20:25 ` [PATCH 08/10] arm64: Move esr_comment() to <asm/esr.h> Pierre-Clément Tosi
@ 2024-03-17 12:50 ` Marc Zyngier
0 siblings, 0 replies; 19+ messages in thread
From: Marc Zyngier @ 2024-03-17 12:50 UTC (permalink / raw)
To: Pierre-Clément Tosi
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon
On Thu, 14 Mar 2024 20:25:29 +0000,
Pierre-Clément Tosi <ptosi@google.com> wrote:
>
> As it is already defined twice and is about to be needed for CFI error
> detection, move esr_comment() to a header so that it can be reused.
>
> Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
> ---
> arch/arm64/include/asm/esr.h | 5 +++++
> arch/arm64/kernel/debug-monitors.c | 4 +---
> arch/arm64/kernel/traps.c | 2 --
> arch/arm64/kvm/handle_exit.c | 2 +-
> 4 files changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index 353fe08546cf..b0c23e7d6595 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -385,6 +385,11 @@
> #ifndef __ASSEMBLY__
> #include <asm/types.h>
>
> +static inline unsigned long esr_comment(unsigned long esr)
> +{
> + return esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
> +}
nit: naming it esr_brk_comment() would make it slightly clearer what
this helper applies to, now that it is visible out of the limited
context of traps.c.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* [PATCH 09/10] KVM: arm64: nVHE: Support CONFIG_CFI_CLANG at EL2
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (7 preceding siblings ...)
2024-03-14 20:25 ` [PATCH 08/10] arm64: Move esr_comment() to <asm/esr.h> Pierre-Clément Tosi
@ 2024-03-14 20:25 ` Pierre-Clément Tosi
2024-03-17 13:09 ` Marc Zyngier
2024-03-14 20:26 ` [PATCH 10/10] KVM: arm64: Improve CONFIG_CFI_CLANG error message Pierre-Clément Tosi
2024-03-14 22:40 ` [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Marc Zyngier
10 siblings, 1 reply; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:25 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Will Deacon, Quentin Perret, Vincent Donnefort
The compiler implements KCFI by adding type information (u32) above
every function that might be indirectly called and, whenever a function
pointer is called, injects a read-and-compare of that u32 against the
value corresponding to the expected type. In case of a mismatch, a BRK
instruction gets executed. When the hypervisor triggers such an
exception, it panics.
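For reference, a userspace model of that scheme (simplified layout, made-up
hash, hypothetical names; the real check is emitted by the compiler and reads
the u32 placed just before the callee's entry point):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for "a u32 type hash placed above the function". */
struct kcfi_func {
	uint32_t type_hash;
	void (*code)(void);
};

static void target(void)
{
	puts("indirect call allowed");
}

static struct kcfi_func func = { 0xdeadbeef, target };

/* What the compiler conceptually inserts at every indirect call site. */
static void checked_indirect_call(struct kcfi_func *f, uint32_t expected_hash)
{
	if (f->type_hash != expected_hash) {
		/* The real codegen executes BRK here; at EL2 that ends in a panic. */
		fprintf(stderr, "CFI failure\n");
		abort();
	}
	f->code();
}

int main(void)
{
	checked_indirect_call(&func, 0xdeadbeef);
	return 0;
}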
Therefore, teach hyp_panic() to detect KCFI errors from the ESR and
report them. If necessary, remind the user that CONFIG_CFI_PERMISSIVE
doesn't affect EL2 KCFI.
Pass $(CC_FLAGS_CFI) to the compiler when building the nVHE hyp code.
Use SYM_TYPED_FUNC_START() for __pkvm_init_switch_pgd, as nVHE can't
call it directly and must use a PA function pointer from C (because it
is part of the idmap page), which would trigger a KCFI failure if the
type ID wasn't present.
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/include/asm/esr.h | 6 ++++++
arch/arm64/kvm/handle_exit.c | 11 +++++++++++
arch/arm64/kvm/hyp/nvhe/Makefile | 6 +++---
arch/arm64/kvm/hyp/nvhe/hyp-init.S | 3 ++-
4 files changed, 22 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index b0c23e7d6595..281e352a4c94 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -397,6 +397,12 @@ static inline bool esr_is_data_abort(unsigned long esr)
return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
}
+static inline bool esr_is_cfi_brk(unsigned long esr)
+{
+ return ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
+ (esr_comment(esr) & ~CFI_BRK_IMM_MASK) == CFI_BRK_IMM_BASE;
+}
+
static inline bool esr_fsc_is_translation_fault(unsigned long esr)
{
return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index ffa67ac6656c..9b6574e50b13 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -383,6 +383,15 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
}
+static void kvm_nvhe_report_cfi_failure(u64 panic_addr)
+{
+ kvm_err("nVHE hyp CFI failure at: [<%016llx>] %pB!\n", panic_addr,
+ (void *)(panic_addr + kaslr_offset()));
+
+ if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
+ kvm_err(" (CONFIG_CFI_PERMISSIVE ignored for hyp failures)\n");
+}
+
void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
u64 elr_virt, u64 elr_phys,
u64 par, uintptr_t vcpu,
@@ -413,6 +422,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
else
kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
+ } else if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr)) {
+ kvm_nvhe_report_cfi_failure(panic_addr);
} else {
kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
index 2250253a6429..2eb915d8943f 100644
--- a/arch/arm64/kvm/hyp/nvhe/Makefile
+++ b/arch/arm64/kvm/hyp/nvhe/Makefile
@@ -89,9 +89,9 @@ quiet_cmd_hyprel = HYPREL $@
quiet_cmd_hypcopy = HYPCOPY $@
cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@
-# Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
-# This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
-KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
+# Remove ftrace and Shadow Call Stack CFLAGS.
+# This is equivalent to the 'notrace' and '__noscs' annotations.
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
# Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
# when profile optimization is applied. gen-hyprel does not support SHT_REL and
# causes a build failure. Remove profile optimization flags.
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
index 8958dd761837..ade73fdfaad9 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
@@ -5,6 +5,7 @@
*/
#include <linux/arm-smccc.h>
+#include <linux/cfi_types.h>
#include <linux/linkage.h>
#include <asm/alternative.h>
@@ -265,7 +266,7 @@ alternative_else_nop_endif
SYM_CODE_END(__kvm_handle_stub_hvc)
-SYM_FUNC_START(__pkvm_init_switch_pgd)
+SYM_TYPED_FUNC_START(__pkvm_init_switch_pgd)
/* Load the inputs from the VA pointer before turning the MMU off */
ldr x5, [x0, #NVHE_INIT_PGD_PA]
ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
--
Pierre
* Re: [PATCH 09/10] KVM: arm64: nVHE: Support CONFIG_CFI_CLANG at EL2
2024-03-14 20:25 ` [PATCH 09/10] KVM: arm64: nVHE: Support CONFIG_CFI_CLANG at EL2 Pierre-Clément Tosi
@ 2024-03-17 13:09 ` Marc Zyngier
2024-04-10 14:58 ` Pierre-Clément Tosi
0 siblings, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2024-03-17 13:09 UTC (permalink / raw)
To: Pierre-Clément Tosi
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Will Deacon, Quentin Perret,
Vincent Donnefort
On Thu, 14 Mar 2024 20:25:43 +0000,
Pierre-Clément Tosi <ptosi@google.com> wrote:
>
> The compiler implements KCFI by adding type information (u32) above
> every function that might be indirectly called and, whenever a function
> pointer is called, injects a read-and-compare of that u32 against the
> value corresponding to the expected type. In case of a mismatch, a BRK
> instruction gets executed. When the hypervisor triggers such an
> exception, it panics.
Importantly, this triggers an exception return to EL1. If you don't
explain that, then nobody can really understand how you end up in
nvhe_hyp_panic_handler() in the first place.
>
> Therefore, teach hyp_panic() to detect KCFI errors from the ESR and
nvhe_hyp_panic_handler() instead of hyp_panic()?
> report them. If necessary, remind the user that CONFIG_CFI_PERMISSIVE
> doesn't affect EL2 KCFI.
>
> Pass $(CC_FLAGS_CFI) to the compiler when building the nVHE hyp code.
>
> Use SYM_TYPED_FUNC_START() for __pkvm_init_switch_pgd, as nVHE can't
> call it directly and must use a PA function pointer from C (because it
> is part of the idmap page), which would trigger a KCFI failure if the
> type ID wasn't present.
>
> Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
> ---
> arch/arm64/include/asm/esr.h | 6 ++++++
> arch/arm64/kvm/handle_exit.c | 11 +++++++++++
> arch/arm64/kvm/hyp/nvhe/Makefile | 6 +++---
> arch/arm64/kvm/hyp/nvhe/hyp-init.S | 3 ++-
> 4 files changed, 22 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index b0c23e7d6595..281e352a4c94 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -397,6 +397,12 @@ static inline bool esr_is_data_abort(unsigned long esr)
> return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
> }
>
> +static inline bool esr_is_cfi_brk(unsigned long esr)
> +{
> + return ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
> + (esr_comment(esr) & ~CFI_BRK_IMM_MASK) == CFI_BRK_IMM_BASE;
> +}
> +
nit: since there is a single user, please place this helper in handle_exit.c.
> static inline bool esr_fsc_is_translation_fault(unsigned long esr)
> {
> return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index ffa67ac6656c..9b6574e50b13 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -383,6 +383,15 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
> kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
> }
>
> +static void kvm_nvhe_report_cfi_failure(u64 panic_addr)
> +{
> + kvm_err("nVHE hyp CFI failure at: [<%016llx>] %pB!\n", panic_addr,
> + (void *)(panic_addr + kaslr_offset()));
> +
> + if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
> + kvm_err(" (CONFIG_CFI_PERMISSIVE ignored for hyp failures)\n");
> +}
> +
> void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
> u64 elr_virt, u64 elr_phys,
> u64 par, uintptr_t vcpu,
> @@ -413,6 +422,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
> else
> kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
> (void *)(panic_addr + kaslr_offset()));
> + } else if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr)) {
It would seem logical to move the IS_ENABLED() into the ESR check helper.
> + kvm_nvhe_report_cfi_failure(panic_addr);
> } else {
> kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr,
> (void *)(panic_addr + kaslr_offset()));
> diff --git a/arch/arm64/kvm/hyp/nvhe/Makefile b/arch/arm64/kvm/hyp/nvhe/Makefile
> index 2250253a6429..2eb915d8943f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/Makefile
> +++ b/arch/arm64/kvm/hyp/nvhe/Makefile
> @@ -89,9 +89,9 @@ quiet_cmd_hyprel = HYPREL $@
> quiet_cmd_hypcopy = HYPCOPY $@
> cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@
>
> -# Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
> -# This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
> -KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
> +# Remove ftrace and Shadow Call Stack CFLAGS.
> +# This is equivalent to the 'notrace' and '__noscs' annotations.
> +KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
> # Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
> # when profile optimization is applied. gen-hyprel does not support SHT_REL and
> # causes a build failure. Remove profile optimization flags.
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> index 8958dd761837..ade73fdfaad9 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> @@ -5,6 +5,7 @@
> */
>
> #include <linux/arm-smccc.h>
> +#include <linux/cfi_types.h>
> #include <linux/linkage.h>
>
> #include <asm/alternative.h>
> @@ -265,7 +266,7 @@ alternative_else_nop_endif
>
> SYM_CODE_END(__kvm_handle_stub_hvc)
>
> -SYM_FUNC_START(__pkvm_init_switch_pgd)
> +SYM_TYPED_FUNC_START(__pkvm_init_switch_pgd)
Please put a comment indicating why SYM_TYPED_FUNC_START() is
necessary, because this will otherwise bitrot very quickly.
> /* Load the inputs from the VA pointer before turning the MMU off */
> ldr x5, [x0, #NVHE_INIT_PGD_PA]
> ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
>
Another question is how do we test that this still works down the
line? In my experience, these features eventually bitrot because they
have very little functional impact (just like the panic handler broke
the ELR_EL2 handling). We really should have a way to exercise such a
failure path.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 09/10] KVM: arm64: nVHE: Support CONFIG_CFI_CLANG at EL2
2024-03-17 13:09 ` Marc Zyngier
@ 2024-04-10 14:58 ` Pierre-Clément Tosi
0 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-04-10 14:58 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Will Deacon, Quentin Perret,
Vincent Donnefort
Hi Marc,
On Sun, Mar 17, 2024 at 01:09:01PM +0000, Marc Zyngier wrote:
> On Thu, 14 Mar 2024 20:25:43 +0000,
> Pierre-Clément Tosi <ptosi@google.com> wrote:
> >
> > diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> > index b0c23e7d6595..281e352a4c94 100644
> > --- a/arch/arm64/include/asm/esr.h
> > +++ b/arch/arm64/include/asm/esr.h
> > @@ -397,6 +397,12 @@ static inline bool esr_is_data_abort(unsigned long esr)
> > return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
> > }
> >
> > +static inline bool esr_is_cfi_brk(unsigned long esr)
> > +{
> > + return ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
> > + (esr_comment(esr) & ~CFI_BRK_IMM_MASK) == CFI_BRK_IMM_BASE;
> > +}
> > +
>
> nit: since there is a single user, please place this helper in handle_exit.c.
I've placed this here as I'm introducing a second user in a following patch of
this series (in the VHE code) and wanted to avoid adding code then immediately
moving it around.
I've therefore kept this part unchanged in v2 but let me know if you prefer the
commits to add-then-move and I'll update that for v3.
> > static inline bool esr_fsc_is_translation_fault(unsigned long esr)
> > {
> > return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_FAULT;
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index ffa67ac6656c..9b6574e50b13 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -383,6 +383,15 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
> > kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
> > }
> >
> > +static void kvm_nvhe_report_cfi_failure(u64 panic_addr)
> > +{
> > + kvm_err("nVHE hyp CFI failure at: [<%016llx>] %pB!\n", panic_addr,
> > + (void *)(panic_addr + kaslr_offset()));
> > +
> > + if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
> > + kvm_err(" (CONFIG_CFI_PERMISSIVE ignored for hyp failures)\n");
> > +}
> > +
> > void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
> > u64 elr_virt, u64 elr_phys,
> > u64 par, uintptr_t vcpu,
> > @@ -413,6 +422,8 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
> > else
> > kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
> > (void *)(panic_addr + kaslr_offset()));
> > + } else if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr)) {
>
> It would seem logical to move the IS_ENABLED() into the ESR check helper.
I suppose it makes sense for a static function but, given that I've kept the
helper in a shared header and as it essentially is a straightforward
shift-mask-compare (like the existing helpers in <asm/esr.h>), wouldn't it be
confusing for its result to depend on a Kconfig flag?
Anyway, same as above; left unchanged in v2 but happy to update this in v3.
--
Pierre
* [PATCH 10/10] KVM: arm64: Improve CONFIG_CFI_CLANG error message
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (8 preceding siblings ...)
2024-03-14 20:25 ` [PATCH 09/10] KVM: arm64: nVHE: Support CONFIG_CFI_CLANG at EL2 Pierre-Clément Tosi
@ 2024-03-14 20:26 ` Pierre-Clément Tosi
2024-03-14 22:40 ` [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Marc Zyngier
10 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-14 20:26 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Marc Zyngier, Vincent Donnefort
For KCFI, the compiler encodes in the immediate of the BRK (which the
CPU places in ESR_ELx) the indices of the two registers it used to hold
(resp.) the function pointer and expected type. Therefore, the CFI
handler must be able to parse the contents of the register file at the
point where the exception was triggered.
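For reference, a standalone sketch of that decoding (illustrative bit
positions, not the kernel's authoritative CFI_BRK_* definitions):

#include <stdint.h>
#include <stdio.h>

/*
 * Assumed layout: the BRK immediate (the ESR "comment") carries the
 * target-pointer register index in bits [4:0] and the expected-type register
 * index in bits [9:5].
 */
#define CFI_IMM_TARGET(imm)	((imm) & 0x1f)
#define CFI_IMM_TYPE(imm)	(((imm) >> 5) & 0x1f)

int main(void)
{
	uint64_t regs[31] = { 0 };
	uint16_t imm = (2 << 5) | 1;	/* type index in x2, target in x1 */

	regs[1] = 0xffff800008001000ULL;	/* function pointer that failed the check */
	regs[2] = 0x12345678;			/* expected type hash */

	printf("target: 0x%016llx, expected type: 0x%08x\n",
	       (unsigned long long)regs[CFI_IMM_TARGET(imm)],
	       (uint32_t)regs[CFI_IMM_TYPE(imm)]);
	return 0;
}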
To achieve this, introduce a new hypervisor panic path that first stores
the CPU context in the per-CPU kvm_hyp_ctxt before calling (directly or
indirectly) hyp_panic() and execute it from all EL2 synchronous
exception handlers i.e.
- call it directly in host_el2_sync_vect (__kvm_hyp_host_vector, EL2t&h)
- call it directly in el2t_sync_invalid (__kvm_hyp_vector, EL2t)
- set ELR_EL2 to it in el2_sync (__kvm_hyp_vector, EL2h), which ERETs
Teach hyp_panic() to decode the KCFI ESR and extract the target and type
from the saved CPU context. In VHE, use that information to panic() with
a specialized error message. In nVHE, only report it if the host (EL1)
has access to the saved CPU context i.e. iff CONFIG_NVHE_EL2_DEBUG=y,
which aligns with the behavior of CONFIG_PROTECTED_NVHE_STACKTRACE.
Signed-off-by: Pierre-Clément Tosi <ptosi@google.com>
---
arch/arm64/kvm/handle_exit.c | 30 +++++++++++++++++++++++--
arch/arm64/kvm/hyp/entry.S | 24 +++++++++++++++++++-
arch/arm64/kvm/hyp/hyp-entry.S | 2 +-
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 ++--
arch/arm64/kvm/hyp/nvhe/host.S | 2 +-
arch/arm64/kvm/hyp/vhe/switch.c | 26 +++++++++++++++++++--
6 files changed, 79 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 9b6574e50b13..d343a5130943 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -26,6 +26,8 @@
#define CREATE_TRACE_POINTS
#include "trace_handle_exit.h"
+DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+
typedef int (*exit_handle_fn)(struct kvm_vcpu *);
static void kvm_handle_guest_serror(struct kvm_vcpu *vcpu, u64 esr)
@@ -383,11 +385,35 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
}
-static void kvm_nvhe_report_cfi_failure(u64 panic_addr)
+static void kvm_nvhe_report_cfi_target(struct user_pt_regs *regs, u64 esr,
+ u64 hyp_offset)
+{
+ u64 va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+ u8 type_idx = FIELD_GET(CFI_BRK_IMM_TYPE, esr);
+ u8 target_idx = FIELD_GET(CFI_BRK_IMM_TARGET, esr);
+ u32 expected_type = (u32)regs->regs[type_idx];
+ u64 target_addr = (regs->regs[target_idx] & va_mask) + hyp_offset;
+
+ kvm_err(" (target: [<%016llx>] %ps, expected type: 0x%08x)\n",
+ target_addr, (void *)(target_addr + kaslr_offset()),
+ expected_type);
+}
+
+static void kvm_nvhe_report_cfi_failure(u64 panic_addr, u64 esr, u64 hyp_offset)
{
+ struct user_pt_regs *regs = NULL;
+
kvm_err("nVHE hyp CFI failure at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
+ if (IS_ENABLED(CONFIG_NVHE_EL2_DEBUG) || !is_protected_kvm_enabled())
+ regs = &this_cpu_ptr_nvhe_sym(kvm_hyp_ctxt)->regs;
+
+ if (regs)
+ kvm_nvhe_report_cfi_target(regs, esr, hyp_offset);
+ else
+ kvm_err(" (no target information: !CONFIG_NVHE_EL2_DEBUG)\n");
+
if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
kvm_err(" (CONFIG_CFI_PERMISSIVE ignored for hyp failures)\n");
}
@@ -423,7 +449,7 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
} else if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr)) {
- kvm_nvhe_report_cfi_failure(panic_addr);
+ kvm_nvhe_report_cfi_failure(panic_addr, esr, hyp_offset);
} else {
kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index ac8aa8571b2f..eb6699d2bb7a 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -83,7 +83,7 @@ alternative_else_nop_endif
eret
sb
-SYM_INNER_LABEL(__hyp_panic_with_restored_elr, SYM_L_GLOBAL)
+SYM_INNER_LABEL(__hyp_panic_with_context_and_restored_elr, SYM_L_GLOBAL)
// x0-x29,lr: hyp regs
stp x0, x1, [sp, #-16]!
@@ -92,6 +92,28 @@ SYM_INNER_LABEL(__hyp_panic_with_restored_elr, SYM_L_GLOBAL)
msr elr_el2, x0
ldp x0, x1, [sp], #16
+SYM_INNER_LABEL(__hyp_panic_with_context, SYM_L_GLOBAL)
+ // x0-x29,lr: hyp regs
+
+ stp x0, x1, [sp, #-16]!
+
+ adr_this_cpu x0, kvm_hyp_ctxt, x1
+
+ stp x2, x3, [x0, #CPU_XREG_OFFSET(2)]
+
+ ldp x2, x3, [sp], #16
+
+ stp x2, x3, [x0, #CPU_XREG_OFFSET(0)]
+ stp x4, x5, [x0, #CPU_XREG_OFFSET(4)]
+ stp x6, x7, [x0, #CPU_XREG_OFFSET(6)]
+ stp x8, x9, [x0, #CPU_XREG_OFFSET(8)]
+ stp x10, x11, [x0, #CPU_XREG_OFFSET(10)]
+ stp x12, x13, [x0, #CPU_XREG_OFFSET(12)]
+ stp x14, x15, [x0, #CPU_XREG_OFFSET(14)]
+ stp x16, x17, [x0, #CPU_XREG_OFFSET(16)]
+
+ save_callee_saved_regs x0
+
SYM_INNER_LABEL(__hyp_panic, SYM_L_GLOBAL)
// x0-x29,lr: vcpu regs
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 7e65ef738ec9..6eedab7f9767 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -130,7 +130,7 @@ SYM_CODE_END(\label)
.endm
/* None of these should ever happen */
- invalid_vector el2t_sync_invalid
+ invalid_vector el2t_sync_invalid, __hyp_panic_with_context
invalid_vector el2t_irq_invalid
invalid_vector el2t_fiq_invalid
invalid_vector el2t_error_invalid
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 0dc721ced358..6c4b3f9d538f 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -747,7 +747,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __kvm_unexpected_el2_exception(void)
{
- extern char __hyp_panic_with_restored_elr[];
+ extern char __hyp_panic_with_context_and_restored_elr[];
unsigned long addr, fixup;
struct kvm_exception_table_entry *entry, *end;
unsigned long elr_el2 = read_sysreg(elr_el2);
@@ -769,7 +769,7 @@ static inline void __kvm_unexpected_el2_exception(void)
}
/* Trigger a panic after restoring the hyp context. */
- write_sysreg(__hyp_panic_with_restored_elr, elr_el2);
+ write_sysreg(__hyp_panic_with_context_and_restored_elr, elr_el2);
this_cpu_ptr(&kvm_hyp_ctxt)->sys_regs[ELR_EL2] = elr_el2;
}
diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
index 1b9111c2b480..8bb6fed5ba4e 100644
--- a/arch/arm64/kvm/hyp/nvhe/host.S
+++ b/arch/arm64/kvm/hyp/nvhe/host.S
@@ -202,7 +202,7 @@ SYM_FUNC_END(__host_hvc)
* context has been saved by __host_exit or after the hyp context has
* been partially clobbered by __host_enter.
*/
- b __hyp_panic
+ b __hyp_panic_with_context
.L__hyp_sp_overflow\@:
/* Switch to the overflow stack */
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 9db04a286398..c733f5bdab59 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -17,6 +17,7 @@
#include <asm/barrier.h>
#include <asm/cpufeature.h>
+#include <asm/esr.h>
#include <asm/kprobes.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
@@ -301,7 +302,24 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
return ret;
}
-static void __noreturn __hyp_call_panic(u64 spsr, u64 elr, u64 par)
+static void __noreturn __hyp_call_panic_for_cfi(u64 elr, u64 esr)
+{
+ struct user_pt_regs *regs = &this_cpu_ptr(&kvm_hyp_ctxt)->regs;
+ u8 type_idx = FIELD_GET(CFI_BRK_IMM_TYPE, esr);
+ u8 target_idx = FIELD_GET(CFI_BRK_IMM_TARGET, esr);
+ u32 expected_type = (u32)regs->regs[type_idx];
+ u64 target = regs->regs[target_idx];
+
+ panic("VHE hyp CFI failure at: [<%016llx>] %pB (target: [<%016llx>] %ps, expected type: 0x%08x)\n"
+#ifdef CONFIG_CFI_PERMISSIVE
+ " (CONFIG_CFI_PERMISSIVE ignored for hyp failures)\n"
+#endif
+ ,
+ elr, (void *)elr, target, (void *)target, expected_type);
+}
+NOKPROBE_SYMBOL(__hyp_call_panic_for_cfi);
+
+static void __noreturn __hyp_call_panic(u64 spsr, u64 elr, u64 par, u64 esr)
{
struct kvm_cpu_context *host_ctxt;
struct kvm_vcpu *vcpu;
@@ -312,6 +330,9 @@ static void __noreturn __hyp_call_panic(u64 spsr, u64 elr, u64 par)
__deactivate_traps(vcpu);
sysreg_restore_host_state_vhe(host_ctxt);
+ if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr))
+ __hyp_call_panic_for_cfi(elr, esr);
+
panic("HYP panic:\nPS:%08llx PC:%016llx ESR:%08llx\nFAR:%016llx HPFAR:%016llx PAR:%016llx\nVCPU:%p\n",
spsr, elr,
read_sysreg_el2(SYS_ESR), read_sysreg_el2(SYS_FAR),
@@ -324,8 +345,9 @@ void __noreturn hyp_panic(void)
u64 spsr = read_sysreg_el2(SYS_SPSR);
u64 elr = read_sysreg_el2(SYS_ELR);
u64 par = read_sysreg_par();
+ u64 esr = read_sysreg_el2(SYS_ESR);
- __hyp_call_panic(spsr, elr, par);
+ __hyp_call_panic(spsr, elr, par, esr);
}
asmlinkage void kvm_unexpected_el2_exception(void)
--
Pierre
* Re: [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI
2024-03-14 20:23 [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Pierre-Clément Tosi
` (9 preceding siblings ...)
2024-03-14 20:26 ` [PATCH 10/10] KVM: arm64: Improve CONFIG_CFI_CLANG error message Pierre-Clément Tosi
@ 2024-03-14 22:40 ` Marc Zyngier
2024-03-15 10:22 ` Pierre-Clément Tosi
10 siblings, 1 reply; 19+ messages in thread
From: Marc Zyngier @ 2024-03-14 22:40 UTC (permalink / raw)
To: Pierre-Clément Tosi
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Sami Tolvanen, Mark Rutland
Hi Pierre-Clément,
On Thu, 14 Mar 2024 20:23:00 +0000,
Pierre-Clément Tosi <ptosi@google.com> wrote:
>
> CONFIG_CFI_CLANG ("kernel Control Flow Integrity") makes the compiler inject
> runtime type checks before any indirect function call. On AArch64, it generates
> a BRK instruction to be executed on type mismatch and encodes the indices of the
> registers holding the branch target and expected type in the immediate of the
> instruction. As a result, a synchronous exception gets triggered on kCFI failure
> and the fault handler can retrieve the immediate (and indices) from ESR_ELx.
>
> This feature has been supported at EL1 ("host") since it was introduced by
> b26e484b8bb3 ("arm64: Add CFI error handling"), where cfi_handler() decodes
> ESR_EL1, giving informative panic messages such as
>
> [ 21.885179] CFI failure at lkdtm_indirect_call+0x2c/0x44 [lkdtm]
> (target: lkdtm_increment_int+0x0/0x1c [lkdtm]; expected type: 0x7e0c52a)
> [ 21.886593] Internal error: Oops - CFI: 0 [#1] PREEMPT SMP
>
> However, it is not or only partially supported at EL2: in nVHE (or pKVM),
> CONFIG_CFI_CLANG gets filtered out at build time, preventing the compiler from
> injecting the checks. In VHE (or hVHE), EL2 code gets compiled with the checks
Are you sure about hVHE? hVHE is essentially the nVHE object running
with a slightly different HCR_EL2 configuration. So if you don't have
the checks in the nVHE code, you don't have them for hVHE either.
Or am I missing something obvious?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI
2024-03-14 22:40 ` [PATCH 00/10] KVM: arm64: Add support for hypervisor kCFI Marc Zyngier
@ 2024-03-15 10:22 ` Pierre-Clément Tosi
0 siblings, 0 replies; 19+ messages in thread
From: Pierre-Clément Tosi @ 2024-03-15 10:22 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Sami Tolvanen, Mark Rutland
Hi Marc,
On Thu, Mar 14, 2024 at 10:40:47PM +0000, Marc Zyngier wrote:
> Hi Pierre-Clément,
>
> On Thu, 14 Mar 2024 20:23:00 +0000,
> Pierre-Clément Tosi <ptosi@google.com> wrote:
> >
> > CONFIG_CFI_CLANG ("kernel Control Flow Integrity") makes the compiler inject
> > runtime type checks before any indirect function call. On AArch64, it generates
> > a BRK instruction to be executed on type mismatch and encodes the indices of the
> > registers holding the branch target and expected type in the immediate of the
> > instruction. As a result, a synchronous exception gets triggered on kCFI failure
> > and the fault handler can retrieve the immediate (and indices) from ESR_ELx.
> >
> > This feature has been supported at EL1 ("host") since it was introduced by
> > b26e484b8bb3 ("arm64: Add CFI error handling"), where cfi_handler() decodes
> > ESR_EL1, giving informative panic messages such as
> >
> > [ 21.885179] CFI failure at lkdtm_indirect_call+0x2c/0x44 [lkdtm]
> > (target: lkdtm_increment_int+0x0/0x1c [lkdtm]; expected type: 0x7e0c52a)
> > [ 21.886593] Internal error: Oops - CFI: 0 [#1] PREEMPT SMP
> >
> > However, it is not or only partially supported at EL2: in nVHE (or pKVM),
> > CONFIG_CFI_CLANG gets filtered out at build time, preventing the compiler from
> > injecting the checks. In VHE (or hVHE), EL2 code gets compiled with the checks
>
> Are you sure about hVHE? hVHE is essentially the nVHE object running
> with a slightly different HCR_EL2 configuration. So if you don't have
> the checks in the nVHE code, you don't have them for hVHE either.
No, I am not, and my assumption that hVHE was running the VHE hyp code was wrong.
FYI, these patches were tested in VHE, nVHE, and pKVM (with NVHE_EL2_DEBUG set
and unset) but not in hVHE (clearly!). Thanks for pointing this out.
>
> Or am I missing something obvious?
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.
--
Pierre