* [PATCH v4 1/3] KVM: arm64: Disable TRBE Trace Buffer Unit when running in guest context
2026-03-27 13:00 [PATCH v4 0/3] KVM: arm64: Fix SPE and TRBE nVHE world switch Will Deacon
@ 2026-03-27 13:00 ` Will Deacon
2026-03-27 13:00 ` [PATCH v4 2/3] KVM: arm64: Disable SPE Profiling Buffer " Will Deacon
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2026-03-27 13:00 UTC (permalink / raw)
To: kvmarm
Cc: mark.rutland, linux-arm-kernel, Will Deacon, Marc Zyngier,
Oliver Upton, James Clark, Leo Yan, Suzuki K Poulose, Fuad Tabba,
Alexandru Elisei, Yabin Cui
The nVHE world-switch code relies on zeroing TRFCR_EL1 to disable trace
generation in guest context when self-hosted TRBE is in use by the host.
Per D3.2.1 ("Controls to prohibit trace at Exception levels"), clearing
TRFCR_EL1 means that trace generation is prohibited at EL1 and EL0 but
per R_YCHKJ the Trace Buffer Unit will still be enabled if
TRBLIMITR_EL1.E is set. R_SJFRQ goes on to state that, when enabled, the
Trace Buffer Unit can perform address translation for the "owning
exception level" even when it is out of context.
Consequently, we can end up in a state where TRBE performs speculative
page-table walks for a host VA/IPA in guest/hypervisor context depending
on the value of MDCR_EL2.E2TB, which changes over world-switch. The
potential result appears to be a heady mixture of SErrors, data
corruption and hardware lockups.
Extend the TRBE world-switch code to clear TRBLIMITR_EL1.E after
draining the buffer, restoring the register on return to the host. This
unfortunately means we need to tackle CPU errata #2064142 and #2038923
which add additional synchronisation requirements around manipulations
of the limit register. Hopefully this doesn't need to be fast.
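[Not part of the patch: a small Python model of the ordering constraints described above, using hypothetical operation names rather than kernel code. It encodes the sequence implemented by __trace_drain_and_disable() in the diff below — TRFCR synchronisation, drain, disable, then the extra drain required by erratum #2064142 — and asserts the invariants the commit message relies on.]

```python
# Hypothetical model (not kernel code): record the sequence of
# operations the guest-entry path must perform and check the
# ordering constraints described above.

ops = []

def trace_drain_and_disable(trblimitr_enabled, workaround_2064142):
    """Model of the __trace_drain_and_disable() ordering."""
    if not trblimitr_enabled:
        return
    ops.append("isb")        # ensure the earlier TRFCR_EL1 write has taken effect
    ops.append("tsb_csync")  # synchronise with any trace still in flight
    ops.append("dsb_nsh")    # drain buffered trace data to memory
    ops.append("write_trblimitr_0")  # now safe to clear TRBLIMITR_EL1.E
    if workaround_2064142:
        ops.append("tsb_csync")      # affected CPUs need a second drain
        ops.append("dsb_nsh")
    ops.append("isb")        # unit disabled before stage-2/trap changes

trace_drain_and_disable(trblimitr_enabled=True, workaround_2064142=True)

# The Trace Buffer Unit must only be disabled once the drain is complete...
assert ops.index("write_trblimitr_0") > ops.index("dsb_nsh")
# ...and the final ISB must come last, before any MMU/trap reconfiguration.
assert ops[-1] == "isb"
```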
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oupton@kernel.org>
Cc: James Clark <james.clark@linaro.org>
Cc: Leo Yan <leo.yan@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Leo Yan <leo.yan@arm.com>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Fixes: a1319260bf62 ("arm64: KVM: Enable access to TRBE support for host")
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/kvm/hyp/nvhe/debug-sr.c | 71 ++++++++++++++++++++++++++----
arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
3 files changed, 64 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 70cb9cfd760a..b1335c55dbef 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -770,6 +770,7 @@ struct kvm_host_data {
u64 pmscr_el1;
/* Self-hosted trace */
u64 trfcr_el1;
+ u64 trblimitr_el1;
/* Values of trap registers for the host before guest entry. */
u64 mdcr_el2;
u64 brbcr_el1;
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 2a1c0f49792b..0955af771ad1 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -57,12 +57,54 @@ static void __trace_do_switch(u64 *saved_trfcr, u64 new_trfcr)
write_sysreg_el1(new_trfcr, SYS_TRFCR);
}
-static bool __trace_needs_drain(void)
+static void __trace_drain_and_disable(void)
{
- if (is_protected_kvm_enabled() && host_data_test_flag(HAS_TRBE))
- return read_sysreg_s(SYS_TRBLIMITR_EL1) & TRBLIMITR_EL1_E;
+ u64 *trblimitr_el1 = host_data_ptr(host_debug_state.trblimitr_el1);
+ bool needs_drain = is_protected_kvm_enabled() ?
+ host_data_test_flag(HAS_TRBE) :
+ host_data_test_flag(TRBE_ENABLED);
- return host_data_test_flag(TRBE_ENABLED);
+ if (!needs_drain) {
+ *trblimitr_el1 = 0;
+ return;
+ }
+
+ *trblimitr_el1 = read_sysreg_s(SYS_TRBLIMITR_EL1);
+ if (*trblimitr_el1 & TRBLIMITR_EL1_E) {
+ /*
+ * The host has enabled the Trace Buffer Unit so we have
+ * to beat the CPU with a stick until it stops accessing
+ * memory.
+ */
+
+ /* First, ensure that our prior write to TRFCR has stuck. */
+ isb();
+
+ /* Now synchronise with the trace and drain the buffer. */
+ tsb_csync();
+ dsb(nsh);
+
+ /*
+ * With no more trace being generated, we can disable the
+ * Trace Buffer Unit.
+ */
+ write_sysreg_s(0, SYS_TRBLIMITR_EL1);
+ if (cpus_have_final_cap(ARM64_WORKAROUND_2064142)) {
+ /*
+ * Some CPUs are so good, we have to drain 'em
+ * twice.
+ */
+ tsb_csync();
+ dsb(nsh);
+ }
+
+ /*
+ * Ensure that the Trace Buffer Unit is disabled before
+ * we start mucking with the stage-2 and trap
+ * configuration.
+ */
+ isb();
+ }
}
static bool __trace_needs_switch(void)
@@ -79,15 +121,26 @@ static void __trace_switch_to_guest(void)
__trace_do_switch(host_data_ptr(host_debug_state.trfcr_el1),
*host_data_ptr(trfcr_while_in_guest));
-
- if (__trace_needs_drain()) {
- isb();
- tsb_csync();
- }
+ __trace_drain_and_disable();
}
static void __trace_switch_to_host(void)
{
+ u64 trblimitr_el1 = *host_data_ptr(host_debug_state.trblimitr_el1);
+
+ if (trblimitr_el1 & TRBLIMITR_EL1_E) {
+ /* Re-enable the Trace Buffer Unit for the host. */
+ write_sysreg_s(trblimitr_el1, SYS_TRBLIMITR_EL1);
+ isb();
+ if (cpus_have_final_cap(ARM64_WORKAROUND_2038923)) {
+ /*
+ * Make sure the unit is re-enabled before we
+ * poke TRFCR.
+ */
+ isb();
+ }
+ }
+
__trace_do_switch(host_data_ptr(trfcr_while_in_guest),
*host_data_ptr(host_debug_state.trfcr_el1));
}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 779089e42681..f00688e69d88 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -278,7 +278,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
* We're about to restore some new MMU state. Make sure
* ongoing page-table walks that have started before we
* trapped to EL2 have completed. This also synchronises the
- * above disabling of BRBE, SPE and TRBE.
+ * above disabling of BRBE and SPE.
*
* See DDI0487I.a D8.1.5 "Out-of-context translation regimes",
* rule R_LFHQG and subsequent information statements.
--
2.53.0.1018.g2bb0e51243-goog
* [PATCH v4 2/3] KVM: arm64: Disable SPE Profiling Buffer when running in guest context
From: Will Deacon @ 2026-03-27 13:00 UTC (permalink / raw)
To: kvmarm
Cc: mark.rutland, linux-arm-kernel, Will Deacon, Marc Zyngier,
Oliver Upton, James Clark, Leo Yan, Suzuki K Poulose, Fuad Tabba,
Alexandru Elisei, Yabin Cui
The nVHE world-switch code relies on zeroing PMSCR_EL1 to disable
profiling data generation in guest context when SPE is in use by the
host.
Unfortunately, this may leave PMBLIMITR_EL1.E set and consequently we
can end up running in guest/hypervisor context with the Profiling Buffer
enabled. The current "known issues" document for Rev M.a of the Arm ARM
states that this can lead to speculative, out-of-context translations:
| 2.18 D23136:
|
| When the Profiling Buffer is enabled, profiling is not stopped, and
| Discard mode is not enabled, the Statistical Profiling Unit might
| cause speculative translations for the owning translation regime,
| including when the owning translation regime is out-of-context.
In a similar fashion to TRBE, ensure that the Profiling Buffer is
disabled during the nVHE world switch before we start messing with the
stage-2 MMU and trap configuration.
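[Not part of the patch: a small Python model (hypothetical names, no kernel code) of the SPE save/restore ordering described above, mirroring __debug_save_spe() and __debug_restore_spe() in the diff below, with assertions on the invariants that keep the Profiling Buffer disabled while out of context.]

```python
# Hypothetical model (not kernel code) of the SPE world-switch ordering.

ops = []

def save_spe(buffer_enabled):
    """Model of the __debug_save_spe() ordering after this patch."""
    if not buffer_enabled:
        return
    ops.append("save_and_zero_pmscr")  # stop profiling data generation
    ops.append("psb_csync")            # synchronise the profiling stream
    ops.append("dsb_nsh")              # drain buffered records to memory
    ops.append("zero_pmblimitr")       # disable the Profiling Buffer itself
    ops.append("isb")                  # ...before stage-2/trap changes

def restore_spe(buffer_was_enabled):
    """Model of the __debug_restore_spe() ordering after this patch."""
    if not buffer_was_enabled:
        return
    ops.append("isb")                  # host page tables installed
    ops.append("restore_pmblimitr")    # re-enable the Profiling Buffer
    ops.append("isb")
    ops.append("restore_pmscr")        # re-enable data generation

save_spe(buffer_enabled=True)
restore_spe(buffer_was_enabled=True)

# The buffer must only be disabled once draining has completed...
assert ops.index("zero_pmblimitr") > ops.index("dsb_nsh")
# ...and re-enabled before data generation restarts on the host.
assert ops.index("restore_pmblimitr") < ops.index("restore_pmscr")
```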
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oupton@kernel.org>
Cc: James Clark <james.clark@linaro.org>
Cc: Leo Yan <leo.yan@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Fuad Tabba <tabba@google.com>
Fixes: f85279b4bd48 ("arm64: KVM: Save/restore the host SPE state when entering/leaving a VM")
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/kvm/hyp/nvhe/debug-sr.c | 33 ++++++++++++++++++++----------
arch/arm64/kvm/hyp/nvhe/switch.c | 2 +-
3 files changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b1335c55dbef..fe588760fe62 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -768,6 +768,7 @@ struct kvm_host_data {
struct kvm_guest_debug_arch regs;
/* Statistical profiling extension */
u64 pmscr_el1;
+ u64 pmblimitr_el1;
/* Self-hosted trace */
u64 trfcr_el1;
u64 trblimitr_el1;
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 0955af771ad1..84bc80f4e36b 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -14,20 +14,20 @@
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
-static void __debug_save_spe(u64 *pmscr_el1)
+static void __debug_save_spe(void)
{
- u64 reg;
+ u64 *pmscr_el1, *pmblimitr_el1;
- /* Clear pmscr in case of early return */
- *pmscr_el1 = 0;
+ pmscr_el1 = host_data_ptr(host_debug_state.pmscr_el1);
+ pmblimitr_el1 = host_data_ptr(host_debug_state.pmblimitr_el1);
/*
* At this point, we know that this CPU implements
* SPE and is available to the host.
* Check if the host is actually using it ?
*/
- reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
- if (!(reg & BIT(PMBLIMITR_EL1_E_SHIFT)))
+ *pmblimitr_el1 = read_sysreg_s(SYS_PMBLIMITR_EL1);
+ if (!(*pmblimitr_el1 & BIT(PMBLIMITR_EL1_E_SHIFT)))
return;
/* Yes; save the control register and disable data generation */
@@ -37,18 +37,29 @@ static void __debug_save_spe(u64 *pmscr_el1)
/* Now drain all buffered data to memory */
psb_csync();
+ dsb(nsh);
+
+ /* And disable the profiling buffer */
+ write_sysreg_s(0, SYS_PMBLIMITR_EL1);
+ isb();
}
-static void __debug_restore_spe(u64 pmscr_el1)
+static void __debug_restore_spe(void)
{
- if (!pmscr_el1)
+ u64 pmblimitr_el1 = *host_data_ptr(host_debug_state.pmblimitr_el1);
+
+ if (!(pmblimitr_el1 & BIT(PMBLIMITR_EL1_E_SHIFT)))
return;
/* The host page table is installed, but not yet synchronised */
isb();
+ /* Re-enable the profiling buffer. */
+ write_sysreg_s(pmblimitr_el1, SYS_PMBLIMITR_EL1);
+ isb();
+
/* Re-enable data generation */
- write_sysreg_el1(pmscr_el1, SYS_PMSCR);
+ write_sysreg_el1(*host_data_ptr(host_debug_state.pmscr_el1), SYS_PMSCR);
}
static void __trace_do_switch(u64 *saved_trfcr, u64 new_trfcr)
@@ -175,7 +186,7 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
{
/* Disable and flush SPE data generation */
if (host_data_test_flag(HAS_SPE))
- __debug_save_spe(host_data_ptr(host_debug_state.pmscr_el1));
+ __debug_save_spe();
/* Disable BRBE branch records */
if (host_data_test_flag(HAS_BRBE))
@@ -193,7 +204,7 @@ void __debug_switch_to_guest(struct kvm_vcpu *vcpu)
void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
{
if (host_data_test_flag(HAS_SPE))
- __debug_restore_spe(*host_data_ptr(host_debug_state.pmscr_el1));
+ __debug_restore_spe();
if (host_data_test_flag(HAS_BRBE))
__debug_restore_brbe(*host_data_ptr(host_debug_state.brbcr_el1));
if (__trace_needs_switch())
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index f00688e69d88..9b6e87dac3b9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -278,7 +278,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
* We're about to restore some new MMU state. Make sure
* ongoing page-table walks that have started before we
* trapped to EL2 have completed. This also synchronises the
- * above disabling of BRBE and SPE.
+ * above disabling of BRBE.
*
* See DDI0487I.a D8.1.5 "Out-of-context translation regimes",
* rule R_LFHQG and subsequent information statements.
--
2.53.0.1018.g2bb0e51243-goog
* [PATCH v4 3/3] KVM: arm64: Don't pass host_debug_state to BRBE world-switch routines
From: Will Deacon @ 2026-03-27 13:00 UTC (permalink / raw)
To: kvmarm
Cc: mark.rutland, linux-arm-kernel, Will Deacon, Marc Zyngier,
Oliver Upton, James Clark, Leo Yan, Suzuki K Poulose, Fuad Tabba,
Alexandru Elisei, Yabin Cui
Now that the SPE and TRBE nVHE world-switch routines operate on the
host_debug_state directly, tweak the BRBE code to do the same for
consistency.
This is purely cosmetic.
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oupton@kernel.org>
Cc: James Clark <james.clark@linaro.org>
Cc: Leo Yan <leo.yan@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/kvm/hyp/nvhe/debug-sr.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
index 84bc80f4e36b..f8904391c125 100644
--- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
+++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
@@ -156,8 +156,10 @@ static void __trace_switch_to_host(void)
*host_data_ptr(host_debug_state.trfcr_el1));
}
-static void __debug_save_brbe(u64 *brbcr_el1)
+static void __debug_save_brbe(void)
{
+ u64 *brbcr_el1 = host_data_ptr(host_debug_state.brbcr_el1);
+
*brbcr_el1 = 0;
/* Check if the BRBE is enabled */
@@ -173,8 +175,10 @@ static void __debug_save_brbe(u64 *brbcr_el1)
write_sysreg_el1(0, SYS_BRBCR);
}
-static void __debug_restore_brbe(u64 brbcr_el1)
+static void __debug_restore_brbe(void)
{
+ u64 brbcr_el1 = *host_data_ptr(host_debug_state.brbcr_el1);
+
if (!brbcr_el1)
return;
@@ -190,7 +194,7 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
/* Disable BRBE branch records */
if (host_data_test_flag(HAS_BRBE))
- __debug_save_brbe(host_data_ptr(host_debug_state.brbcr_el1));
+ __debug_save_brbe();
if (__trace_needs_switch())
__trace_switch_to_guest();
@@ -206,7 +210,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
if (host_data_test_flag(HAS_SPE))
__debug_restore_spe();
if (host_data_test_flag(HAS_BRBE))
- __debug_restore_brbe(*host_data_ptr(host_debug_state.brbcr_el1));
+ __debug_restore_brbe();
if (__trace_needs_switch())
__trace_switch_to_host();
}
--
2.53.0.1018.g2bb0e51243-goog
* Re: [PATCH v4 3/3] KVM: arm64: Don't pass host_debug_state to BRBE world-switch routines
From: Fuad Tabba @ 2026-03-27 13:23 UTC (permalink / raw)
To: Will Deacon
Cc: kvmarm, mark.rutland, linux-arm-kernel, Marc Zyngier,
Oliver Upton, James Clark, Leo Yan, Suzuki K Poulose,
Alexandru Elisei, Yabin Cui
On Fri, 27 Mar 2026 at 13:01, Will Deacon <will@kernel.org> wrote:
>
> Now that the SPE and TRBE nVHE world-switch routines operate on the
> host_debug_state directly, tweak the BRBE code to do the same for
> consistency.
>
> This is purely cosmetic.
It is indeed... until Sashiko proves me wrong again :)
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Cheers,
/fuad
>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Oliver Upton <oupton@kernel.org>
> Cc: James Clark <james.clark@linaro.org>
> Cc: Leo Yan <leo.yan@arm.com>
> Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
> Cc: Fuad Tabba <tabba@google.com>
> Cc: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Will Deacon <will@kernel.org>
> ---
> arch/arm64/kvm/hyp/nvhe/debug-sr.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/debug-sr.c b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> index 84bc80f4e36b..f8904391c125 100644
> --- a/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> +++ b/arch/arm64/kvm/hyp/nvhe/debug-sr.c
> @@ -156,8 +156,10 @@ static void __trace_switch_to_host(void)
> *host_data_ptr(host_debug_state.trfcr_el1));
> }
>
> -static void __debug_save_brbe(u64 *brbcr_el1)
> +static void __debug_save_brbe(void)
> {
> + u64 *brbcr_el1 = host_data_ptr(host_debug_state.brbcr_el1);
> +
> *brbcr_el1 = 0;
>
> /* Check if the BRBE is enabled */
> @@ -173,8 +175,10 @@ static void __debug_save_brbe(u64 *brbcr_el1)
> write_sysreg_el1(0, SYS_BRBCR);
> }
>
> -static void __debug_restore_brbe(u64 brbcr_el1)
> +static void __debug_restore_brbe(void)
> {
> + u64 brbcr_el1 = *host_data_ptr(host_debug_state.brbcr_el1);
> +
> if (!brbcr_el1)
> return;
>
> @@ -190,7 +194,7 @@ void __debug_save_host_buffers_nvhe(struct kvm_vcpu *vcpu)
>
> /* Disable BRBE branch records */
> if (host_data_test_flag(HAS_BRBE))
> - __debug_save_brbe(host_data_ptr(host_debug_state.brbcr_el1));
> + __debug_save_brbe();
>
> if (__trace_needs_switch())
> __trace_switch_to_guest();
> @@ -206,7 +210,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu)
> if (host_data_test_flag(HAS_SPE))
> __debug_restore_spe();
> if (host_data_test_flag(HAS_BRBE))
> - __debug_restore_brbe(*host_data_ptr(host_debug_state.brbcr_el1));
> + __debug_restore_brbe();
> if (__trace_needs_switch())
> __trace_switch_to_host();
> }
> --
> 2.53.0.1018.g2bb0e51243-goog
>
* Re: [PATCH v4 0/3] KVM: arm64: Fix SPE and TRBE nVHE world switch
From: Marc Zyngier @ 2026-03-28 17:13 UTC (permalink / raw)
To: kvmarm, Will Deacon
Cc: mark.rutland, linux-arm-kernel, Oliver Upton, James Clark,
Leo Yan, Suzuki K Poulose, Fuad Tabba, Alexandru Elisei,
Yabin Cui
On Fri, 27 Mar 2026 13:00:43 +0000, Will Deacon wrote:
> I got the Sashiko treatment on v3, so here's a quick respin to address
> the BRBE thinko it found in the last patch.
>
> Previous versions of the series are available at:
>
> v1: https://lore.kernel.org/r/20260216130959.19317-1-will@kernel.org
> v2: https://lore.kernel.org/r/20260227212136.7660-1-will@kernel.org
> v3: https://lore.kernel.org/r/20260326141214.18990-1-will@kernel.org
>
> [...]
Applied to next, thanks!
[1/3] KVM: arm64: Disable TRBE Trace Buffer Unit when running in guest context
commit: d133aa75e39dd72e0b8577ab1f5fc17c72246536
[2/3] KVM: arm64: Disable SPE Profiling Buffer when running in guest context
commit: 07695f7dc1e141601254057a00bf4e23301eb0b2
[3/3] KVM: arm64: Don't pass host_debug_state to BRBE world-switch routines
commit: 7aba10efef1d972fc82b00b84911f07f6afbdb78
Cheers,
M.
--
Without deviation from the norm, progress is not possible.