* [PATCH 0/4] KVM: arm64: Don't perform vgic-v2 lazy init on timer injection
@ 2026-04-17 12:46 Marc Zyngier
2026-04-17 12:46 ` [PATCH 1/4] KVM: arm64: timer: Repaint kvm_timer_should_fire() to kvm_timer_pending() Marc Zyngier
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Marc Zyngier @ 2026-04-17 12:46 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Deepanshu Kartikey, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Syzkaller reported an interesting case [1] showing vgic-v2 being
initialised via the lazy init path on injection from the timer reset
path. Yes, that's convoluted. This resulted in a splat, as we could
end up scheduling in an atomic context.
Deepanshu proposed [2] a simple fix that unconditionally init'd the
GIC on vcpu reset. While this would do the trick, this is only
papering over the real issue.
The situation is that we currently have three ways to lazily init the
vgic:
- on first run of any vcpu
- on access from userspace injecting an interrupt
- on access from the kernel injecting an interrupt
The splat is caused by this last one, and it is interesting to drill
into why we end up with it.
All guest interrupts generated by the kernel itself are level. Which
means that they cannot be lost unless the generating device is being
interacted with. So there shouldn't be any need to initialise the vgic
for that reason, and we could defer it to the first run of a vcpu.
However, the timers are extra special. Each one has its own little
single bit cache that contains the last level set. And as long as the
level doesn't change, the timer code doesn't call into the interrupt
injection code, making it totally optimal.
A side effect of this optimisation is that the level interrupt
effectively becomes an edge (only the changes are reported). Which
means that the interrupt must be recorded in the vgic, or it is
forever lost. Hence the need to eagerly initialise the GIC at
injection time.
But frankly, there isn't much to gain by having this cache. All we
avoid is a lookup, an uncontended lock, and an early return. The other
interrupts generated by the kernel (PMU, vgic MI) don't have such a
cache, and nobody has complained yet.
So let's drop this cache, and remove the vgic init from the kernel
injection. If someone shouts about a loss of performance, then let's
improve the interrupt injection itself, and not paper over it. Also
use this opportunity to repaint kvm_timer_should_fire() as
kvm_timer_pending(), something that is way less ambiguous.
Patches on top of kvmarm-7.1. The reproducer didn't trigger on my
boxes, and syzkaller is down at the moment. But nothing bad happened
in my testing...
[1] https://syzkaller.appspot.com/bug?extid=12b178b7c756664d2518
[2] https://lore.kernel.org/r/20260412080437.38782-1-kartikey406@gmail.com
Marc Zyngier (4):
KVM: arm64: timer: Repaint kvm_timer_should_fire() to
kvm_timer_pending()
KVM: arm64: timer: Kill the per-timer level cache
KVM: arm64: vgic-v2: Force vgic init on injection from userspace
KVM: arm64: vgic-v2: Don't init the vgic on in-kernel interrupt
injection
arch/arm64/kvm/arch_timer.c | 44 ++++++++++++++++++------------------
arch/arm64/kvm/arm.c | 7 ++++++
arch/arm64/kvm/vgic/vgic.c | 6 ++---
include/kvm/arm_arch_timer.h | 5 ----
4 files changed, 31 insertions(+), 31 deletions(-)
--
2.47.3
* [PATCH 1/4] KVM: arm64: timer: Repaint kvm_timer_should_fire() to kvm_timer_pending()
2026-04-17 12:46 [PATCH 0/4] KVM: arm64: Don't perform vgic-v2 lazy init on timer injection Marc Zyngier
@ 2026-04-17 12:46 ` Marc Zyngier
2026-04-17 12:46 ` [PATCH 2/4] KVM: arm64: timer: Kill the per-timer level cache Marc Zyngier
` (2 subsequent siblings)
3 siblings, 0 replies; 6+ messages in thread
From: Marc Zyngier @ 2026-04-17 12:46 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Deepanshu Kartikey, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
kvm_timer_should_fire() seems to date back to a time when the author
of the timer code hadn't yet made the word "pending" part of their
vocabulary.
Having since slightly improved on that front, let's rename this predicate
to kvm_timer_pending(), which clearly indicates whether the timer
interrupt is pending or not.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arch_timer.c | 34 +++++++++++++++++-----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index cbea4d9ee9552..d6802fc87e085 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -42,7 +42,7 @@ static const u8 default_ppi[] = {
static bool kvm_timer_irq_can_fire(struct arch_timer_context *timer_ctx);
static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
struct arch_timer_context *timer_ctx);
-static bool kvm_timer_should_fire(struct arch_timer_context *timer_ctx);
+static bool kvm_timer_pending(struct arch_timer_context *timer_ctx);
static void kvm_arm_timer_write(struct kvm_vcpu *vcpu,
struct arch_timer_context *timer,
enum kvm_arch_timer_regs treg,
@@ -224,7 +224,7 @@ static irqreturn_t kvm_arch_timer_handler(int irq, void *dev_id)
else
ctx = map.direct_ptimer;
- if (kvm_timer_should_fire(ctx))
+ if (kvm_timer_pending(ctx))
kvm_timer_update_irq(vcpu, true, ctx);
if (userspace_irqchip(vcpu->kvm) &&
@@ -358,7 +358,7 @@ static enum hrtimer_restart kvm_hrtimer_expire(struct hrtimer *hrt)
return HRTIMER_NORESTART;
}
-static bool kvm_timer_should_fire(struct arch_timer_context *timer_ctx)
+static bool kvm_timer_pending(struct arch_timer_context *timer_ctx)
{
enum kvm_arch_timers index;
u64 cval, now;
@@ -417,9 +417,9 @@ void kvm_timer_update_run(struct kvm_vcpu *vcpu)
/* Populate the device bitmap with the timer states */
regs->device_irq_level &= ~(KVM_ARM_DEV_EL1_VTIMER |
KVM_ARM_DEV_EL1_PTIMER);
- if (kvm_timer_should_fire(vtimer))
+ if (kvm_timer_pending(vtimer))
regs->device_irq_level |= KVM_ARM_DEV_EL1_VTIMER;
- if (kvm_timer_should_fire(ptimer))
+ if (kvm_timer_pending(ptimer))
regs->device_irq_level |= KVM_ARM_DEV_EL1_PTIMER;
}
@@ -473,21 +473,21 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
/* Only called for a fully emulated timer */
static void timer_emulate(struct arch_timer_context *ctx)
{
- bool should_fire = kvm_timer_should_fire(ctx);
+ bool pending = kvm_timer_pending(ctx);
- trace_kvm_timer_emulate(ctx, should_fire);
+ trace_kvm_timer_emulate(ctx, pending);
- if (should_fire != ctx->irq.level)
- kvm_timer_update_irq(timer_context_to_vcpu(ctx), should_fire, ctx);
+ if (pending != ctx->irq.level)
+ kvm_timer_update_irq(timer_context_to_vcpu(ctx), pending, ctx);
- kvm_timer_update_status(ctx, should_fire);
+ kvm_timer_update_status(ctx, pending);
/*
* If the timer can fire now, we don't need to have a soft timer
* scheduled for the future. If the timer cannot fire at all,
* then we also don't need a soft timer.
*/
- if (should_fire || !kvm_timer_irq_can_fire(ctx))
+ if (pending || !kvm_timer_irq_can_fire(ctx))
return;
soft_timer_start(&ctx->hrtimer, kvm_timer_compute_delta(ctx));
@@ -685,7 +685,7 @@ static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx)
* this point and the register restoration, we'll take the
* interrupt anyway.
*/
- kvm_timer_update_irq(vcpu, kvm_timer_should_fire(ctx), ctx);
+ kvm_timer_update_irq(vcpu, kvm_timer_pending(ctx), ctx);
if (irqchip_in_kernel(vcpu->kvm))
phys_active = kvm_vgic_map_is_active(vcpu, timer_irq(ctx));
@@ -706,7 +706,7 @@ static void kvm_timer_vcpu_load_nogic(struct kvm_vcpu *vcpu)
* this point and the register restoration, we'll take the
* interrupt anyway.
*/
- kvm_timer_update_irq(vcpu, kvm_timer_should_fire(vtimer), vtimer);
+ kvm_timer_update_irq(vcpu, kvm_timer_pending(vtimer), vtimer);
/*
* When using a userspace irqchip with the architected timers and a
@@ -917,8 +917,8 @@ bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu)
vlevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_VTIMER;
plevel = sregs->device_irq_level & KVM_ARM_DEV_EL1_PTIMER;
- return kvm_timer_should_fire(vtimer) != vlevel ||
- kvm_timer_should_fire(ptimer) != plevel;
+ return kvm_timer_pending(vtimer) != vlevel ||
+ kvm_timer_pending(ptimer) != plevel;
}
void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)
@@ -1006,7 +1006,7 @@ static void unmask_vtimer_irq_user(struct kvm_vcpu *vcpu)
{
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
- if (!kvm_timer_should_fire(vtimer)) {
+ if (!kvm_timer_pending(vtimer)) {
kvm_timer_update_irq(vcpu, false, vtimer);
if (static_branch_likely(&has_gic_active_state))
set_timer_irq_phys_active(vtimer, false);
@@ -1579,7 +1579,7 @@ static bool kvm_arch_timer_get_input_level(int vintid)
ctx = vcpu_get_timer(vcpu, i);
if (timer_irq(ctx) == vintid)
- return kvm_timer_should_fire(ctx);
+ return kvm_timer_pending(ctx);
}
/* A timer IRQ has fired, but no matching timer was found? */
--
2.47.3
* [PATCH 2/4] KVM: arm64: timer: Kill the per-timer level cache
2026-04-17 12:46 [PATCH 0/4] KVM: arm64: Don't perform vgic-v2 lazy init on timer injection Marc Zyngier
2026-04-17 12:46 ` [PATCH 1/4] KVM: arm64: timer: Repaint kvm_timer_should_fire() to kvm_timer_pending() Marc Zyngier
@ 2026-04-17 12:46 ` Marc Zyngier
2026-04-17 15:56 ` Marc Zyngier
2026-04-17 12:46 ` [PATCH 3/4] KVM: arm64: vgic-v2: Force vgic init on injection from userspace Marc Zyngier
2026-04-17 12:46 ` [PATCH 4/4] KVM: arm64: vgic-v2: Don't init the vgic on in-kernel interrupt injection Marc Zyngier
3 siblings, 1 reply; 6+ messages in thread
From: Marc Zyngier @ 2026-04-17 12:46 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Deepanshu Kartikey, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
The timer code makes use of a per-timer irq level cache, which
looks like a very minor optimisation: it avoids taking a lock when
updating the GIC view of the interrupt whose level is unchanged from
the previous state.
This is getting in the way of more important correctness issues,
so get rid of the cache, which simplifies a couple of minor things.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arch_timer.c | 18 +++++++++---------
include/kvm/arm_arch_timer.h | 5 -----
2 files changed, 9 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
index d6802fc87e085..fdc1afff06340 100644
--- a/arch/arm64/kvm/arch_timer.c
+++ b/arch/arm64/kvm/arch_timer.c
@@ -446,9 +446,8 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
{
kvm_timer_update_status(timer_ctx, new_level);
- timer_ctx->irq.level = new_level;
trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_irq(timer_ctx),
- timer_ctx->irq.level);
+ new_level);
if (userspace_irqchip(vcpu->kvm))
return;
@@ -466,7 +465,7 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
kvm_vgic_inject_irq(vcpu->kvm, vcpu,
timer_irq(timer_ctx),
- timer_ctx->irq.level,
+ new_level,
timer_ctx);
}
@@ -477,8 +476,7 @@ static void timer_emulate(struct arch_timer_context *ctx)
trace_kvm_timer_emulate(ctx, pending);
- if (pending != ctx->irq.level)
- kvm_timer_update_irq(timer_context_to_vcpu(ctx), pending, ctx);
+ kvm_timer_update_irq(timer_context_to_vcpu(ctx), pending, ctx);
kvm_timer_update_status(ctx, pending);
@@ -677,6 +675,7 @@ static inline void set_timer_irq_phys_active(struct arch_timer_context *ctx, boo
static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx)
{
struct kvm_vcpu *vcpu = timer_context_to_vcpu(ctx);
+ bool pending = kvm_timer_pending(ctx);
bool phys_active = false;
/*
@@ -685,12 +684,12 @@ static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx)
* this point and the register restoration, we'll take the
* interrupt anyway.
*/
- kvm_timer_update_irq(vcpu, kvm_timer_pending(ctx), ctx);
+ kvm_timer_update_irq(vcpu, pending, ctx);
if (irqchip_in_kernel(vcpu->kvm))
phys_active = kvm_vgic_map_is_active(vcpu, timer_irq(ctx));
- phys_active |= ctx->irq.level;
+ phys_active |= pending;
phys_active |= vgic_is_v5(vcpu->kvm);
set_timer_irq_phys_active(ctx, phys_active);
@@ -699,6 +698,7 @@ static void kvm_timer_vcpu_load_gic(struct arch_timer_context *ctx)
static void kvm_timer_vcpu_load_nogic(struct kvm_vcpu *vcpu)
{
struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ bool pending = kvm_timer_pending(vtimer);
/*
* Update the timer output so that it is likely to match the
@@ -706,7 +706,7 @@ static void kvm_timer_vcpu_load_nogic(struct kvm_vcpu *vcpu)
* this point and the register restoration, we'll take the
* interrupt anyway.
*/
- kvm_timer_update_irq(vcpu, kvm_timer_pending(vtimer), vtimer);
+ kvm_timer_update_irq(vcpu, pending, vtimer);
/*
* When using a userspace irqchip with the architected timers and a
@@ -718,7 +718,7 @@ static void kvm_timer_vcpu_load_nogic(struct kvm_vcpu *vcpu)
* being de-asserted, we unmask the interrupt again so that we exit
* from the guest when the timer fires.
*/
- if (vtimer->irq.level)
+ if (pending)
disable_percpu_irq(host_vtimer_irq);
else
enable_percpu_irq(host_vtimer_irq, host_vtimer_irq_flags);
diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
index bf8cc9589bd09..2c26d457c3510 100644
--- a/include/kvm/arm_arch_timer.h
+++ b/include/kvm/arm_arch_timer.h
@@ -66,11 +66,6 @@ struct arch_timer_context {
*/
bool loaded;
- /* Output level of the timer IRQ */
- struct {
- bool level;
- } irq;
-
/* Who am I? */
enum kvm_arch_timers timer_id;
--
2.47.3
* [PATCH 3/4] KVM: arm64: vgic-v2: Force vgic init on injection from userspace
2026-04-17 12:46 [PATCH 0/4] KVM: arm64: Don't perform vgic-v2 lazy init on timer injection Marc Zyngier
2026-04-17 12:46 ` [PATCH 1/4] KVM: arm64: timer: Repaint kvm_timer_should_fire() to kvm_timer_pending() Marc Zyngier
2026-04-17 12:46 ` [PATCH 2/4] KVM: arm64: timer: Kill the per-timer level cache Marc Zyngier
@ 2026-04-17 12:46 ` Marc Zyngier
2026-04-17 12:46 ` [PATCH 4/4] KVM: arm64: vgic-v2: Don't init the vgic on in-kernel interrupt injection Marc Zyngier
3 siblings, 0 replies; 6+ messages in thread
From: Marc Zyngier @ 2026-04-17 12:46 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Deepanshu Kartikey, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
Make sure that any attempt to inject an interrupt from userspace
results in the GICv2 lazy init taking place. This is not currently
necessary, as the init is also performed on *any* interrupt injection.
But as we're about to remove that, let's introduce it here.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/arm.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 176cbe8baad30..e856cf4099f42 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -51,6 +51,7 @@
#include <linux/irqchip/arm-gic-v5.h>
+#include "vgic/vgic.h"
#include "sys_regs.h"
static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;
@@ -1475,6 +1476,12 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
trace_kvm_irq_line(irq_type, vcpu_id, irq_num, irq_level->level);
+ if (irqchip_in_kernel(kvm)) {
+ int ret = vgic_lazy_init(kvm);
+ if (ret)
+ return ret;
+ }
+
switch (irq_type) {
case KVM_ARM_IRQ_TYPE_CPU:
if (irqchip_in_kernel(kvm))
--
2.47.3
* [PATCH 4/4] KVM: arm64: vgic-v2: Don't init the vgic on in-kernel interrupt injection
2026-04-17 12:46 [PATCH 0/4] KVM: arm64: Don't perform vgic-v2 lazy init on timer injection Marc Zyngier
` (2 preceding siblings ...)
2026-04-17 12:46 ` [PATCH 3/4] KVM: arm64: vgic-v2: Force vgic init on injection from userspace Marc Zyngier
@ 2026-04-17 12:46 ` Marc Zyngier
3 siblings, 0 replies; 6+ messages in thread
From: Marc Zyngier @ 2026-04-17 12:46 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Deepanshu Kartikey, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
We now have the lazy init on three paths:
- on first run of a vcpu
- on first injection of an interrupt from userspace
- on first injection of an interrupt from kernel space
Given that we recompute the state of each in-kernel interrupt
every time we are about to enter the guest, we can drop the lazy
init from the kernel injection path.
This solves a bunch of issues related to vgic_lazy_init() being called
in non-preemptible context.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/vgic/vgic.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/vgic/vgic.c b/arch/arm64/kvm/vgic/vgic.c
index 1e9fe8764584d..9e29f03d3463c 100644
--- a/arch/arm64/kvm/vgic/vgic.c
+++ b/arch/arm64/kvm/vgic/vgic.c
@@ -534,11 +534,9 @@ int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu,
{
struct vgic_irq *irq;
unsigned long flags;
- int ret;
- ret = vgic_lazy_init(kvm);
- if (ret)
- return ret;
+ if (unlikely(!vgic_initialized(kvm)))
+ return 0;
if (!vcpu && irq_is_private(kvm, intid))
return -EINVAL;
--
2.47.3
* Re: [PATCH 2/4] KVM: arm64: timer: Kill the per-timer level cache
2026-04-17 12:46 ` [PATCH 2/4] KVM: arm64: timer: Kill the per-timer level cache Marc Zyngier
@ 2026-04-17 15:56 ` Marc Zyngier
0 siblings, 0 replies; 6+ messages in thread
From: Marc Zyngier @ 2026-04-17 15:56 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Deepanshu Kartikey, Joey Gouly, Suzuki K Poulose, Oliver Upton,
Zenghui Yu
On Fri, 17 Apr 2026 13:46:10 +0100,
Marc Zyngier <maz@kernel.org> wrote:
>
> The timer code makes use of a per-timer irq level cache, which
> looks like a very minor optimisation to avoid taking a lock upon
> updating the GIC view of the interrupt when it is unchanged from
> the previous state.
>
> This is coming in the way of more important correctness issues,
> so get rid of the cache, which simplifies a couple of minor things.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/arch_timer.c | 18 +++++++++---------
> include/kvm/arm_arch_timer.h | 5 -----
> 2 files changed, 9 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
> index d6802fc87e085..fdc1afff06340 100644
> --- a/arch/arm64/kvm/arch_timer.c
> +++ b/arch/arm64/kvm/arch_timer.c
> @@ -446,9 +446,8 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
> {
> kvm_timer_update_status(timer_ctx, new_level);
>
> - timer_ctx->irq.level = new_level;
> trace_kvm_timer_update_irq(vcpu->vcpu_id, timer_irq(timer_ctx),
> - timer_ctx->irq.level);
> + new_level);
>
> if (userspace_irqchip(vcpu->kvm))
> return;
> @@ -466,7 +465,7 @@ static void kvm_timer_update_irq(struct kvm_vcpu *vcpu, bool new_level,
>
> kvm_vgic_inject_irq(vcpu->kvm, vcpu,
> timer_irq(timer_ctx),
> - timer_ctx->irq.level,
> + new_level,
> timer_ctx);
> }
>
> @@ -477,8 +476,7 @@ static void timer_emulate(struct arch_timer_context *ctx)
>
> trace_kvm_timer_emulate(ctx, pending);
>
> - if (pending != ctx->irq.level)
> - kvm_timer_update_irq(timer_context_to_vcpu(ctx), pending, ctx);
> + kvm_timer_update_irq(timer_context_to_vcpu(ctx), pending, ctx);
>
> kvm_timer_update_status(ctx, pending);
As my new best mate Sashiko pointed out, the kvm_timer_update_status()
call here becomes redundant, as the now-unconditional call to
kvm_timer_update_irq() already performs it.
I'll drop it from the patch when applying, unless there are more
comments.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.