linux-arm-kernel.lists.infradead.org archive mirror
From: Marc Zyngier <maz@kernel.org>
To: Fuad Tabba <tabba@google.com>
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, Joey Gouly <joey.gouly@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Oliver Upton <oupton@kernel.org>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Mark Brown <broonie@kernel.org>
Subject: Re: [PATCH v3 3/5] KVM: arm64: GICv3: nv: Resync LRs/VMCR/HCR early for better MI emulation
Date: Mon, 17 Nov 2025 11:34:01 +0000	[thread overview]
Message-ID: <864iqsuc46.wl-maz@kernel.org> (raw)
In-Reply-To: <CA+EHjTwn7PUykGngWRpK3T9gQ_w8=3+BrmEk9GthH0MgMi3FVw@mail.gmail.com>

On Mon, 17 Nov 2025 11:24:24 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> 
> On Mon, 17 Nov 2025 at 09:15, Marc Zyngier <maz@kernel.org> wrote:
> >
> > The current approach to nested GICv3 support is to not do anything
> > while L2 is running, wait for a transition from L2 to L1 to resync
> > the LRs, VMCR and HCR, and only then evaluate the state to decide
> > whether to generate a maintenance interrupt.
> >
> > This doesn't provide a good quality of emulation, and it would be
> > far preferable to find out early that we need to perform a switch.
> >
> > Move the LRs/VMCR and HCR resync into vgic_v3_sync_nested(), so
> > that we have most of the state available. As we are turning the
> > vgic off at this stage to avoid a screaming host MI, add a new
> > helper vgic_v3_flush_nested() that switches the vgic back on. The
> > MI can then be directly injected as required.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_hyp.h     |  1 +
> >  arch/arm64/kvm/hyp/vgic-v3-sr.c      |  2 +-
> >  arch/arm64/kvm/vgic/vgic-v3-nested.c | 69 ++++++++++++++++------------
> >  arch/arm64/kvm/vgic/vgic.c           |  6 ++-
> >  arch/arm64/kvm/vgic/vgic.h           |  1 +
> >  5 files changed, 46 insertions(+), 33 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> > index dbf16a9f67728..76ce2b94bd97e 100644
> > --- a/arch/arm64/include/asm/kvm_hyp.h
> > +++ b/arch/arm64/include/asm/kvm_hyp.h
> > @@ -77,6 +77,7 @@ DECLARE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
> >  int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
> >
> >  u64 __gic_v3_get_lr(unsigned int lr);
> > +void __gic_v3_set_lr(u64 val, int lr);
> >
> >  void __vgic_v3_save_state(struct vgic_v3_cpu_if *cpu_if);
> >  void __vgic_v3_restore_state(struct vgic_v3_cpu_if *cpu_if);
> > diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
> > index 71199e1a92940..99342c13e1794 100644
> > --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
> > +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
> > @@ -60,7 +60,7 @@ u64 __gic_v3_get_lr(unsigned int lr)
> >         unreachable();
> >  }
> >
> > -static void __gic_v3_set_lr(u64 val, int lr)
> > +void __gic_v3_set_lr(u64 val, int lr)
> >  {
> >         switch (lr & 0xf) {
> >         case 0:
> > diff --git a/arch/arm64/kvm/vgic/vgic-v3-nested.c b/arch/arm64/kvm/vgic/vgic-v3-nested.c
> > index 17bceef83269e..bf37fd3198ba7 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v3-nested.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v3-nested.c
> > @@ -70,13 +70,14 @@ static int lr_map_idx_to_shadow_idx(struct shadow_if *shadow_if, int idx)
> >   * - on L2 put: perform the inverse transformation, so that the result of L2
> >   *   running becomes visible to L1 in the VNCR-accessible registers.
> >   *
> > - * - there is nothing to do on L2 entry, as everything will have happened
> > - *   on load. However, this is the point where we detect that an interrupt
> > - *   targeting L1 and prepare the grand switcheroo.
> > + * - there is nothing to do on L2 entry apart from enabling the vgic, as
> > + *   everything will have happened on load. However, this is the point
> > + *   where we detect an interrupt targeting L1 and prepare the grand
> > + *   switcheroo.
> >   *
> > - * - on L2 exit: emulate the HW bit, and deactivate corresponding the L1
> > - *   interrupt. The L0 active state will be cleared by the HW if the L1
> > - *   interrupt was itself backed by a HW interrupt.
> > + * - on L2 exit: resync the LRs and VMCR, emulate the HW bit, and deactivate
> > + *   the corresponding L1 interrupt. The L0 active state will be cleared by
> > + *   the HW if the L1 interrupt was itself backed by a HW interrupt.
> >   *
> >   * Maintenance Interrupt (MI) management:
> >   *
> > @@ -265,15 +266,30 @@ static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu,
> >         s_cpu_if->used_lrs = hweight16(shadow_if->lr_map);
> >  }
> >
> > +void vgic_v3_flush_nested(struct kvm_vcpu *vcpu)
> > +{
> > +       u64 val = __vcpu_sys_reg(vcpu, ICH_HCR_EL2);
> > +
> > +       write_sysreg_s(val | vgic_ich_hcr_trap_bits(), SYS_ICH_HCR_EL2);
> > +}
> > +
> >  void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
> >  {
> >         struct shadow_if *shadow_if = get_shadow_if();
> >         int i;
> >
> >         for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) {
> > -               u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
> > +               u64 val, host_lr, lr;
> >                 struct vgic_irq *irq;
> >
> > +               host_lr = __gic_v3_get_lr(lr_map_idx_to_shadow_idx(shadow_if, i));
> > +
> > +               /* Propagate the new LR state */
> > +               lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
> > +               val = lr & ~ICH_LR_STATE;
> > +               val |= host_lr & ICH_LR_STATE;
> > +               __vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val);
> > +
> 
> As I said before, I am outside of my comfort zone here. However,
> should the following check be changed to use the merged 'val', rather
> than the guest lr as it was?

[...]

>
> >                 if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE))
> >                         continue;

No, this decision must be taken based on the *original* state, before
the L2 guest was run. If the LR was in an invalid state in the first
place, there is nothing to do.
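
To make the split explicit, here is a standalone sketch of the resync
merge from the hunk above. The bit positions follow the architectural
ICH_LR<n>_EL2 layout (State in bits [63:62], HW in bit 61), but the
helper name and the exact masks are illustrative, not the kernel code:

```c
#include <stdint.h>

#define ICH_LR_STATE (3ULL << 62) /* pending/active state bits */
#define ICH_LR_HW    (1ULL << 61) /* LR backed by a HW interrupt */

/*
 * Merge: keep the guest's configuration bits (HW, priority, vINTID...),
 * but take the state bits from what the hardware left in the shadow LR
 * after the L2 run.
 */
static uint64_t resync_lr(uint64_t guest_lr, uint64_t host_lr)
{
	return (guest_lr & ~ICH_LR_STATE) | (host_lr & ICH_LR_STATE);
}
```

The point of the discussion is that the HW/STATE *check* still operates
on the pre-merge guest value, while the merged value is only what gets
written back to the vcpu's view of the LR.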

> >
> > @@ -286,12 +302,21 @@ void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
> >                 if (WARN_ON(!irq)) /* Shouldn't happen as we check on load */
> >                         continue;
> >
> > -               lr = __gic_v3_get_lr(lr_map_idx_to_shadow_idx(shadow_if, i));
> > -               if (!(lr & ICH_LR_STATE))
> > +               if (!(host_lr & ICH_LR_STATE))
> >                         irq->active = false;

And here, if we see that the *new* state (as fished out of the HW LRs)
is now invalid, this means that a deactivation has taken place in L2,
and we must propagate it to L1.
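
Putting the two answers together, the decision can be sketched as a
single predicate (an illustrative helper, not the kernel function; bit
layout as in ICH_LR<n>_EL2):

```c
#include <stdbool.h>
#include <stdint.h>

#define ICH_LR_STATE (3ULL << 62) /* pending/active state bits */
#define ICH_LR_HW    (1ULL << 61) /* LR backed by a HW interrupt */

/*
 * pre_run_lr: L1's view of the LR before the L2 run;
 * host_lr:    what the hardware left in the shadow LR afterwards.
 */
static bool should_deactivate(uint64_t pre_run_lr, uint64_t host_lr)
{
	/*
	 * Gate on the *original* state: if the LR didn't describe a
	 * valid HW-backed interrupt in the first place, there is
	 * nothing to do.
	 */
	if (!(pre_run_lr & ICH_LR_HW) || !(pre_run_lr & ICH_LR_STATE))
		return false;

	/*
	 * If the *new* state is invalid, a deactivation took place in
	 * L2 and must be propagated to the L1 interrupt.
	 */
	return !(host_lr & ICH_LR_STATE);
}
```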

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


Thread overview: 28+ messages
2025-11-17  9:15 [PATCH v3 0/5] KVM: arm64: Add LR overflow infrastructure (the dregs, the bad and the ugly) Marc Zyngier
2025-11-17  9:15 ` [PATCH v3 1/5] KVM: arm64: GICv3: Don't advertise ICH_HCR_EL2.En==1 when no vgic is configured Marc Zyngier
2025-11-17 10:34   ` Fuad Tabba
2025-11-17 11:28     ` Marc Zyngier
2025-11-17 11:29   ` Fuad Tabba
2025-11-17  9:15 ` [PATCH v3 2/5] KVM: arm64: GICv3: Completely disable trapping on vcpu exit Marc Zyngier
2025-11-17 10:36   ` Fuad Tabba
2025-11-17  9:15 ` [PATCH v3 3/5] KVM: arm64: GICv3: nv: Resync LRs/VMCR/HCR early for better MI emulation Marc Zyngier
2025-11-17 11:24   ` Fuad Tabba
2025-11-17 11:34     ` Marc Zyngier [this message]
2025-11-17 11:37       ` Fuad Tabba
2025-11-17  9:15 ` [PATCH v3 4/5] KVM: arm64: GICv3: Remove vgic_hcr workaround handling leftovers Marc Zyngier
2025-11-17 11:25   ` Fuad Tabba
2025-11-17  9:15 ` [PATCH v3 5/5] KVM: arm64: GICv3: Force exit to sync ICH_HCR_EL2.En Marc Zyngier
2025-11-17 11:35   ` Fuad Tabba
2025-11-17 11:42     ` Marc Zyngier
2025-11-17 11:48       ` Fuad Tabba
2025-11-18  7:16   ` Oliver Upton
2025-11-18  8:54     ` Marc Zyngier
2025-11-17  9:40 ` [PATCH v3 0/5] KVM: arm64: Add LR overflow infrastructure (the dregs, the bad and the ugly) Fuad Tabba
2025-11-17  9:54   ` Marc Zyngier
2025-11-17 10:18     ` Fuad Tabba
2025-11-17 12:54       ` Fuad Tabba
2025-11-18  7:20 ` Oliver Upton
2025-11-18 13:59   ` Fuad Tabba
2025-11-18 19:06     ` Marc Zyngier
2025-11-19 10:37       ` Fuad Tabba
2025-11-18 23:34 ` Oliver Upton
