From: Sascha Bischoff <Sascha.Bischoff@arm.com>
To: "maz@kernel.org" <maz@kernel.org>
Cc: "yuzenghui@huawei.com" <yuzenghui@huawei.com>,
Timothy Hayes <Timothy.Hayes@arm.com>,
Suzuki Poulose <Suzuki.Poulose@arm.com>, nd <nd@arm.com>,
"peter.maydell@linaro.org" <peter.maydell@linaro.org>,
"kvmarm@lists.linux.dev" <kvmarm@lists.linux.dev>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
Joey Gouly <Joey.Gouly@arm.com>,
"lpieralisi@kernel.org" <lpieralisi@kernel.org>,
"oliver.upton@linux.dev" <oliver.upton@linux.dev>
Subject: Re: [PATCH 09/43] KVM: arm64: gic-v5: Implement VMT/vIST IRS MMIO Ops
Date: Fri, 8 May 2026 13:31:20 +0000
Message-ID: <c78e94487f33a6bd733272d93f29c4e192fa9ae5.camel@arm.com>
In-Reply-To: <86jytpzrae.wl-maz@kernel.org>
On Wed, 2026-04-29 at 17:04 +0100, Marc Zyngier wrote:
> On Mon, 27 Apr 2026 17:09:06 +0100,
> Sascha Bischoff <Sascha.Bischoff@arm.com> wrote:
> >
> > GICv5 has rules about which fields of a VMTE (or L1 VMT) may be
> > directly written by the host once the table is valid. This ensures
> > that no stale state is cached by the hardware, and provides a clear
> > interface for making VMs, ISTs, etc, valid.
> >
> > The hypervisor is responsible for populating the VMTE for a
> > VM. However, it is not permitted to write the Valid bit (as the VM
> > table is already valid). Instead, the VM is made valid via an IRS
> > MMIO Op. The same applies to the ISTs - they must be made valid via
> > the host IRS.
> >
> > This commit adds support for:
> >
> > * Making level 2 VMTs valid (only), allowing for dynamic level 2 table allocation.
>
> Isn't it level 1 instead, if L2 is supposed to be dynamic?
Uh, yes. This always ends up a bit backwards in my thought process. The
L2 array is mapped in by marking the L1 VMT entry as valid. I've fixed
this in the commit message.
>
> > * Making VMTEs (VMs) valid or invalid
> > * Making SPI/LPI ISTs valid or invalid for a specific VM
> >
> > When (successfully) probing for a GICv5, the VMT is allocated, and
> > is made valid via the IRS's MMIO interface.
> >
> > This commit also extends the doorbell domain to allow the doorbells
> > themselves to act as a conduit for issuing commands - this is
> > similar to what exists for GICv4 support. Effectively,
> > irq_set_vcpu_affinity() becomes an ioctl-like interface for issuing
> > commands specific to either a VM or the particular VPE that the
> > doorbell belongs to. This change adds support for the following via
> > the VPE doorbells:
> >
> > VMT_L2_MAP - Make a second level VM table valid
> > VMTE_MAKE_VALID - Make a single VMTE (and hence VM) valid
> > VMTE_MAKE_INVALID - Make a single VMTE (and hence VM)
> > invalid
> > SPI_VIST_MAKE_VALID - Make the SPI IST valid
> > LPI_VIST_MAKE_VALID - Make the LPI IST valid
> > LPI_VIST_MAKE_INVALID - Make the LPI IST invalid
> >
> > Note: It is intentional that there is no SPI_VIST_MAKE_INVALID -
> > this cannot happen while the VM is live, and given that the SPI is
>
> This SPI_VIST_MAKE_VALID is introduced in the previous patch. It feels
> weird to only explain the lack of INVALID here...
I've shuffled these things around a bit, as well as the commit
messages. Hopefully, it will be clearer in v2!
>
> > allocated as part of VM creation, there is no need to make it
> > invalid again until the VM is destroyed, at which point the VMTE is
> > invalid. Therefore, there's no need to do this via the host's IRS
> > MMIO interface, as it can be directly marked as invalid and freed.
> > LPIs, on the other hand, are driven by the guest itself, and the
> > guest is theoretically free to invalidate and free the LPI IST at
> > any point.
> >
> > Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
> > ---
> > arch/arm64/kvm/vgic/vgic-v5-tables.c |  25 +++
> > arch/arm64/kvm/vgic/vgic-v5-tables.h |   2 +
> > arch/arm64/kvm/vgic/vgic-v5.c        | 236 ++++++++++++++++++++++++++-
> > include/linux/irqchip/arm-gic-v5.h   |  30 ++++
> > 4 files changed, 290 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/vgic/vgic-v5-tables.c b/arch/arm64/kvm/vgic/vgic-v5-tables.c
> > index de905f37b61a5..0120c3205dea6 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v5-tables.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v5-tables.c
> > @@ -666,6 +666,26 @@ int vgic_v5_vmte_free_vpe(struct kvm_vcpu *vcpu)
> > return 0;
> > }
> >
> > +phys_addr_t vgic_v5_get_vmt_base(void)
> > +{
> > + phys_addr_t vmt_base;
> > +
> > + if (!vgic_v5_vmt_allocated())
> > + return -ENXIO;
> > +
> > + if (!vmt_info->two_level)
> > + vmt_base = virt_to_phys(vmt_info->linear.vmt_base);
> > + else
> > + vmt_base = virt_to_phys(vmt_info->l2.vmt_base);
> > +
> > + return vmt_base;
> > +}
> > +
> > +u8 vgic_v5_vmt_vpe_id_bits(void)
> > +{
> > + return fls(vmt_info->max_vpes) - 1;
> > +}
> > +
> > /*
> > * Assign an already allocated IST to the VM by populating the fields in the
> > * corresponding VMTE. We re-use this code for both an SPI IST and LPI IST, even
> > @@ -715,6 +735,11 @@ int vgic_v5_vmte_assign_ist(struct kvm *kvm, phys_addr_t ist_base,
> > /* Finally, mark the entry as valid */
> > cmd_info.cmd_type = spi_ist ? SPI_VIST_MAKE_VALID : LPI_VIST_MAKE_VALID;
> > ret = irq_set_vcpu_affinity(vgic_v5_vpe_db(vcpu0), &cmd_info);
> > + if (ret) {
> > + WRITE_ONCE(vmte->val[section], 0ULL);
> > + vgic_v5_clean_inval(vmte, sizeof(*vmte), true, false);
> > + return ret;
> > + }
> >
> > /* Any cached entries we now have are stale! */
> > vgic_v5_clean_inval(vmte, sizeof(*vmte), false, true);
> > diff --git a/arch/arm64/kvm/vgic/vgic-v5-tables.h b/arch/arm64/kvm/vgic/vgic-v5-tables.h
> > index 37e220cda1987..6a024337eba79 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v5-tables.h
> > +++ b/arch/arm64/kvm/vgic/vgic-v5-tables.h
> > @@ -150,6 +150,8 @@ int vgic_v5_vmt_allocate(bool two_level, unsigned int num_entries,
> > size_t vmd_size, size_t vped_size,
> > unsigned int vpe_id_bits);
> > int vgic_v5_vmt_free(void);
> > +phys_addr_t vgic_v5_get_vmt_base(void);
> > +u8 vgic_v5_vmt_vpe_id_bits(void);
> >
> > int vgic_v5_allocate_vm_id(struct kvm *kvm);
> > void vgic_v5_release_vm_id(struct kvm *kvm);
> > diff --git a/arch/arm64/kvm/vgic/vgic-v5.c b/arch/arm64/kvm/vgic/vgic-v5.c
> > index 4e0d52b309628..49eb01ca07961 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v5.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v5.c
> > @@ -36,6 +36,12 @@ static void vgic_v5_get_implemented_ppis(void)
> > __assign_bit(GICV5_ARCH_PPI_PMUIRQ, ppi_caps.impl_ppi_mask,
> > system_supports_pmuv3());
> > }
> >
> > +/*
> > + * The IRS MMIO interface is shared between all VMs, so make sure we don't do
> > + * anything stupid!
> > + */
> > +static DEFINE_RAW_SPINLOCK(vm_config_lock);
> > +
> > +
>
> I don't think you could have picked a worse name for this lock. It
> has
> nothing to do with a VM. It really is a global IRS lock.
I've gone and changed it to exactly that: global_irs_lock. When
originally writing it, my thinking was that we're updating a VM's
config via the IRS, but I completely see why that's a misleading name!
>
> > static void __iomem *irs_base;
> >
> > static u32 irs_readl_relaxed(const u32 reg_offset)
> > @@ -43,6 +49,21 @@ static u32 irs_readl_relaxed(const u32 reg_offset)
> > return readl_relaxed(irs_base + reg_offset);
> > }
> >
> > +static void irs_writel_relaxed(const u32 val, const u32 reg_offset)
> > +{
> > + writel_relaxed(val, irs_base + reg_offset);
> > +}
> > +
> > +static u64 irs_readq_relaxed(const u32 reg_offset)
> > +{
> > + return readq_relaxed(irs_base + reg_offset);
> > +}
> > +
> > +static void irs_writeq_relaxed(const u64 val, const u32 reg_offset)
> > +{
> > + writeq_relaxed(val, irs_base + reg_offset);
> > +}
> > +
> > static int gicv5_irs_extract_vm_caps(const struct gic_kvm_info *info)
> > {
> > u64 idr;
> > @@ -84,16 +105,22 @@ static int gicv5_irs_extract_vm_caps(const struct gic_kvm_info *info)
> > return 0;
> > }
> >
> > +/* Forward decl for cleaner code layout */
>
> Drop this comment. The intent is pretty obvious. And maybe move them
> to the top, so that all forward declarations are grouped together.
Done & done.
>
> > +static int vgic_v5_irs_assign_vmt(bool two_level, u8 vm_id_bits, phys_addr_t vmt_base);
> > +static int vgic_v5_irs_clear_vmt(void);
> > +
> > /*
> > * Probe for a vGICv5 compatible interrupt controller, returning 0 on success.
> > */
> > int vgic_v5_probe(const struct gic_kvm_info *info)
> > {
> > + struct vgic_v5_host_ist_caps *ist_caps;
> > bool v5_registered = false;
> > u64 ich_vtr_el2;
> > int ret;
> >
> > kvm_vgic_global_state.type = VGIC_V5;
> > + kvm_vgic_global_state.max_gic_vcpus = VGIC_V5_MAX_CPUS;
> >
> > kvm_vgic_global_state.vcpu_base = 0;
> > kvm_vgic_global_state.vctrl_base = NULL;
> > @@ -114,13 +141,53 @@ int vgic_v5_probe(const struct gic_kvm_info *info)
> > if (gicv5_irs_extract_vm_caps(info))
> > goto skip_v5;
> >
> > - kvm_vgic_global_state.max_gic_vcpus = VGIC_V5_MAX_CPUS;
> > + ist_caps = vgic_v5_host_caps();
> > +
> > + /*
> > + * Even if the HW supports more per-VM vCPUs, artificially cap as we
> > + * can't use them all.
> > + */
> > + kvm_vgic_global_state.max_gic_vcpus = min(ist_caps->max_vpes,
> > + VGIC_V5_MAX_CPUS);
>
> Can this be less than 512, which we still want to support for GICv3?
Hmm, yes. The minimum number of VPE_ID_BITS that the hardware must
support is 7 => 128 VPEs.
It feels as if we need two different max_vcpus values then. For GICv3,
we can always support up to 512 on GICv5 HW, but we might end up in a
situation where we support fewer for a native VM. That feels a bit
backwards to me, but definitely could happen.
>
> > +
> > + /*
> > + * GICv5 requires a set of tables to be allocated in order to manage
> > + * VMs. We allocate them in advance here, which alas means that we
> > + * already have to make a decision regarding the maximum number of VMs
> > + * we want to run. For now, we match the maximum number offered by the
> > + * hardware, but this might not be a wise choice in the long term.
> > + */
> > + ret = vgic_v5_vmt_allocate(ist_caps->two_level_vmt_support,
> > + ist_caps->max_vms, ist_caps->vmd_size,
> > + ist_caps->vped_size,
>
> Why don't you just pass irs_caps to the allocator instead of teasing
> out individual fields?
Given that this is now part of kvm_vgic_global_state, I don't even need
to do that anymore. I've simplified this by extracting most of the
information directly in vgic_v5_vmt_allocate().
>
> > + kvm_vgic_global_state.max_gic_vcpus);
> > + if (ret) {
> > + kvm_err("Failed to allocate the GICv5 VM tables; no GICv5 support\n");
> > + goto skip_v5;
>
> Turn this into a hard fail.
Done.
>
> > + }
> > +
> > + /*
> > + * We've now allocated the VM table, but the host's IRS doesn't know
> > + * about it yet. Provide the base address of the VMT to the IRS, as
> > + * well as the number of ID bits that it covers and the structure used
> > + * (linear/two-level).
> > + */
> > + ret = vgic_v5_irs_assign_vmt(ist_caps->two_level_vmt_support,
> > + vgic_v5_vmt_vpe_id_bits(),
> > + vgic_v5_get_vmt_base());
> > + if (ret) {
> > + kvm_err("Failed to assign the GICv5 VM tables to the IRS; no GICv5 support\n");
> > + vgic_v5_vmt_free();
> > + goto skip_v5;
I've also made this a hard fail. In both of these cases, things are
rather broken!
> > + }
> >
> > vgic_v5_get_implemented_ppis();
> >
> > ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V5);
> > if (ret) {
> > kvm_err("Cannot register GICv5 KVM device.\n");
> > + vgic_v5_irs_clear_vmt();
> > + vgic_v5_vmt_free();
> > goto skip_v5;
> > }
> >
> > @@ -148,12 +215,13 @@ int vgic_v5_probe(const struct gic_kvm_info *info)
> > ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V3);
> > if (ret) {
> > kvm_err("Cannot register GICv3-legacy KVM device.\n");
> > - return ret;
> > + /* vGICv5 should still work */
> > + return v5_registered ? 0 : ret;
> > }
> >
> > /* We potentially limit the max VCPUs further than we need to here */
> > kvm_vgic_global_state.max_gic_vcpus = min(VGIC_V3_MAX_CPUS,
> > - VGIC_V5_MAX_CPUS);
> > + kvm_vgic_global_state.max_gic_vcpus);
> >
> > static_branch_enable(&kvm_vgic_global_state.gicv3_cpuif);
> > kvm_info("GCIE legacy system register CPU interface\n");
> > @@ -163,6 +231,167 @@ int vgic_v5_probe(const struct gic_kvm_info *info)
> > return 0;
> > }
> >
> > +/*
> > + * Wait for completion of a change in any of IRS_VMT_BASER, IRS_VMAP_L2_VMTR,
> > + * IRS_VMAP_VMR, IRS_VMAP_VPER, IRS_VMAP_VISTR, IRS_VMAP_L2_VISTR.
> > + */
> > +static int vgic_v5_irs_wait_for_vm_op(void)
> > +{
> > + u32 statusr;
> > + int ret;
> > +
> > + ret = readl_relaxed_poll_timeout_atomic(
> > + irs_base + GICV5_IRS_VMT_STATUSR, statusr,
> > + FIELD_GET(GICV5_IRS_VMT_STATUSR_IDLE, statusr), 1,
> > + USEC_PER_SEC);
>
> nit: please don't split this line before the first parameter of the
> function.
Ack.
I've actually gone a step further and dropped all of the boilerplate
here. We already have a helper for this in the GICv5 header file, which
is used in the host driver. Might as well reuse that one here.
>
> > +
> > + if (ret == -ETIMEDOUT) {
> > + pr_err_ratelimited("Time out waiting for IRS VM Op\n");
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int vgic_v5_irs_assign_vmt(bool two_level, u8 vm_id_bits, phys_addr_t vmt_base)
> > +{
> > + u64 vmt_baser;
> > + u32 vmt_cfgr;
> > +
> > + vmt_baser = irs_readq_relaxed(GICV5_IRS_VMT_BASER);
> > + if (!!FIELD_GET(GICV5_IRS_VMT_BASER_VALID, vmt_baser))
> > + return -EBUSY;
> > +
> > + vmt_cfgr = FIELD_PREP(GICV5_IRS_VMT_CFGR_VM_ID_BITS, vm_id_bits);
> > + if (two_level)
> > + vmt_cfgr |= FIELD_PREP(GICV5_IRS_VMT_CFGR_STRUCTURE,
> > + GICV5_IRS_VMT_CFGR_STRUCTURE_TWO_LEVEL);
> > +
> > + irs_writel_relaxed(vmt_cfgr, GICV5_IRS_VMT_CFGR);
> > +
> > + /* The base address is intentionally only masked and not shifted */
> > + vmt_baser = FIELD_PREP(GICV5_IRS_VMT_BASER_VALID, true) |
> > + (vmt_base & GICV5_IRS_VMT_BASER_ADDR);
> > + irs_writeq_relaxed(vmt_baser, GICV5_IRS_VMT_BASER);
> > +
> > + return vgic_v5_irs_wait_for_vm_op();
> > +}
> > +
> > +static int vgic_v5_irs_clear_vmt(void)
> > +{
> > + irs_writeq_relaxed(0ULL, GICV5_IRS_VMT_BASER);
> > +
> > + return vgic_v5_irs_wait_for_vm_op();
> > +}
> > +
> > +static int vgic_v5_irs_vmap_l2_vmt(int vm_id)
> > +{
> > + u64 vmap_l2_vmtr;
> > + int ret = 0;
> > +
> > + guard(raw_spinlock)(&vm_config_lock);
> > +
> > + /* Make sure that we are idle to begin with */
> > + ret = vgic_v5_irs_wait_for_vm_op();
> > + if (ret)
> > + return ret;
> > +
> > + /* Mark the VM as valid */
> > + vmap_l2_vmtr = FIELD_PREP(GICV5_IRS_VMAP_L2_VMTR_VM_ID, vm_id) |
> > + FIELD_PREP(GICV5_IRS_VMAP_L2_VMTR_M, true);
> > + irs_writeq_relaxed(vmap_l2_vmtr, GICV5_IRS_VMAP_L2_VMTR);
> > +
> > + return vgic_v5_irs_wait_for_vm_op();
> > +}
> > +
> > +static int __vgic_v5_irs_vmap_vm(int vm_id, bool unmap)
> > +{
> > + u64 vmap_vmr;
> > + int ret;
> > +
> > + guard(raw_spinlock)(&vm_config_lock);
> > +
> > + /* Make sure that we are idle to begin with */
> > + ret = vgic_v5_irs_wait_for_vm_op();
> > + if (ret)
> > + return ret;
> > +
> > + /* Mark the VM as valid */
> > + vmap_vmr = FIELD_PREP(GICV5_IRS_VMAP_VMR_VM_ID, vm_id) |
> > + FIELD_PREP(GICV5_IRS_VMAP_VMR_U, unmap) |
> > + FIELD_PREP(GICV5_IRS_VMAP_VMR_M, true);
> > + irs_writeq_relaxed(vmap_vmr, GICV5_IRS_VMAP_VMR);
> > +
> > + return vgic_v5_irs_wait_for_vm_op();
> > +}
>
> There is a pattern here:
>
> static int do_something(...)
> {
> int ret;
> guard(raw_spinlock)(&vm_config_lock);
>
> /* Make sure that we are idle to begin with */
> ret = vgic_v5_irs_wait_for_vm_op();
> if (ret)
> return ret;
>
> [do the something we came here for]
>
> return vgic_v5_irs_wait_for_vm_op();
> }
>
> Surely this can be turned into a helper that avoids having that
> boilerplate code in each and every function.
I've gone and done this, and cleaned up most of these. I've skipped the
setting of IRS_VMT_BASER, as that's a bit different in operation.
>
> > +
> > +static int vgic_v5_irs_set_vm_valid(int vm_id)
> > +{
> > + return __vgic_v5_irs_vmap_vm(vm_id, false);
> > +}
> > +
> > +static int vgic_v5_irs_set_vm_invalid(int vm_id)
> > +{
> > + return __vgic_v5_irs_vmap_vm(vm_id, true);
> > +}
> > +
> > +static int __vgic_v5_irs_update_vist_validity(int vm_id, bool spi_ist, bool unmap)
> > +{
> > + u8 type = spi_ist ? 0b011 : 0b010;
> > + u64 vmap_vistr;
> > + int ret;
> > +
> > + guard(raw_spinlock)(&vm_config_lock);
> > +
> > + /* Make sure that we are idle to begin with */
> > + ret = vgic_v5_irs_wait_for_vm_op();
> > + if (ret)
> > + return ret;
> > +
> > + /* Mark the IST as valid */
> > + vmap_vistr = FIELD_PREP(GICV5_IRS_VMAP_VISTR_TYPE, type) |
> > + FIELD_PREP(GICV5_IRS_VMAP_VISTR_VM_ID, vm_id) |
> > + FIELD_PREP(GICV5_IRS_VMAP_VISTR_U, unmap) |
> > + FIELD_PREP(GICV5_IRS_VMAP_VISTR_M, true);
> > + irs_writeq_relaxed(vmap_vistr, GICV5_IRS_VMAP_VISTR);
> > +
> > + return vgic_v5_irs_wait_for_vm_op();
> > +}
> > +
> > +static int vgic_v5_irs_set_vist_valid(int vm_id, bool spi_ist)
> > +{
> > + return __vgic_v5_irs_update_vist_validity(vm_id, spi_ist, false);
> > +}
> > +
> > +/* Note: We currently do not use this as we rely on the VM becoming invalid. */
> > +static int vgic_v5_irs_set_vist_invalid(int vm_id, bool spi_ist)
> > +{
> > + return __vgic_v5_irs_update_vist_validity(vm_id, spi_ist, true);
> > +}
> > +
> > +static int vgic_v5_db_set_vcpu_affinity(struct irq_data *data, void *vcpu_info)
> > +{
> > + struct vgic_v5_vm *vm = data->domain->host_data;
> > + struct gicv5_cmd_info *cmd_info = vcpu_info;
> > +
> > + switch (cmd_info->cmd_type) {
> > + case VMT_L2_MAP:
> > + return vgic_v5_irs_vmap_l2_vmt(vm->vm_id);
> > + case VMTE_MAKE_VALID:
> > + return vgic_v5_irs_set_vm_valid(vm->vm_id);
> > + case VMTE_MAKE_INVALID:
> > + return vgic_v5_irs_set_vm_invalid(vm->vm_id);
> > + case SPI_VIST_MAKE_VALID:
> > + return vgic_v5_irs_set_vist_valid(vm->vm_id, true);
> > + case LPI_VIST_MAKE_VALID:
> > + return vgic_v5_irs_set_vist_valid(vm->vm_id, false);
> > + case LPI_VIST_MAKE_INVALID:
> > + return vgic_v5_irs_set_vist_invalid(vm->vm_id, false);
> > + default:
> > + return -EINVAL;
> > + }
> > +}
>
> This function should be introduced ages ago, as soon as you start
> issuing vcpu_set_affinity() calls.
Have moved the introduction of this to an earlier commit in the series.
>
> > +
> > /*
> > * This set of irq_chip functions is specific for doorbells.
> > */
> > @@ -174,6 +403,7 @@ static struct irq_chip vgic_v5_db_irq_chip = {
> > .irq_set_affinity = irq_chip_set_affinity_parent,
> > .irq_get_irqchip_state = irq_chip_get_parent_state,
> > .irq_set_irqchip_state = irq_chip_set_parent_state,
> > + .irq_set_vcpu_affinity = vgic_v5_db_set_vcpu_affinity,
> > .flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE |
> > IRQCHIP_MASK_ON_SUSPEND,
> > };
> > diff --git a/include/linux/irqchip/arm-gic-v5.h b/include/linux/irqchip/arm-gic-v5.h
> > index ccec0a045927c..ff5ad653252d2 100644
> > --- a/include/linux/irqchip/arm-gic-v5.h
> > +++ b/include/linux/irqchip/arm-gic-v5.h
> > @@ -87,6 +87,12 @@
> > #define GICV5_IRS_IST_CFGR 0x0190
> > #define GICV5_IRS_IST_STATUSR 0x0194
> > #define GICV5_IRS_MAP_L2_ISTR 0x01c0
> > +#define GICV5_IRS_VMT_BASER 0x0200
> > +#define GICV5_IRS_VMT_CFGR 0x0210
> > +#define GICV5_IRS_VMT_STATUSR 0x0214
> > +#define GICV5_IRS_VMAP_L2_VMTR 0x02c0
> > +#define GICV5_IRS_VMAP_VMR 0x02c8
> > +#define GICV5_IRS_VMAP_VISTR 0x02d0
> >
> > #define GICV5_IRS_IDR0_VIRT BIT(6)
> >
> > @@ -181,6 +187,30 @@
> >
> > #define GICV5_IRS_MAP_L2_ISTR_ID GENMASK(23, 0)
> >
> > +#define GICV5_IRS_VMT_BASER_ADDR GENMASK_ULL(51, 3)
> > +#define GICV5_IRS_VMT_BASER_ADDR_SHIFT 3ULL
> > +#define GICV5_IRS_VMT_BASER_VALID BIT_ULL(0)
> > +
> > +#define GICV5_IRS_VMT_CFGR_STRUCTURE_TWO_LEVEL 0b1
> > +#define GICV5_IRS_VMT_CFGR_STRUCTURE_LINEAR 0b0
> > +
> > +#define GICV5_IRS_VMT_CFGR_STRUCTURE BIT(16)
> > +#define GICV5_IRS_VMT_CFGR_VM_ID_BITS GENMASK(4, 0)
> > +
> > +#define GICV5_IRS_VMT_STATUSR_IDLE BIT(0)
> > +
> > +#define GICV5_IRS_VMAP_L2_VMTR_M BIT_ULL(63)
> > +#define GICV5_IRS_VMAP_L2_VMTR_VM_ID GENMASK_ULL(15, 0)
> > +
> > +#define GICV5_IRS_VMAP_VMR_M BIT_ULL(63)
> > +#define GICV5_IRS_VMAP_VMR_U BIT_ULL(62)
> > +#define GICV5_IRS_VMAP_VMR_VM_ID GENMASK_ULL(15, 0)
> > +
> > +#define GICV5_IRS_VMAP_VISTR_M BIT_ULL(63)
> > +#define GICV5_IRS_VMAP_VISTR_U BIT_ULL(62)
> > +#define GICV5_IRS_VMAP_VISTR_VM_ID GENMASK_ULL(47, 32)
> > +#define GICV5_IRS_VMAP_VISTR_TYPE GENMASK_ULL(31, 29)
> > +
> > #define GICV5_ISTL1E_VALID BIT_ULL(0)
> > #define GICV5_IRS_ISTL1E_SIZE 8UL
> >
>
> Thanks,
>
> M.
>
Thanks,
Sascha