public inbox for linux-arm-kernel@lists.infradead.org
From: Sascha Bischoff <Sascha.Bischoff@arm.com>
To: "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"kvmarm@lists.linux.dev" <kvmarm@lists.linux.dev>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Cc: nd <nd@arm.com>, "maz@kernel.org" <maz@kernel.org>,
	"oliver.upton@linux.dev" <oliver.upton@linux.dev>,
	Joey Gouly <Joey.Gouly@arm.com>,
	Suzuki Poulose <Suzuki.Poulose@arm.com>,
	"yuzenghui@huawei.com" <yuzenghui@huawei.com>,
	"peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"lpieralisi@kernel.org" <lpieralisi@kernel.org>,
	Timothy Hayes <Timothy.Hayes@arm.com>
Subject: [PATCH 22/43] KVM: arm64: gic-v5: Add GICv5 IRS IODEV and MMIO emulation
Date: Mon, 27 Apr 2026 16:13:33 +0000	[thread overview]
Message-ID: <20260427160547.3129448-23-sascha.bischoff@arm.com> (raw)
In-Reply-To: <20260427160547.3129448-1-sascha.bischoff@arm.com>

In order to properly support GICv5-based VMs in KVM, we need to
emulate the CONFIG_FRAME for a virtual IRS. This emulation needs to
handle all guest accesses to the MMIO region, and mimic the behaviour
of a real IRS.

Introduce an IODEV for the GICv5 IRS, and an associated init function
that sets up the SPIs and initial state for the IRS. The MMIO emulation
provides support for the guest to query the IRS_IDx registers,
manipulate SPIs, configure ISTs, and so forth.
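
The IDR reads are assembled by packing cached fields into register values. As a rough standalone model of that bitfield packing (the masks and field names below are invented for illustration and are not the real GICv5 IRS_IDR1 layout):

```c
#include <stdint.h>

/* Simplified stand-ins for the kernel's FIELD_PREP/FIELD_GET helpers:
 * shift a value into (or out of) the position of a contiguous mask. */
#define FIELD_PREP(mask, val) (((uint64_t)(val) << __builtin_ctzll(mask)) & (mask))
#define FIELD_GET(mask, reg)  (((uint64_t)(reg) & (mask)) >> __builtin_ctzll(mask))

/* Hypothetical field layout, for illustration only */
#define IDR1_PE_CNT       0x0000ffffull	/* bits [15:0]  */
#define IDR1_IAFFID_BITS  0x00ff0000ull	/* bits [23:16] */

static uint64_t pack_idr1(unsigned int pe_cnt, unsigned int iaffid_bits)
{
	uint64_t v = FIELD_PREP(IDR1_PE_CNT, pe_cnt);

	v |= FIELD_PREP(IDR1_IAFFID_BITS, iaffid_bits);
	return v;
}
```

A read handler then simply returns the packed value for the accessed offset; writes to the ID registers are ignored (WI).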

Some of the guest's interactions with the MMIO region require KVM to
interact with the host IRS to complete the operation. A guest write to
the emulated IRS_PE_CR0 is one example. First, the guest writes the
IRS_PE_SELR register to select a PE by IAFFID (for a VM this is the
VPE ID, although the guest doesn't know that); the selected IAFFID is
simply stashed. The guest should then read IRS_PE_STATUSR to check
that the written IAFFID is valid; the IRS emulation code performs this
check and sets the V bit accordingly. Finally, when the guest writes
the emulated IRS_PE_CR0, we again check that the selected VPE is
valid, and then relay the write to the host IRS via a VPE doorbell.
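
As a rough model of that handshake (not KVM code: the struct, bit positions, and validity predicate are invented for illustration; the real code resolves the IAFFID to a vCPU and relays via irq_set_vcpu_affinity()):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical mock of the SELR -> STATUSR -> CR0 sequence */
struct irs_pe_state {
	uint16_t selected_iaffid;
	unsigned int nr_vcpus;	/* stand-in for the VM's vCPU count */
	uint32_t relayed_cr0;	/* last value "relayed" to the host IRS */
	bool relayed;
};

static bool pe_sel_valid(const struct irs_pe_state *s)
{
	/* IAFFID == VPE ID == vCPU id in this simplified model */
	return s->selected_iaffid < s->nr_vcpus;
}

static void pe_selr_write(struct irs_pe_state *s, uint16_t iaffid)
{
	s->selected_iaffid = iaffid;	/* just stashed, validated later */
}

#define PE_STATUSR_IDLE (1u << 0)	/* placeholder bit positions */
#define PE_STATUSR_V    (1u << 1)

static uint32_t pe_statusr_read(const struct irs_pe_state *s)
{
	uint32_t v = PE_STATUSR_IDLE;	/* emulation completes synchronously */

	if (pe_sel_valid(s))
		v |= PE_STATUSR_V;
	return v;
}

static void pe_cr0_write(struct irs_pe_state *s, uint32_t val)
{
	if (!pe_sel_valid(s))
		return;		/* drop writes for an invalid selection */
	s->relayed_cr0 = val;	/* real code relays via the VPE doorbell */
	s->relayed = true;
}
```

Note the write is silently dropped when the selection is invalid, which is why a well-behaved guest checks STATUSR.V between the SELR and CR0 accesses.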

Similar interactions take place for SPIs too.
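
For SPIs, the range check performed before acting on an IRS_SPI_* access can be sketched as follows (field names simplified relative to the IDR5/IDR6/IDR7 fields the emulation actually consults):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the IRS_SPI_SELR validity check */
struct spi_cfg {
	uint32_t spi_range;	/* total SPIs in the system (IDR5-like) */
	uint32_t spi_irs_range;	/* SPIs owned by this IRS (IDR6-like) */
	uint32_t spi_base;	/* first SPI ID on this IRS (IDR7-like) */
};

static bool spi_selr_valid(const struct spi_cfg *c, uint32_t id)
{
	if (!c->spi_range || !c->spi_irs_range)
		return false;	/* no SPIs at all, or none on this IRS */

	/* ID must fall within [spi_base, spi_base + spi_irs_range) */
	return id >= c->spi_base && id < c->spi_base + c->spi_irs_range;
}
```

When the check fails, the emulated IRS_SPI_CFGR reads as zero and writes are ignored, mirroring how hardware reports an invalid selection via IRS_SPI_STATUSR.V.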

The LPI IST also requires KVM to perform actions on behalf of the
guest. When the emulated IRS_IST_BASER is written, KVM re-allocates
the IST on the host, matching the guest's configuration (from the
emulated IRS_IST_CFGR) where appropriate. The host IST is then
provided to the physical IRS via the VMTE. As far as the guest is
concerned, the IST it allocated is being used by the hardware, but in
reality the host IST is used instead.
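
The ISTE-size rule the emulation applies when validating and mirroring the guest's configuration can be restated compactly (this sketch returns entry sizes in bytes rather than the IRS_IST_CFGR.ISTSZ encodings): 4-byte entries when no interrupt state table metadata is stored, otherwise 8- or 16-byte entries depending on whether the ID space reaches the metadata threshold.

```c
#include <stdbool.h>

/* Illustrative ISTE-size selection; a simplified restatement of the
 * rule in the patch, not the kernel's helper. */
static unsigned int iste_size_bytes(bool istmd, unsigned int id_bits,
				    unsigned int istmd_sz)
{
	if (!istmd)
		return 4;	/* no metadata: minimal 4-byte entries */

	/* metadata present: wider entries once id_bits hits the threshold */
	return id_bits >= istmd_sz ? 16 : 8;
}
```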

This change provides the IRS IODEV as a whole, but it is not yet
plumbed into the rest of KVM.

Signed-off-by: Sascha Bischoff <sascha.bischoff@arm.com>
---
 arch/arm64/kvm/Makefile              |   2 +-
 arch/arm64/kvm/vgic/vgic-irs-v5.c    | 823 +++++++++++++++++++++++++++
 arch/arm64/kvm/vgic/vgic-v5-tables.c |  16 +
 arch/arm64/kvm/vgic/vgic-v5-tables.h |   1 +
 arch/arm64/kvm/vgic/vgic.h           |   2 +
 5 files changed, 843 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kvm/vgic/vgic-irs-v5.c

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 431de9b145ca1..92dda57c08766 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -24,7 +24,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
 	 vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
 	 vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
 	 vgic/vgic-its.o vgic/vgic-debug.o vgic/vgic-v3-nested.o \
-	 vgic/vgic-v5.o vgic/vgic-v5-tables.o
+	 vgic/vgic-v5.o vgic/vgic-v5-tables.o vgic/vgic-irs-v5.o
 
 kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o pmu.o
 kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
diff --git a/arch/arm64/kvm/vgic/vgic-irs-v5.c b/arch/arm64/kvm/vgic/vgic-irs-v5.c
new file mode 100644
index 0000000000000..729a3a3aca3a3
--- /dev/null
+++ b/arch/arm64/kvm/vgic/vgic-irs-v5.c
@@ -0,0 +1,823 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 ARM Limited, All Rights Reserved.
+ */
+#include <linux/bitops.h>
+#include <linux/bsearch.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+#include <kvm/iodev.h>
+#include <kvm/arm_arch_timer.h>
+#include <kvm/arm_vgic.h>
+
+#include "vgic.h"
+#include "vgic-mmio.h"
+#include "vgic-v5-tables.h"
+
+static struct vgic_dist *vgic_v5_get_vgic(struct kvm_vcpu *vcpu)
+{
+	return &vcpu->kvm->arch.vgic;
+}
+
+static struct vgic_v5_irs *vgic_v5_get_irs(struct kvm_vcpu *vcpu)
+{
+	return vcpu->kvm->arch.vgic.vgic_v5_irs_data;
+}
+
+static unsigned long vgic_v5_mmio_read_irs_misc(struct kvm_vcpu *vcpu,
+						gpa_t addr, unsigned int len)
+{
+	struct vgic_v5_irs *irs = vgic_v5_get_irs(vcpu);
+	const size_t offset = addr & (SZ_64K - 1);
+	struct gicv5_cmd_info cmd_info;
+	struct kvm_vcpu *target_vcpu;
+	u64 value = 0;
+	int rc;
+
+	switch (offset) {
+	case GICV5_IRS_IDR0:
+		value = FIELD_PREP(GICV5_IRS_IDR0_DOM, irs->idr0.domain);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_PA_RANGE, irs->idr0.pa_range);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_VIRT, irs->idr0.virt);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_ONEOFN, irs->idr0.one_of_n);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_VIRT1OFN, irs->idr0.virt_one_of_n);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_SETLPI, irs->idr0.setlpi);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_MEC, irs->idr0.mec);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_MPAM, irs->idr0.mpam);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_SWE, irs->idr0.swe);
+		value |= FIELD_PREP(GICV5_IRS_IDR0_IRSID, irs->idr0.irs_id);
+		break;
+	case GICV5_IRS_IDR1:
+		value = FIELD_PREP(GICV5_IRS_IDR1_PE_CNT,
+				   atomic_read(&vcpu->kvm->online_vcpus));
+		value |= FIELD_PREP(GICV5_IRS_IDR1_IAFFID_BITS, vgic_v5_vmte_vpe_id_bits(vcpu));
+		value |= FIELD_PREP(GICV5_IRS_IDR1_PRIORITY_BITS, irs->idr1.priority_bits);
+		break;
+	case GICV5_IRS_IDR2:
+		value = FIELD_PREP(GICV5_IRS_IDR2_ISTMD_SZ, irs->idr2.istmd_sz);
+		value |= FIELD_PREP(GICV5_IRS_IDR2_ISTMD, irs->idr2.istmd);
+		value |= FIELD_PREP(GICV5_IRS_IDR2_IST_L2SZ, irs->idr2.ist_l2sz);
+		value |= FIELD_PREP(GICV5_IRS_IDR2_IST_LEVELS, irs->idr2.ist_levels);
+		value |= FIELD_PREP(GICV5_IRS_IDR2_MIN_LPI_ID_BITS, irs->idr2.min_lpi_id_bits);
+		value |= GICV5_IRS_IDR2_LPI; /* We always support LPIs */
+		value |= FIELD_PREP(GICV5_IRS_IDR2_ID_BITS, irs->idr2.id_bits);
+		break;
+	case GICV5_IRS_IDR5:
+		value = FIELD_PREP(GICV5_IRS_IDR5_SPI_RANGE, irs->idr5.spi_range);
+		break;
+	case GICV5_IRS_IDR6:
+		value = FIELD_PREP(GICV5_IRS_IDR6_SPI_IRS_RANGE, irs->idr6.spi_irs_range);
+		break;
+	case GICV5_IRS_IDR7:
+		value = FIELD_PREP(GICV5_IRS_IDR7_SPI_BASE, irs->idr7.spi_base);
+		break;
+	case GICV5_IRS_IIDR:
+		/* Revision, Variant, ProductID are implementation defined */
+		value = FIELD_PREP(GICV5_IRS_IIDR_PRODUCT_ID, PRODUCT_ID_KVM);
+		value |= FIELD_PREP(GICV5_IRS_IIDR_VARIANT, 0);
+		value |= FIELD_PREP(GICV5_IRS_IIDR_REVISION, 0);
+		value |= FIELD_PREP(GICV5_IRS_IIDR_IMPLEMENTER, IMPLEMENTER_ARM);
+		break;
+	case GICV5_IRS_AIDR:
+		value = FIELD_PREP(GICV5_IRS_AIDR_COMPONENT,
+				   GICV5_AIDR_COMPONENT_IRS);
+		value |= FIELD_PREP(GICV5_IRS_AIDR_ARCHMAJORREV,
+				    GICV5_AIDR_ARCH_MAJ_REV_V5);
+		value |= FIELD_PREP(GICV5_IRS_AIDR_ARCHMINORREV,
+				    GICV5_AIDR_ARCH_MIN_REV_V0);
+		break;
+	case GICV5_IRS_CR0:
+		/*
+		 * The IRS is ALWAYS idle as we handle things instantaneously
+		 * from a guest's viewpoint.
+		 */
+		value = GICV5_IRS_CR0_IDLE;
+		value |= FIELD_PREP(GICV5_IRS_CR0_IRSEN,
+				    irs->enabled);
+		break;
+	case GICV5_IRS_CR1:
+		value = FIELD_PREP(GICV5_IRS_CR1_VPED_WA, irs->cr1.vped_wa);
+		value |= FIELD_PREP(GICV5_IRS_CR1_VPED_RA, irs->cr1.vped_ra);
+		value |= FIELD_PREP(GICV5_IRS_CR1_VMD_WA, irs->cr1.vmd_wa);
+		value |= FIELD_PREP(GICV5_IRS_CR1_VMD_RA, irs->cr1.vmd_ra);
+		value |= FIELD_PREP(GICV5_IRS_CR1_VPET_RA, irs->cr1.vpet_ra);
+		value |= FIELD_PREP(GICV5_IRS_CR1_VMT_RA, irs->cr1.vmt_ra);
+		value |= FIELD_PREP(GICV5_IRS_CR1_IST_WA, irs->cr1.ist_wa);
+		value |= FIELD_PREP(GICV5_IRS_CR1_IST_RA, irs->cr1.ist_ra);
+		value |= FIELD_PREP(GICV5_IRS_CR1_IC, irs->cr1.ic);
+		value |= FIELD_PREP(GICV5_IRS_CR1_OC, irs->cr1.oc);
+		value |= FIELD_PREP(GICV5_IRS_CR1_SH, irs->cr1.sh);
+		break;
+	case GICV5_IRS_SYNC_STATUSR:
+		value = GICV5_IRS_SYNC_STATUSR_IDLE;
+		break;
+	case GICV5_IRS_PE_SELR:
+		value = FIELD_PREP(GICV5_IRS_PE_SELR_IAFFID, irs->pe_selr.iaffid);
+		break;
+	case GICV5_IRS_PE_STATUSR:
+		/* We assume that the PE is Online if present. Always IDLE too */
+		value = GICV5_IRS_PE_STATUSR_IDLE;
+
+		/* Set ONLINE and V if IAFFID selects a present PE */
+		if (kvm_get_vcpu_by_id(vcpu->kvm, irs->pe_selr.iaffid)) {
+			value |= GICV5_IRS_PE_STATUSR_ONLINE;
+			value |= GICV5_IRS_PE_STATUSR_V;
+		}
+		break;
+	case GICV5_IRS_PE_CR0:
+		/*
+		 * Make sure that we are doing something reasonable first.
+		 * Remember, the IAFFID is the same as the VPE_ID
+		 */
+		target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, irs->pe_selr.iaffid);
+		if (!target_vcpu) {
+			kvm_err("Guest programmed invalid IAFFID (0x%x) into the IRS_PE_SELR\n",
+				irs->pe_selr.iaffid);
+			break;
+		}
+
+		mutex_lock(&vcpu->kvm->arch.config_lock);
+
+		/*
+		 * Read the corresponding IRS_VPE_CR0. We do so via the doorbell
+		 * for the specific vcpu we have in the PE_SELR.
+		 */
+		cmd_info.cmd_type = VPE_CR0_READ;
+		rc = irq_set_vcpu_affinity(vgic_v5_vpe_db(target_vcpu), &cmd_info);
+		if (rc)
+			kvm_err("Could not read VPE_CR0 in IRS: %d\n", rc);
+		else
+			value = cmd_info.data;
+
+		mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+		break;
+	default:
+		return 0;
+	}
+
+	return value;
+}
+
+static void vgic_v5_mmio_write_irs_misc(struct kvm_vcpu *vcpu, gpa_t addr,
+					unsigned int len, unsigned long val)
+{
+	struct vgic_v5_irs *irs = vgic_v5_get_irs(vcpu);
+	struct vgic_dist *vgic = vgic_v5_get_vgic(vcpu);
+	const size_t offset = addr & (SZ_64K - 1);
+	struct gicv5_cmd_info cmd_info;
+	struct kvm_vcpu *target_vcpu;
+	int rc;
+
+	switch (offset) {
+	case GICV5_IRS_CR0:
+		mutex_lock(&vcpu->kvm->arch.config_lock);
+		/*
+		 * We need to make sure that the IRS coming online (or
+		 * going offline) is visible to all vCPUs, even if
+		 * they are currently resident. Halt all of the vCPUs
+		 * now, and resume once we've done the update.
+		 */
+		kvm_arm_halt_guest(vcpu->kvm);
+
+		if (FIELD_GET(GICV5_IRS_CR0_IRSEN, val)) {
+			irs->enabled = true;
+			/*
+			 * This second enable is the one used by the existing,
+			 * non-GICv5 code.
+			 */
+			vgic->enabled = true;
+		} else {
+			irs->enabled = false;
+			/* Ditto */
+			vgic->enabled = false;
+		}
+
+		kvm_arm_resume_guest(vcpu->kvm);
+		mutex_unlock(&vcpu->kvm->arch.config_lock);
+
+		return;
+	case GICV5_IRS_CR1:
+		irs->cr1.sh = FIELD_GET(GICV5_IRS_CR1_SH, val);
+		irs->cr1.oc = FIELD_GET(GICV5_IRS_CR1_OC, val);
+		irs->cr1.ic = FIELD_GET(GICV5_IRS_CR1_IC, val);
+		irs->cr1.ist_ra = FIELD_GET(GICV5_IRS_CR1_IST_RA, val);
+		irs->cr1.ist_wa = FIELD_GET(GICV5_IRS_CR1_IST_WA, val);
+		irs->cr1.vmt_ra = FIELD_GET(GICV5_IRS_CR1_VMT_RA, val);
+		irs->cr1.vpet_ra = FIELD_GET(GICV5_IRS_CR1_VPET_RA, val);
+		irs->cr1.vmd_ra = FIELD_GET(GICV5_IRS_CR1_VMD_RA, val);
+		irs->cr1.vmd_wa = FIELD_GET(GICV5_IRS_CR1_VMD_WA, val);
+		irs->cr1.vped_ra = FIELD_GET(GICV5_IRS_CR1_VPED_RA, val);
+		irs->cr1.vped_wa = FIELD_GET(GICV5_IRS_CR1_VPED_WA, val);
+		return;
+	case GICV5_IRS_PE_SELR:
+		irs->pe_selr.iaffid = FIELD_GET(GICV5_IRS_PE_SELR_IAFFID, val);
+		return;
+	case GICV5_IRS_PE_CR0:
+		/*
+		 * Make sure that we are doing something reasonable first.
+		 * Remember, the IAFFID is the same as the VPE_ID.
+		 */
+		target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, irs->pe_selr.iaffid);
+		if (!target_vcpu)
+			return;
+
+		mutex_lock(&vcpu->kvm->arch.config_lock);
+
+		/*
+		 * Write the corresponding IRS_VPE_CR0. We do so via the
+		 * doorbell for the specific vcpu we have in the PE_SELR.
+		 */
+		cmd_info.cmd_type = VPE_CR0_WRITE;
+		cmd_info.data = val;
+		rc = irq_set_vcpu_affinity(vgic_v5_vpe_db(target_vcpu), &cmd_info);
+		if (rc)
+			kvm_err("Could not update VPE_CR0 in IRS: %d\n", rc);
+
+		mutex_unlock(&vcpu->kvm->arch.config_lock);
+		return;
+	default:
+		return;
+	}
+}
+
+static bool vgic_v5_is_spi_selr_valid(struct vgic_v5_irs *irs)
+{
+	/* Invalid - we don't have any SPIs at all */
+	if (irs->idr5.spi_range == 0)
+		return false;
+
+	/* Invalid - we don't have any on this IRS */
+	if (irs->idr6.spi_irs_range == 0)
+		return false;
+
+	/* Invalid - ID is less than min */
+	if (irs->spi_selr.id < irs->idr7.spi_base)
+		return false;
+
+	/* Invalid - ID is greater than max */
+	if (irs->spi_selr.id >=
+	    (irs->idr7.spi_base + irs->idr6.spi_irs_range))
+		return false;
+
+	return true;
+}
+
+static unsigned long vgic_v5_mmio_read_irs_spi(struct kvm_vcpu *vcpu,
+					       gpa_t addr, unsigned int len)
+{
+	struct vgic_v5_irs *irs = vgic_v5_get_irs(vcpu);
+	struct vgic_dist *vgic = vgic_v5_get_vgic(vcpu);
+	const size_t offset = addr & (SZ_64K - 1);
+	u64 value = 0;
+
+	switch (offset) {
+	case GICV5_IRS_SPI_SELR:
+		/* Return whatever was last written */
+		value = FIELD_PREP(GICV5_IRS_SPI_SELR_ID, irs->spi_selr.id);
+		break;
+	case GICV5_IRS_SPI_STATUSR:
+		/* We assume that we can always claim to be idle */
+		value = GICV5_IRS_SPI_STATUSR_IDLE;
+		value |= FIELD_PREP(GICV5_IRS_SPI_STATUSR_V, vgic_v5_is_spi_selr_valid(irs));
+		break;
+	case GICV5_IRS_SPI_DOMAINR:
+		value = FIELD_PREP(GICV5_IRS_SPI_DOMAINR_DOMAIN,
+				   GICV5_IRS_SPI_DOMAINR_DOMAIN_NON_SECURE);
+		break;
+	case GICV5_IRS_SPI_CFGR:
+		if (!vgic_v5_is_spi_selr_valid(irs)) {
+			/* Fault with IRS_SPI_SELR; return 0 */
+			value = 0;
+			break;
+		}
+
+		/* Sanity check for KVM's sake */
+		if (irs->spi_selr.id >= vgic->nr_spis) {
+			kvm_err("Guest trying to access SPI not backed by KVM\n");
+			value = 0;
+			break;
+		}
+
+		if (vgic->spis[irs->spi_selr.id].config == VGIC_CONFIG_EDGE)
+			value = FIELD_PREP(GICV5_IRS_SPI_CFGR_TM, GICV5_IRS_SPI_CFGR_TM_EDGE);
+		else
+			value = FIELD_PREP(GICV5_IRS_SPI_CFGR_TM, GICV5_IRS_SPI_CFGR_TM_LEVEL);
+
+		break;
+	default:
+		return 0;
+	}
+
+	return value;
+}
+
+static void vgic_v5_mmio_write_irs_spi(struct kvm_vcpu *vcpu, gpa_t addr,
+				       unsigned int len, unsigned long val)
+{
+	struct vgic_v5_irs *irs = vgic_v5_get_irs(vcpu);
+	const size_t offset = addr & (SZ_64K - 1);
+	struct vgic_irq *irq;
+
+	switch (offset) {
+	case GICV5_IRS_SPI_SELR:
+		irs->spi_selr.id = FIELD_GET(GICV5_IRS_SPI_SELR_ID, val);
+		return;
+	case GICV5_IRS_SPI_CFGR:
+		if (!vgic_v5_is_spi_selr_valid(irs))
+			return;
+
+		/*
+		 * Find KVM's representation of the interrupt - we need to make
+		 * sure that KVM's view agrees with the guest's, else interrupt
+		 * injection won't work properly for level-triggered interrupts
+		 * (we fail to handle the clearing of the pending state if KVM
+		 * thinks that the interrupt is edge-triggered, which is the
+		 * default.)
+		 */
+		irq = vgic_get_irq(vcpu->kvm, vgic_v5_make_spi(irs->spi_selr.id));
+		if (!irq)
+			return;
+
+		scoped_guard(raw_spinlock_irqsave, &irq->irq_lock) {
+			if (FIELD_GET(GICV5_IRS_SPI_CFGR_TM, val))
+				irq->config = VGIC_CONFIG_LEVEL;
+			else
+				irq->config = VGIC_CONFIG_EDGE;
+		}
+
+		vgic_put_irq(vcpu->kvm, irq);
+
+		return;
+	default:
+		return;
+	}
+}
+
+static bool vgic_v5_ist_cfgr_valid(struct vgic_v5_irs *irs)
+{
+	unsigned int expected_istsz;
+
+	if (irs->ist_cfgr.lpi_id_bits < irs->idr2.min_lpi_id_bits ||
+	    irs->ist_cfgr.lpi_id_bits > irs->idr2.id_bits)
+		return false;
+
+	if (!irs->idr2.istmd)
+		expected_istsz = GICV5_IRS_IST_CFGR_ISTSZ_4;
+	else if (irs->ist_cfgr.lpi_id_bits >= irs->idr2.istmd_sz)
+		expected_istsz = GICV5_IRS_IST_CFGR_ISTSZ_16;
+	else
+		expected_istsz = GICV5_IRS_IST_CFGR_ISTSZ_8;
+
+	if (irs->ist_cfgr.istsz != expected_istsz)
+		return false;
+
+	if (irs->ist_cfgr.structure && !irs->idr2.ist_levels)
+		return false;
+
+	if (!irs->ist_cfgr.structure)
+		return true;
+
+	return irs->ist_cfgr.l2sz == irs->idr2.ist_l2sz;
+}
+
+static unsigned long vgic_v5_mmio_read_irs_ist(struct kvm_vcpu *vcpu,
+					       gpa_t addr, unsigned int len)
+{
+	struct vgic_v5_irs *irs = vgic_v5_get_irs(vcpu);
+	const size_t offset = addr & (SZ_64K - 1);
+	u64 value = 0;
+
+	switch (offset) {
+	case GICV5_IRS_IST_STATUSR:
+		return GICV5_IRS_IST_STATUSR_IDLE;
+	case GICV5_IRS_IST_CFGR:
+		value = FIELD_PREP(GICV5_IRS_IST_CFGR_STRUCTURE, irs->ist_cfgr.structure);
+		value |= FIELD_PREP(GICV5_IRS_IST_CFGR_ISTSZ, irs->ist_cfgr.istsz);
+		value |= FIELD_PREP(GICV5_IRS_IST_CFGR_L2SZ, irs->ist_cfgr.l2sz);
+		value |= FIELD_PREP(GICV5_IRS_IST_CFGR_LPI_ID_BITS, irs->ist_cfgr.lpi_id_bits);
+		break;
+	case GICV5_IRS_IST_BASER:
+		value = FIELD_PREP(GICV5_IRS_IST_BASER_ADDR_MASK,
+				   irs->ist_baser.addr >> GICV5_IRS_IST_BASER_ADDR_SHIFT);
+		value |= FIELD_PREP(GICV5_IRS_IST_BASER_VALID, irs->ist_baser.valid);
+		break;
+	default:
+		return 0;
+	}
+
+	return value;
+}
+
+static void vgic_v5_mmio_write_irs_ist(struct kvm_vcpu *vcpu, gpa_t addr,
+				       unsigned int len, unsigned long val)
+{
+	struct vgic_v5_irs *irs = vgic_v5_get_irs(vcpu);
+	const size_t offset = addr & (SZ_64K - 1);
+	struct gicv5_cmd_info cmd_info;
+	int rc;
+
+	switch (offset) {
+	case GICV5_IRS_IST_CFGR:
+		irs->ist_cfgr.lpi_id_bits = FIELD_GET(GICV5_IRS_IST_CFGR_LPI_ID_BITS, val);
+		irs->ist_cfgr.l2sz = FIELD_GET(GICV5_IRS_IST_CFGR_L2SZ, val);
+		irs->ist_cfgr.istsz = FIELD_GET(GICV5_IRS_IST_CFGR_ISTSZ, val);
+		irs->ist_cfgr.structure = FIELD_GET(GICV5_IRS_IST_CFGR_STRUCTURE, val);
+		return;
+	case GICV5_IRS_IST_BASER: {
+		bool valid = FIELD_GET(GICV5_IRS_IST_BASER_VALID, val);
+
+		guard(mutex)(&vcpu->kvm->arch.config_lock);
+
+		/* Valid -> Invalid */
+		if (irs->ist_baser.valid && !valid) {
+			/* Make the LPI IST invalid and then ... */
+			cmd_info.cmd_type = LPI_VIST_MAKE_INVALID;
+			rc = irq_set_vcpu_affinity(vgic_v5_vpe_db(vcpu), &cmd_info);
+			if (WARN_ON_ONCE(rc))
+				break;
+
+			/*
+			 * ... free the host IST if we successfully marked the
+			 * IST as invalid. Frankly, if we failed to mark the
+			 * guest's IST invalid, we're cooked because it means
+			 * that the IRS may still be using the memory that we
+			 * want to free. Hence, we leave it allocated and skip
+			 * the clearing of valid bit in the baser.
+			 */
+			rc = vgic_v5_lpi_ist_free(vcpu->kvm);
+			if (WARN_ON_ONCE(rc))
+				break;
+		} else if (!irs->ist_baser.valid && valid) { /* Invalid -> Valid */
+			if (!vgic_v5_ist_cfgr_valid(irs)) {
+				kvm_err("Guest programmed invalid IRS_IST_CFGR\n");
+				break;
+			}
+
+			rc = vgic_v5_lpi_ist_alloc(vcpu->kvm,
+						   irs->ist_cfgr.lpi_id_bits);
+			if (WARN_ON_ONCE(rc))
+				break;
+		}
+
+		/* Now that we've handled the edges, update the valid bit and addr */
+		irs->ist_baser.valid = FIELD_GET(GICV5_IRS_IST_BASER_VALID, val);
+		irs->ist_baser.addr = FIELD_GET(GICV5_IRS_IST_BASER_ADDR_MASK, val)
+			<< GICV5_IRS_IST_BASER_ADDR_SHIFT;
+
+		return;
+	}
+	default:
+		return;
+	}
+}
+
+static const struct vgic_register_region vgic_v5_irs_registers[] = {
+	/*
+	 * This is the IRS_CONFIG_FRAME.
+	 */
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR0, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR1, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR2, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR3, vgic_mmio_read_raz,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR4, vgic_mmio_read_raz,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR5, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR6, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IDR7, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IIDR, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_AIDR, vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_CR0, vgic_v5_mmio_read_irs_misc,
+				  vgic_v5_mmio_write_irs_misc, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_CR1, vgic_v5_mmio_read_irs_misc,
+				  vgic_v5_mmio_write_irs_misc, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SYNCR, vgic_mmio_read_raz,
+				  vgic_mmio_write_wi, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SYNC_STATUSR,
+				  vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SPI_VMR, vgic_mmio_read_raz,
+				  vgic_mmio_write_wi, 8,
+				  VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SPI_SELR, vgic_v5_mmio_read_irs_spi,
+				  vgic_v5_mmio_write_irs_spi, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_SPI_DOMAINR, vgic_v5_mmio_read_irs_spi,
+		vgic_v5_mmio_write_irs_spi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SPI_RESAMPLER, vgic_mmio_read_raz,
+				  vgic_mmio_write_wi, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SPI_CFGR, vgic_v5_mmio_read_irs_spi,
+				  vgic_v5_mmio_write_irs_spi, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_SPI_STATUSR,
+				  vgic_v5_mmio_read_irs_spi, vgic_mmio_write_wi,
+				  4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_PE_SELR, vgic_v5_mmio_read_irs_misc,
+				  vgic_v5_mmio_write_irs_misc, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_PE_STATUSR,
+				  vgic_v5_mmio_read_irs_misc,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_PE_CR0, vgic_v5_mmio_read_irs_misc,
+				  vgic_v5_mmio_write_irs_misc, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_IST_BASER, vgic_v5_mmio_read_irs_ist,
+		vgic_v5_mmio_write_irs_ist, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IST_CFGR, vgic_v5_mmio_read_irs_ist,
+				  vgic_v5_mmio_write_irs_ist, 4,
+				  VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_IST_STATUSR,
+				  vgic_v5_mmio_read_irs_ist, vgic_mmio_write_wi,
+				  4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(GICV5_IRS_MAP_L2_ISTR, vgic_mmio_read_raz,
+				  vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+
+	/*
+	 * The following registers are only for running VMs. They are not yet
+	 * supported as we don't currently support nested, so expose them as
+	 * read-as-zero/write-ignored.
+	 */
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMT_BASER, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMT_CFGR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMT_STATUSR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VPE_SELR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VPE_DBR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VPE_HPPIR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VPE_CR0, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VPE_STATUSR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VM_DBR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VM_SELR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VM_STATUSR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMAP_L2_VMTR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMAP_VMR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMAP_VISTR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMAP_L2_VISTR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_VMAP_VPER, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_SAVE_VMR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_SAVE_VM_STATUSR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+
+	/* MEC, MPAM, SWERR - all unimplemented */
+
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_MEC_IDR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_MEC_MECID_R, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_MPAM_IDR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_MPAM_PARTID_R, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_SWERR_STATUSR, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_SWERR_SYNDROMER0, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+	REGISTER_DESC_WITH_LENGTH(
+		GICV5_IRS_SWERR_SYNDROMER1, vgic_mmio_read_raz,
+		vgic_mmio_write_wi, 8, VGIC_ACCESS_64bit),
+};
+
+unsigned int vgic_v5_init_irs_iodev(struct vgic_io_device *dev)
+{
+	dev->regions = vgic_v5_irs_registers;
+	dev->nr_regions = ARRAY_SIZE(vgic_v5_irs_registers);
+
+	kvm_iodevice_init(&dev->dev, &kvm_io_gic_ops);
+
+	/* We represent both of the IRS frames back to back, so this is 128K */
+	return KVM_VGIC_V5_IRS_SIZE;
+}
+
+int vgic_v5_register_irs_iodev(struct kvm *kvm, gpa_t irs_base_address)
+{
+	struct vgic_io_device *io_device = &kvm->arch.vgic.vgic_v5_irs_data->iodev;
+	unsigned int len;
+
+	/*
+	 * Design choice: Force MMIO region to be 64k aligned. Simplifies
+	 * pulling out registers.
+	 */
+	if (!IS_ALIGNED(irs_base_address, SZ_64K)) {
+		kvm_err("IRS Base address is not aligned to 64k\n");
+		return -EINVAL;
+	}
+
+	len = vgic_v5_init_irs_iodev(io_device);
+
+	io_device->base_addr = irs_base_address;
+	io_device->iodev_type = IODEV_GICV5_IRS;
+	io_device->redist_vcpu = NULL;
+
+	return kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, irs_base_address, len,
+				       &io_device->dev);
+}
+
+/**
+ * kvm_vgic_v5_irs_init: initialize the IRS data structures
+ * @kvm: kvm struct pointer
+ * @nr_spis: number of spis, frozen by caller
+ */
+int kvm_vgic_v5_irs_init(struct kvm *kvm, unsigned int nr_spis)
+{
+	struct vgic_dist *dist = &kvm->arch.vgic;
+	struct vgic_v5_irs *irs = dist->vgic_v5_irs_data;
+	struct kvm_vcpu *vcpu0 = kvm_get_vcpu(kvm, 0);
+	size_t istsz, nr_spi_bits, istmd_sz;
+	phys_addr_t spi_ist_phys_base;
+	u64 mmfr0;
+	int ret;
+	int i;
+
+	/*
+	 * We (KVM) allocate an Interrupt State Table (IST) for SPIs. The
+	 * hardware mandates that lower 6 bits of the address are 0. Each ISTE
+	 * is 4 bytes in size (or larger if metadata storage is required). In
+	 * order to simplify the allocation logic, we round up the minimum
+	 * number of SPIs to 16 (2^6 = 64, 64/4 = 16).
+	 */
+	if (nr_spis && nr_spis < 16)
+		nr_spis = 16;
+
+	if (nr_spis) {
+		dist->spis = kcalloc(nr_spis, sizeof(struct vgic_irq),
+				     GFP_KERNEL_ACCOUNT);
+		if (!dist->spis)
+			return -ENOMEM;
+
+		/*
+		 * In the following code we do not take the irq struct lock since
+		 * no other action on irq structs can happen while the VGIC is
+		 * not initialized yet.
+		 */
+		for (i = 0; i < nr_spis; i++) {
+			struct vgic_irq *irq = &dist->spis[i];
+
+			irq->intid = vgic_v5_make_spi(i);
+			INIT_LIST_HEAD(&irq->ap_list);
+			raw_spin_lock_init(&irq->irq_lock);
+			irq->vcpu = NULL;
+			irq->target_vcpu = vcpu0;
+			refcount_set(&irq->refcount, 0);
+			/*
+			 * The guest controls the enable state, and again it is
+			 * directly handled by the hardware. From our point of
+			 * view it is always enabled.
+			 */
+			irq->enabled = 1;
+		}
+
+		nr_spi_bits = fls(roundup_pow_of_two(nr_spis)) - 1;
+
+		istsz = GICV5_IRS_IST_CFGR_ISTSZ_4;
+		if (vgic_v5_host_caps()->istmd) {
+			istmd_sz = vgic_v5_host_caps()->istmd_sz;
+
+			if (nr_spi_bits < istmd_sz)
+				istsz = GICV5_IRS_IST_CFGR_ISTSZ_8;
+			else
+				istsz = GICV5_IRS_IST_CFGR_ISTSZ_16;
+		}
+
+		ret = vgic_v5_spi_ist_allocate(kvm, &spi_ist_phys_base,
+					       nr_spi_bits, istsz);
+		if (ret)
+			return ret;
+
+		ret = vgic_v5_vmte_assign_ist(kvm, spi_ist_phys_base, false,
+					      nr_spi_bits, 0, istsz, true);
+		if (ret) {
+			vgic_v5_free_allocated_spi_ist(kvm);
+			return ret;
+		}
+	}
+
+	/* Set sane initial state for the IRS MMIO registers */
+
+	irs->idr0.domain = GICV5_IRS_IDR0_DOMAIN_NON_SECURE;
+
+	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+	irs->idr0.pa_range = cpuid_feature_extract_unsigned_field(
+		mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+
+	irs->idr0.virt = 0;
+	irs->idr0.one_of_n = 0;
+	irs->idr0.virt_one_of_n = 0;
+	irs->idr0.setlpi = 0;
+	irs->idr0.mec = 0;
+	irs->idr0.mpam = 0;
+	irs->idr0.swe = 0;
+	irs->idr0.irs_id = 0;
+
+	irs->idr1.priority_bits = gicv5_global_data.irs_pri_bits - 1;
+
+	/*
+	 * Support 16-bits of ID space for the IRS. This should be sufficient
+	 * for most applications, and the CPUIF is guaranteed to have at least
+	 * 16-bits of ID space support (we actually present 16-bits there, even
+	 * if the hardware supports more). Warn if the hardware doesn't support
+	 * 16 bits, and use the smaller value. YMMV!
+	 *
+	 * As for the minimum number of ID bits, we match the hardware's
+	 * capability.
+	 */
+	if (vgic_v5_host_caps()->ist_id_bits < 16)
+		pr_warn("Host IRS supports fewer than 16 ID bits for ISTs (%u)\n",
+			vgic_v5_host_caps()->ist_id_bits);
+
+	irs->idr2.id_bits = min(16, vgic_v5_host_caps()->ist_id_bits);
+	irs->idr2.min_lpi_id_bits = vgic_v5_host_caps()->min_lpi_id_bits;
+
+	/* Only allow the guest to create Linear ISTs - simplifies Save/Restore */
+	irs->idr2.ist_levels = 0;
+	irs->idr2.ist_l2sz = GICV5_IRS_IST_CFGR_L2SZ_4K;
+	irs->idr2.istmd = 0;
+	irs->idr2.istmd_sz = 0;
+
+	/* We have a single IRS, only. All SPIs reside here! */
+	irs->idr5.spi_range = nr_spis;
+	irs->idr6.spi_irs_range = nr_spis;
+	irs->idr7.spi_base = 0;
+
+	irs->cr1.sh = 0;
+	irs->cr1.oc = 0;
+	irs->cr1.ic = 0;
+	irs->cr1.ist_ra = 0;
+	irs->cr1.ist_wa = 0;
+	irs->cr1.vmt_ra = 0;
+	irs->cr1.vpet_ra = 0;
+	irs->cr1.vmd_ra = 0;
+	irs->cr1.vmd_wa = 0;
+	irs->cr1.vped_ra = 0;
+	irs->cr1.vped_wa = 0;
+
+	irs->spi_selr.id = -1;
+
+	irs->pe_selr.iaffid = -1;
+
+	irs->ist_cfgr.lpi_id_bits = 0;
+	irs->ist_cfgr.l2sz = 0;
+	irs->ist_cfgr.istsz = 0;
+	irs->ist_cfgr.structure = 0;
+
+	irs->ist_baser.valid = 0;
+	irs->ist_baser.addr = 0;
+
+	return 0;
+}
diff --git a/arch/arm64/kvm/vgic/vgic-v5-tables.c b/arch/arm64/kvm/vgic/vgic-v5-tables.c
index 0120c3205dea6..77fc5fb27f30d 100644
--- a/arch/arm64/kvm/vgic/vgic-v5-tables.c
+++ b/arch/arm64/kvm/vgic/vgic-v5-tables.c
@@ -578,6 +578,22 @@ int vgic_v5_vmte_release(struct kvm *kvm)
 	return 0;
 }
 
+/*
+ * Provide a way for the IRS MMIO emulation to correctly populate the number of
+ * IAFFID bits (which correspond to our vpe_id_bits).
+ */
+u8 vgic_v5_vmte_vpe_id_bits(struct kvm_vcpu *vcpu)
+{
+	u16 vm_id = vgic_v5_vm_id(vcpu->kvm);
+	struct vgic_v5_vm_info *vmi;
+
+	vmi = xa_load(&vm_info, vm_id);
+	if (WARN_ON_ONCE(!vmi))
+		return 0;
+
+	return vmi->vpe_id_bits;
+}
+
 /*
  * Allocate a VPE descriptor and provide it to the hardware via the VPE Table.
  */
diff --git a/arch/arm64/kvm/vgic/vgic-v5-tables.h b/arch/arm64/kvm/vgic/vgic-v5-tables.h
index 6a024337eba79..25e1c9fff87b4 100644
--- a/arch/arm64/kvm/vgic/vgic-v5-tables.h
+++ b/arch/arm64/kvm/vgic/vgic-v5-tables.h
@@ -158,6 +158,7 @@ void vgic_v5_release_vm_id(struct kvm *kvm);
 
 int vgic_v5_vmte_init(struct kvm *kvm);
 int vgic_v5_vmte_release(struct kvm *kvm);
+u8 vgic_v5_vmte_vpe_id_bits(struct kvm_vcpu *vcpu);
 int vgic_v5_vmte_alloc_vpe(struct kvm_vcpu *vcpu);
 int vgic_v5_vmte_free_vpe(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
index f2f5fdc3211d7..282278e4a6c19 100644
--- a/arch/arm64/kvm/vgic/vgic.h
+++ b/arch/arm64/kvm/vgic/vgic.h
@@ -366,6 +366,7 @@ void vgic_debug_destroy(struct kvm *kvm);
 int vgic_v5_probe(const struct gic_kvm_info *info);
 void vgic_v5_reset(struct kvm_vcpu *vcpu);
 int vgic_v5_init(struct kvm *kvm);
+int kvm_vgic_v5_irs_init(struct kvm *kvm, unsigned int nr_spis);
 void vgic_v5_teardown(struct kvm *kvm);
 int vgic_v5_map_resources(struct kvm *kvm);
 void vgic_v5_set_ppi_ops(struct kvm_vcpu *vcpu, u32 vintid);
@@ -378,6 +379,7 @@ void vgic_v5_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcr);
 void vgic_v5_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcr);
 void vgic_v5_restore_state(struct kvm_vcpu *vcpu);
 void vgic_v5_save_state(struct kvm_vcpu *vcpu);
+int vgic_v5_register_irs_iodev(struct kvm *kvm, gpa_t irs_base_address);
 
 #define for_each_visible_v5_ppi(__i, __k)		\
 	for_each_set_bit(__i, (__k)->arch.vgic.gicv5_vm.vgic_ppi_mask, VGIC_V5_NR_PRIVATE_IRQS)
-- 
2.34.1

