From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
To: qemu-devel@nongnu.org, Mohamed Mediouni <mohamed@unpredictable.fr>
Cc: "Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Zhao Liu" <zhao1.liu@intel.com>,
qemu-arm@nongnu.org, "Peter Maydell" <peter.maydell@linaro.org>,
"Roman Bolshakov" <rbolshakov@ddn.com>,
"Alexander Graf" <agraf@csgraf.de>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Phil Dennis-Jordan" <phil@philjordan.eu>,
"Mohamed Mediouni" <mohamed@unpredictable.fr>
Subject: Re: [PATCH v20 01/15] hw/intc: Add hvf vGIC interrupt controller support
Date: Fri, 24 Apr 2026 09:38:54 +0300 [thread overview]
Message-ID: <tdzkba.5057eg6jyy9z@linaro.org> (raw)
In-Reply-To: <20260316130642.13246-2-mohamed@unpredictable.fr>
On Mon, 16 Mar 2026 15:06, Mohamed Mediouni <mohamed@unpredictable.fr> wrote:
>This opens up the door to nested virtualisation support.
>
>Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
>Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
>---
I'm not familiar with nested virt, but I took a look through the code
and overall it LGTM; I left a few comments inline.
Feel free to add my r-b regardless of whether you address them, since
they are just nitpicks:
Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
> hw/intc/arm_gicv3_hvf.c | 741 +++++++++++++++++++++++++++++
> hw/intc/meson.build | 1 +
> include/hw/intc/arm_gicv3_common.h | 1 +
> 3 files changed, 743 insertions(+)
> create mode 100644 hw/intc/arm_gicv3_hvf.c
>
>diff --git a/hw/intc/arm_gicv3_hvf.c b/hw/intc/arm_gicv3_hvf.c
>new file mode 100644
>index 0000000000..55171a796b
>--- /dev/null
>+++ b/hw/intc/arm_gicv3_hvf.c
>@@ -0,0 +1,741 @@
>+/* SPDX-License-Identifier: GPL-2.0-or-later */
>+/*
>+ * ARM Generic Interrupt Controller using HVF platform support
>+ *
>+ * Copyright (c) 2025 Mohamed Mediouni
>+ * Based on vGICv3 KVM code by Pavel Fedin
>+ *
>+ */
>+
>+#include "qemu/osdep.h"
>+#include "qapi/error.h"
>+#include "hw/intc/arm_gicv3_common.h"
>+#include "qemu/error-report.h"
>+#include "qemu/module.h"
>+#include "system/runstate.h"
>+#include "system/hvf.h"
>+#include "system/hvf_int.h"
>+#include "hvf_arm.h"
>+#include "gicv3_internal.h"
>+#include "vgic_common.h"
>+#include "qom/object.h"
>+#include "target/arm/cpregs.h"
>+#include <Hypervisor/Hypervisor.h>
>+
>+/* For the GIC, override the check outright, as availability is checked elsewhere. */
>+#pragma clang diagnostic push
>+#pragma clang diagnostic ignored "-Wunguarded-availability"
Question: is this pragma necessary (i.e. the only way to do this), or
just a quick workaround?
>+
>+struct HVFARMGICv3Class {
>+ ARMGICv3CommonClass parent_class;
>+ DeviceRealize parent_realize;
>+ ResettablePhases parent_phases;
>+};
>+
>+typedef struct HVFARMGICv3Class HVFARMGICv3Class;
>+
>+/* This is reusing the GICv3State typedef from ARM_GICV3_ITS_COMMON */
>+DECLARE_OBJ_CHECKERS(GICv3State, HVFARMGICv3Class,
>+ HVF_GICV3, TYPE_HVF_GICV3);
>+
>+/*
>+ * Loop through each distributor IRQ related register; since bits
>+ * corresponding to SPIs and PPIs are RAZ/WI when affinity routing
>+ * is enabled, we skip those.
>+ */
>+#define for_each_dist_irq_reg(_irq, _max, _field_width) \
>+ for (_irq = GIC_INTERNAL; _irq < _max; _irq += (32 / _field_width))
Nit: _max and _field_width should be enclosed in parentheses, since
this is a macro and callers may pass expressions as arguments
>+
>+/*
>+ * Wrap calls to the vGIC APIs to assert_hvf_ok()
>+ * as a macro to keep the code clean.
>+ */
Suggestion, not a request: I like seeing the assert inline when reading
code, instead of behind an indirection :)
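i.e. dropping the macros and calling assert_hvf_ok() directly at each
call site:

```c
assert_hvf_ok(hv_gic_get_distributor_reg(offset, &reg));
```

Not a blocker, just personal preference.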
>+#define hv_gic_get_distributor_reg(offset, reg) \
>+ assert_hvf_ok(hv_gic_get_distributor_reg(offset, reg))
>+
>+#define hv_gic_set_distributor_reg(offset, reg) \
>+ assert_hvf_ok(hv_gic_set_distributor_reg(offset, reg))
>+
>+#define hv_gic_get_redistributor_reg(vcpu, reg, value) \
>+ assert_hvf_ok(hv_gic_get_redistributor_reg(vcpu, reg, value))
>+
>+#define hv_gic_set_redistributor_reg(vcpu, reg, value) \
>+ assert_hvf_ok(hv_gic_set_redistributor_reg(vcpu, reg, value))
>+
>+#define hv_gic_get_icc_reg(vcpu, reg, value) \
>+ assert_hvf_ok(hv_gic_get_icc_reg(vcpu, reg, value))
>+
>+#define hv_gic_set_icc_reg(vcpu, reg, value) \
>+ assert_hvf_ok(hv_gic_set_icc_reg(vcpu, reg, value))
>+
>+#define hv_gic_get_ich_reg(vcpu, reg, value) \
>+ assert_hvf_ok(hv_gic_get_ich_reg(vcpu, reg, value))
>+
>+#define hv_gic_set_ich_reg(vcpu, reg, value) \
>+ assert_hvf_ok(hv_gic_set_ich_reg(vcpu, reg, value))
>+
>+static void hvf_dist_get_priority(GICv3State *s, hv_gic_distributor_reg_t offset
>+ , uint8_t *bmp)
Not sure if it's email formatting or code formatting, but that leading
comma seems out of place (ditto in the next function)
>+{
>+ uint64_t reg;
>+ uint32_t *field;
>+ int irq;
>+ field = (uint32_t *)(bmp);
>+
>+ for_each_dist_irq_reg(irq, s->num_irq, 8) {
>+ hv_gic_get_distributor_reg(offset, &reg);
>+ *field = reg;
>+ offset += 4;
>+ field++;
>+ }
>+}
>+
>+static void hvf_dist_put_priority(GICv3State *s, hv_gic_distributor_reg_t offset
>+ , uint8_t *bmp)
>+{
>+ uint32_t reg, *field;
>+ int irq;
>+ field = (uint32_t *)(bmp);
>+
>+ for_each_dist_irq_reg(irq, s->num_irq, 8) {
>+ reg = *field;
>+ hv_gic_set_distributor_reg(offset, reg);
>+ offset += 4;
>+ field++;
>+ }
>+}
>+
>+static void hvf_dist_get_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,
>+ uint32_t *bmp)
>+{
>+ uint64_t reg;
>+ int irq;
>+
>+ for_each_dist_irq_reg(irq, s->num_irq, 2) {
>+ hv_gic_get_distributor_reg(offset, &reg);
>+ reg = half_unshuffle32(reg >> 1);
>+ if (irq % 32 != 0) {
>+ reg = (reg << 16);
>+ }
>+ *gic_bmp_ptr32(bmp, irq) |= reg;
>+ offset += 4;
>+ }
>+}
>+
>+static void hvf_dist_put_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,
>+ uint32_t *bmp)
>+{
>+ uint32_t reg;
>+ int irq;
>+
>+ for_each_dist_irq_reg(irq, s->num_irq, 2) {
>+ reg = *gic_bmp_ptr32(bmp, irq);
>+ if (irq % 32 != 0) {
>+ reg = (reg & 0xffff0000) >> 16;
>+ } else {
>+ reg = reg & 0xffff;
>+ }
>+ reg = half_shuffle32(reg) << 1;
>+ hv_gic_set_distributor_reg(offset, reg);
>+ offset += 4;
>+ }
>+}
>+
>+/* Read a bitmap register group from the kernel VGIC. */
>+static void hvf_dist_getbmp(GICv3State *s, hv_gic_distributor_reg_t offset, uint32_t *bmp)
>+{
>+ uint64_t reg;
>+ int irq;
>+
>+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
>+
Stray blank line
>+ hv_gic_get_distributor_reg(offset, &reg);
>+ *gic_bmp_ptr32(bmp, irq) = reg;
>+ offset += 4;
>+ }
>+}
>+
>+static void hvf_dist_putbmp(GICv3State *s, hv_gic_distributor_reg_t offset,
>+ hv_gic_distributor_reg_t clroffset, uint32_t *bmp)
>+{
>+ uint32_t reg;
>+ int irq;
>+
>+ for_each_dist_irq_reg(irq, s->num_irq, 1) {
>+ /*
>+ * If this bitmap is a set/clear register pair, first write to the
>+ * clear-reg to clear all bits before using the set-reg to write
>+ * the 1 bits.
>+ */
>+ if (clroffset != 0) {
>+ reg = 0;
>+ hv_gic_set_distributor_reg(clroffset, reg);
>+ clroffset += 4;
>+ }
>+ reg = *gic_bmp_ptr32(bmp, irq);
>+ hv_gic_set_distributor_reg(offset, reg);
>+ offset += 4;
>+ }
>+}
>+
>+static void hvf_gicv3_check(GICv3State *s)
>+{
>+ uint64_t reg;
>+ uint32_t num_irq;
>+
>+ /* Sanity checking s->num_irq */
>+ hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_TYPER, &reg);
>+ num_irq = ((reg & 0x1f) + 1) * 32;
>+
>+ if (num_irq < s->num_irq) {
>+ error_report("Model requests %u IRQs, but HVF supports max %u",
>+ s->num_irq, num_irq);
>+ abort();
>+ }
>+}
>+
>+static void hvf_gicv3_put_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)
>+{
>+ int num_pri_bits;
>+
>+ /* Redistributor state */
>+ GICv3CPUState *c = arg.host_ptr;
>+ hv_vcpu_t vcpu = c->cpu->accel->fd;
>+
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, c->ich_vmcr_el2);
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, c->ich_hcr_el2);
>+
>+ for (int i = 0; i < GICV3_LR_MAX; i++) {
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2, c->ich_lr_el2[i]);
>+ }
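Shouldn't the register index advance with the loop variable here? As
written, every iteration writes LR0. I'd have expected something like
this, assuming the ICH LR registers are consecutively numbered in the
enum, the same way the AP0Rn/AP1Rn offsets are used below (untested):

```c
for (int i = 0; i < GICV3_LR_MAX; i++) {
    hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, c->ich_lr_el2[i]);
}
```

(Same pattern in hvf_gicv3_get_cpu_el2 below, which reads LR0 into every
ich_lr_el2[i].)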
>+
>+ num_pri_bits = c->vpribits;
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,
>+ c->ich_apr[GICV3_G0][3]);
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,
>+ c->ich_apr[GICV3_G0][2]);
>+ /* fall through */
>+ case 6:
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,
>+ c->ich_apr[GICV3_G0][1]);
>+ /* fall through */
>+ default:
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,
>+ c->ich_apr[GICV3_G0][0]);
>+ }
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,
>+ c->ich_apr[GICV3_G1NS][3]);
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,
>+ c->ich_apr[GICV3_G1NS][2]);
>+ /* fall through */
>+ case 6:
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,
>+ c->ich_apr[GICV3_G1NS][1]);
>+ /* fall through */
>+ default:
>+ hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,
>+ c->ich_apr[GICV3_G1NS][0]);
>+ }
>+}
>+
>+static void hvf_gicv3_put_cpu(CPUState *cpu_state, run_on_cpu_data arg)
>+{
>+ uint32_t reg;
>+ uint64_t reg64;
>+ int i, num_pri_bits;
>+
>+ /* Redistributor state */
>+ GICv3CPUState *c = arg.host_ptr;
>+ hv_vcpu_t vcpu = c->cpu->accel->fd;
>+
>+ reg = c->gicr_waker;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);
>+
>+ reg = c->gicr_igroupr0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);
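Is the first write here intentional? c->gicr_waker is written to
GICR_IGROUPR0, and then immediately overwritten by c->gicr_igroupr0 —
looks like a copy-paste slip. If HVF exposes a WAKER register, I'd
have expected something like this (register name guessed, untested):

```c
reg = c->gicr_waker;
hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_WAKER, reg);
```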
>+
>+ reg = ~0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICENABLER0, reg);
>+ reg = c->gicr_ienabler0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0, reg);
>+
>+ /* Restore config before pending so we treat level/edge correctly */
>+ reg = half_shuffle32(c->edge_trigger >> 16) << 1;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1, reg);
>+
>+ reg = ~0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICPENDR0, reg);
>+ reg = c->gicr_ipendr0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0, reg);
>+
>+ reg = ~0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICACTIVER0, reg);
>+ reg = c->gicr_iactiver0;
>+ hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0, reg);
>+
>+ for (i = 0; i < GIC_INTERNAL; i += 4) {
>+ reg = c->gicr_ipriorityr[i] |
>+ (c->gicr_ipriorityr[i + 1] << 8) |
>+ (c->gicr_ipriorityr[i + 2] << 16) |
>+ (c->gicr_ipriorityr[i + 3] << 24);
>+ hv_gic_set_redistributor_reg(vcpu,
>+ HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, reg);
>+ }
>+
>+ /* CPU interface state */
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, c->icc_sre_el1);
>+
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,
>+ c->icc_ctlr_el1[GICV3_NS]);
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,
>+ c->icc_igrpen[GICV3_G0]);
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,
>+ c->icc_igrpen[GICV3_G1NS]);
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, c->icc_pmr_el1);
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, c->icc_bpr[GICV3_G0]);
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, c->icc_bpr[GICV3_G1NS]);
>+
>+ num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] &
>+ ICC_CTLR_EL1_PRIBITS_MASK) >>
>+ ICC_CTLR_EL1_PRIBITS_SHIFT) + 1;
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ reg64 = c->icc_apr[GICV3_G0][3];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3, reg64);
>+ reg64 = c->icc_apr[GICV3_G0][2];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2, reg64);
>+ /* fall through */
>+ case 6:
>+ reg64 = c->icc_apr[GICV3_G0][1];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1, reg64);
>+ /* fall through */
>+ default:
>+ reg64 = c->icc_apr[GICV3_G0][0];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1, reg64);
>+ }
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ reg64 = c->icc_apr[GICV3_G1NS][3];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3, reg64);
>+ reg64 = c->icc_apr[GICV3_G1NS][2];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2, reg64);
>+ /* fall through */
>+ case 6:
>+ reg64 = c->icc_apr[GICV3_G1NS][1];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1, reg64);
>+ /* fall through */
>+ default:
>+ reg64 = c->icc_apr[GICV3_G1NS][0];
>+ hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1, reg64);
>+ }
>+
>+ /* Registers beyond this point are with nested virt only */
>+ if (c->gic->maint_irq) {
>+ hvf_gicv3_put_cpu_el2(cpu_state, arg);
>+ }
>+}
>+
>+static void hvf_gicv3_put(GICv3State *s)
>+{
>+ uint32_t reg;
>+ int ncpu, i;
>+
>+ hvf_gicv3_check(s);
>+
>+ reg = s->gicd_ctlr;
>+ hv_gic_set_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, reg);
>+
>+ /* per-CPU state */
>+
>+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
>+ run_on_cpu_data data;
>+ data.host_ptr = &s->cpu[ncpu];
>+ run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_put_cpu, data);
>+ }
>+
>+ /* s->enable bitmap -> GICD_ISENABLERn */
>+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0
>+ , HV_GIC_DISTRIBUTOR_REG_GICD_ICENABLER0, s->enabled);
Stray leading comma again
>+
>+ /* s->group bitmap -> GICD_IGROUPRn */
>+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0
>+ , 0, s->group);
>+
>+ /* Restore targets before pending to ensure the pending state is set on
>+ * the appropriate CPU interfaces in the kernel
>+ */
>+
>+ /* s->gicd_irouter[irq] -> GICD_IROUTERn */
>+ for (i = GIC_INTERNAL; i < s->num_irq; i++) {
>+ uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32 + (8 * i)
>+ - (8 * GIC_INTERNAL);
>+ hv_gic_set_distributor_reg(offset, s->gicd_irouter[i]);
>+ }
>+
>+ /*
>+ * s->trigger bitmap -> GICD_ICFGRn
>+ * (restore configuration registers before pending IRQs so we treat
>+ * level/edge correctly)
>+ */
>+ hvf_dist_put_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0, s->edge_trigger);
>+
>+ /* s->pending bitmap -> GICD_ISPENDRn */
>+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0,
>+ HV_GIC_DISTRIBUTOR_REG_GICD_ICPENDR0, s->pending);
>+
>+ /* s->active bitmap -> GICD_ISACTIVERn */
>+ hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0,
>+ HV_GIC_DISTRIBUTOR_REG_GICD_ICACTIVER0, s->active);
>+
>+ /* s->gicd_ipriority[] -> GICD_IPRIORITYRn */
>+ hvf_dist_put_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0, s->gicd_ipriority);
>+}
>+
>+static void hvf_gicv3_get_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)
>+{
>+ int num_pri_bits;
>+
>+ /* Redistributor state */
>+ GICv3CPUState *c = arg.host_ptr;
>+ hv_vcpu_t vcpu = c->cpu->accel->fd;
>+
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, &c->ich_vmcr_el2);
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, &c->ich_hcr_el2);
>+
>+ for (int i = 0; i < GICV3_LR_MAX; i++) {
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2, &c->ich_lr_el2[i]);
>+ }
>+
>+ num_pri_bits = c->vpribits;
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,
>+ &c->ich_apr[GICV3_G0][3]);
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,
>+ &c->ich_apr[GICV3_G0][2]);
>+ /* fall through */
>+ case 6:
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,
>+ &c->ich_apr[GICV3_G0][1]);
>+ /* fall through */
>+ default:
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,
>+ &c->ich_apr[GICV3_G0][0]);
>+ }
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,
>+ &c->ich_apr[GICV3_G1NS][3]);
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,
>+ &c->ich_apr[GICV3_G1NS][2]);
>+ /* fall through */
>+ case 6:
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,
>+ &c->ich_apr[GICV3_G1NS][1]);
>+ /* fall through */
>+ default:
>+ hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,
>+ &c->ich_apr[GICV3_G1NS][0]);
>+ }
>+}
>+
>+static void hvf_gicv3_get_cpu(CPUState *cpu_state, run_on_cpu_data arg)
>+{
>+ uint64_t reg;
>+ int i, num_pri_bits;
>+
>+ /* Redistributor state */
>+ GICv3CPUState *c = arg.host_ptr;
>+ hv_vcpu_t vcpu = c->cpu->accel->fd;
>+
>+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0,
>+ &reg);
>+ c->gicr_igroupr0 = reg;
>+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0,
>+ &reg);
>+ c->gicr_ienabler0 = reg;
>+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1,
>+ &reg);
>+ c->edge_trigger = half_unshuffle32(reg >> 1) << 16;
>+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0,
>+ &reg);
>+ c->gicr_ipendr0 = reg;
>+ hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0,
>+ &reg);
>+ c->gicr_iactiver0 = reg;
>+
>+ for (i = 0; i < GIC_INTERNAL; i += 4) {
>+ hv_gic_get_redistributor_reg(
>+ vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, &reg);
>+ c->gicr_ipriorityr[i] = extract32(reg, 0, 8);
>+ c->gicr_ipriorityr[i + 1] = extract32(reg, 8, 8);
>+ c->gicr_ipriorityr[i + 2] = extract32(reg, 16, 8);
>+ c->gicr_ipriorityr[i + 3] = extract32(reg, 24, 8);
>+ }
>+
>+ /* CPU interface */
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, &c->icc_sre_el1);
>+
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,
>+ &c->icc_ctlr_el1[GICV3_NS]);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,
>+ &c->icc_igrpen[GICV3_G0]);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,
>+ &c->icc_igrpen[GICV3_G1NS]);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, &c->icc_pmr_el1);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, &c->icc_bpr[GICV3_G0]);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, &c->icc_bpr[GICV3_G1NS]);
>+ num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] & ICC_CTLR_EL1_PRIBITS_MASK) >>
>+ ICC_CTLR_EL1_PRIBITS_SHIFT) +
>+ 1;
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3,
>+ &c->icc_apr[GICV3_G0][3]);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2,
>+ &c->icc_apr[GICV3_G0][2]);
>+ /* fall through */
>+ case 6:
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1,
>+ &c->icc_apr[GICV3_G0][1]);
>+ /* fall through */
>+ default:
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1,
>+ &c->icc_apr[GICV3_G0][0]);
>+ }
>+
>+ switch (num_pri_bits) {
>+ case 7:
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3,
>+ &c->icc_apr[GICV3_G1NS][3]);
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2,
>+ &c->icc_apr[GICV3_G1NS][2]);
>+ /* fall through */
>+ case 6:
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1,
>+ &c->icc_apr[GICV3_G1NS][1]);
>+ /* fall through */
>+ default:
>+ hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1,
>+ &c->icc_apr[GICV3_G1NS][0]);
>+ }
>+
>+ /* Registers beyond this point are with nested virt only */
>+ if (c->gic->maint_irq) {
>+ hvf_gicv3_get_cpu_el2(cpu_state, arg);
>+ }
>+}
>+
>+static void hvf_gicv3_get(GICv3State *s)
>+{
>+ uint64_t reg;
>+ int ncpu, i;
>+
>+ hvf_gicv3_check(s);
>+
>+ hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, &reg);
>+ s->gicd_ctlr = reg;
>+
>+ /* Redistributor state (one per CPU) */
>+
>+ for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
>+ run_on_cpu_data data;
>+ data.host_ptr = &s->cpu[ncpu];
>+ run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_get_cpu, data);
>+ }
>+
>+ /* GICD_IGROUPRn -> s->group bitmap */
>+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0, s->group);
>+
>+ /* GICD_ISENABLERn -> s->enabled bitmap */
>+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0, s->enabled);
>+
>+ /* GICD_ISPENDRn -> s->pending bitmap */
>+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0, s->pending);
>+
>+ /* GICD_ISACTIVERn -> s->active bitmap */
>+ hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0, s->active);
>+
>+ /* GICD_ICFGRn -> s->trigger bitmap */
>+ hvf_dist_get_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0
>+ , s->edge_trigger);
>+
>+ /* GICD_IPRIORITYRn -> s->gicd_ipriority[] */
>+ hvf_dist_get_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0
>+ , s->gicd_ipriority);
>+
>+ /* GICD_IROUTERn -> s->gicd_irouter[irq] */
>+ for (i = GIC_INTERNAL; i < s->num_irq; i++) {
>+ uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32
>+ + (8 * i) - (8 * GIC_INTERNAL);
>+ hv_gic_get_distributor_reg(offset, &s->gicd_irouter[i]);
>+ }
>+}
>+
>+static void hvf_gicv3_set_irq(void *opaque, int irq, int level)
>+{
>+ GICv3State *s = opaque;
>+ if (irq > s->num_irq) {
>+ return;
>+ }
>+ hv_gic_set_spi(GIC_INTERNAL + irq, !!level);
>+}
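Question: is the bounds check right? irq here is a zero-based SPI index
(GIC_INTERNAL is added below), so I'd have expected something like:

```c
if (irq >= s->num_irq - GIC_INTERNAL) {
    return;
}
```

but maybe I'm misreading how num_irq is counted here.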
>+
>+static void hvf_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
>+{
>+ GICv3CPUState *c;
>+
>+ c = env->gicv3state;
>+ c->icc_pmr_el1 = 0;
>+ /*
>+ * Architecturally the reset value of the ICC_BPR registers
>+ * is UNKNOWN. We set them all to 0 here; when the kernel
>+ * uses these values to program the ICH_VMCR_EL2 fields that
>+ * determine the guest-visible ICC_BPR register values, the
>+ * hardware's "writing a value less than the minimum sets
>+ * the field to the minimum value" behaviour will result in
>+ * them effectively resetting to the correct minimum value
>+ * for the host GIC.
>+ */
>+ c->icc_bpr[GICV3_G0] = 0;
>+ c->icc_bpr[GICV3_G1] = 0;
>+ c->icc_bpr[GICV3_G1NS] = 0;
>+
>+ c->icc_sre_el1 = 0x7;
>+ memset(c->icc_apr, 0, sizeof(c->icc_apr));
>+ memset(c->icc_igrpen, 0, sizeof(c->icc_igrpen));
>+}
>+
>+static void hvf_gicv3_reset_hold(Object *obj, ResetType type)
>+{
>+ GICv3State *s = ARM_GICV3_COMMON(obj);
>+ HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);
>+
>+ if (kgc->parent_phases.hold) {
>+ kgc->parent_phases.hold(obj, type);
>+ }
>+
>+ hvf_gicv3_put(s);
>+}
>+
>+
>+/*
>+ * CPU interface registers of GIC needs to be reset on CPU reset.
>+ * For the calling arm_gicv3_icc_reset() on CPU reset, we register
>+ * below ARMCPRegInfo. As we reset the whole cpu interface under single
>+ * register reset, we define only one register of CPU interface instead
>+ * of defining all the registers.
>+ */
>+static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
>+ { .name = "ICC_CTLR_EL1", .state = ARM_CP_STATE_BOTH,
>+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 4,
>+ /*
>+ * If ARM_CP_NOP is used, resetfn is not called,
>+ * So ARM_CP_NO_RAW is appropriate type.
>+ */
>+ .type = ARM_CP_NO_RAW,
>+ .access = PL1_RW,
>+ .readfn = arm_cp_read_zero,
>+ .writefn = arm_cp_write_ignore,
>+ /*
>+ * We hang the whole cpu interface reset routine off here
>+ * rather than parcelling it out into one little function
>+ * per register
>+ */
>+ .resetfn = hvf_gicv3_icc_reset,
>+ },
>+};
>+
>+static void hvf_gicv3_realize(DeviceState *dev, Error **errp)
>+{
>+ ERRP_GUARD();
>+ GICv3State *s = HVF_GICV3(dev);
>+ HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);
>+ int i;
>+
>+ kgc->parent_realize(dev, errp);
>+ if (*errp) {
>+ return;
>+ }
>+
>+ if (s->revision != 3) {
>+ error_setg(errp, "unsupported GIC revision %d for platform GIC",
>+ s->revision);
>+ }
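Missing return after error_setg? As written we fall through and keep
realizing with errp already set, unlike the checks below:

```c
if (s->revision != 3) {
    error_setg(errp, "unsupported GIC revision %d for platform GIC",
               s->revision);
    return;
}
```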
>+
>+ if (s->security_extn) {
>+ error_setg(errp, "the platform vGICv3 does not implement the "
>+ "security extensions");
>+ return;
>+ }
>+
>+ if (s->nmi_support) {
>+ error_setg(errp, "NMI is not supported with the platform GIC");
>+ return;
>+ }
>+
>+ if (s->nb_redist_regions > 1) {
>+ error_setg(errp, "Multiple VGICv3 redistributor regions are not "
>+ "supported by HVF");
>+ error_append_hint(errp, "A maximum of %d VCPUs can be used",
>+ s->redist_region_count[0]);
>+ return;
>+ }
>+
>+ gicv3_init_irqs_and_mmio(s, hvf_gicv3_set_irq, NULL);
>+
>+ for (i = 0; i < s->num_cpu; i++) {
>+ ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));
>+
>+ define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
>+ }
>+
>+ if (s->maint_irq && s->maint_irq != HV_GIC_INT_MAINTENANCE) {
>+ error_setg(errp, "vGIC maintenance IRQ mismatch with the hardcoded one in HVF.");
>+ return;
>+ }
>+}
>+
>+static void hvf_gicv3_class_init(ObjectClass *klass, const void *data)
>+{
>+ DeviceClass *dc = DEVICE_CLASS(klass);
>+ ResettableClass *rc = RESETTABLE_CLASS(klass);
>+ ARMGICv3CommonClass *agcc = ARM_GICV3_COMMON_CLASS(klass);
>+ HVFARMGICv3Class *kgc = HVF_GICV3_CLASS(klass);
>+
>+ agcc->pre_save = hvf_gicv3_get;
>+ agcc->post_load = hvf_gicv3_put;
>+
>+ device_class_set_parent_realize(dc, hvf_gicv3_realize,
>+ &kgc->parent_realize);
>+ resettable_class_set_parent_phases(rc, NULL, hvf_gicv3_reset_hold, NULL,
>+ &kgc->parent_phases);
>+}
>+
>+static const TypeInfo hvf_arm_gicv3_info = {
>+ .name = TYPE_HVF_GICV3,
>+ .parent = TYPE_ARM_GICV3_COMMON,
>+ .instance_size = sizeof(GICv3State),
>+ .class_init = hvf_gicv3_class_init,
>+ .class_size = sizeof(HVFARMGICv3Class),
>+};
>+
>+static void hvf_gicv3_register_types(void)
>+{
>+ type_register_static(&hvf_arm_gicv3_info);
>+}
>+
>+type_init(hvf_gicv3_register_types)
>+
>+#pragma clang diagnostic pop
>diff --git a/hw/intc/meson.build b/hw/intc/meson.build
>index 96742df090..b7baf8a0f6 100644
>--- a/hw/intc/meson.build
>+++ b/hw/intc/meson.build
>@@ -42,6 +42,7 @@ arm_common_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common
> arm_common_ss.add(when: 'CONFIG_ARM_GICV3', if_true: files('arm_gicv3_cpuif.c'))
> specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))
> specific_ss.add(when: ['CONFIG_WHPX', 'TARGET_AARCH64'], if_true: files('arm_gicv3_whpx.c'))
>+specific_ss.add(when: ['CONFIG_HVF', 'CONFIG_ARM_GICV3'], if_true: files('arm_gicv3_hvf.c'))
> specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))
> arm_common_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_nvic.c'))
> specific_ss.add(when: 'CONFIG_GRLIB', if_true: files('grlib_irqmp.c'))
>diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
>index c55cf18120..9adcab0a0c 100644
>--- a/include/hw/intc/arm_gicv3_common.h
>+++ b/include/hw/intc/arm_gicv3_common.h
>@@ -315,6 +315,7 @@ DECLARE_OBJ_CHECKERS(GICv3State, ARMGICv3CommonClass,
>
> /* Types for GICv3 kernel-irqchip */
> #define TYPE_WHPX_GICV3 "whpx-arm-gicv3"
>+#define TYPE_HVF_GICV3 "hvf-arm-gicv3"
>
> struct ARMGICv3CommonClass {
> /*< private >*/
>--
>2.50.1 (Apple Git-155)
>
>