* [RFC PATCH 0/3] target/arm: Implement FEAT_NMI and FEAT_GICv3_NMI
@ 2024-02-20 12:17 Jinjie Ruan via
2024-02-20 12:17 ` [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt Jinjie Ruan via
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Jinjie Ruan via @ 2024-02-20 12:17 UTC (permalink / raw)
To: peter.maydell, eduardo, marcel.apfelbaum, philmd, wangyanan55,
qemu-devel, qemu-arm
Cc: ruanjinjie
This patch set implements FEAT_NMI and FEAT_GICv3_NMI for Armv8. These
features introduce support for a new category of interrupts in the
architecture, which we can use to provide NMI-like functionality.
There are two modes for masking with FEAT_NMI. When PSTATE.ALLINT is set,
or when PSTATE.SP and SCTLR_ELx.SPINTMASK are both set, any entry to ELx
causes all interrupts, including those with superpriority, to be masked
until the mask is explicitly removed by software or hardware. PSTATE.ALLINT
can be managed by software via the new ALLINT.ALLINT control field.
Independent controls are provided for this feature at each EL, so usage at
EL1 should not disrupt EL2 or EL3.
I have tested it with the following Linux patches, which add FEAT_NMI
support to the Linux kernel:
	https://lore.kernel.org/linux-arm-kernel/Y4sH5qX5bK9xfEBp@lpieralisi/T/#mb4ba4a2c045bf72c10c2202c1dd1b82d3240dc88
In the test, SGI, PPI and SPI interrupts can all be given superpriority so
that they are delivered as hardware NMIs. The SGI is tested with the
kernel's IPI-as-NMI framework, the PPI is tested with the "perf top"
command with hardware NMI enabled, and the SPI is tested with a custom
test module, in which NMI interrupts can be received and transmitted
normally.
Jinjie Ruan (3):
target/arm: Implement FEAT_NMI to support Non-maskable Interrupt
target/arm: Add NMI exception and handle PSTATE.ALLINT on taking an
exception
hw/intc/arm_gicv3: Implement FEAT_GICv3_NMI feature to support
FEAT_NMI
hw/arm/virt.c | 2 +
hw/intc/arm_gicv3.c | 61 ++++++++++++++++++++++++++----
hw/intc/arm_gicv3_common.c | 4 ++
hw/intc/arm_gicv3_cpuif.c | 57 ++++++++++++++++++++++++++--
hw/intc/arm_gicv3_dist.c | 39 +++++++++++++++++++
hw/intc/arm_gicv3_redist.c | 23 +++++++++++
hw/intc/gicv3_internal.h | 5 +++
include/hw/core/cpu.h | 1 +
include/hw/intc/arm_gic_common.h | 1 +
include/hw/intc/arm_gicv3_common.h | 6 +++
target/arm/cpu-features.h | 5 +++
target/arm/cpu-qom.h | 1 +
target/arm/cpu.c | 43 ++++++++++++++++++---
target/arm/cpu.h | 12 +++++-
target/arm/cpu64.c | 31 +++++++++++++++
target/arm/helper.c | 58 ++++++++++++++++++++++++++++
target/arm/internals.h | 4 ++
target/arm/tcg/a64.decode | 1 +
target/arm/tcg/cpu64.c | 11 ++++++
target/arm/tcg/helper-a64.c | 25 ++++++++++++
target/arm/tcg/helper-a64.h | 1 +
target/arm/tcg/translate-a64.c | 10 +++++
22 files changed, 383 insertions(+), 18 deletions(-)
--
2.34.1
* [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt
2024-02-20 12:17 [RFC PATCH 0/3] target/arm: Implement FEAT_NMI and FEAT_GICv3_NMI Jinjie Ruan via
@ 2024-02-20 12:17 ` Jinjie Ruan via
2024-02-20 12:31 ` Peter Maydell
2024-02-20 12:17 ` [RFC PATCH 2/3] target/arm: Add NMI exception and handle PSTATE.ALLINT on taking an exception Jinjie Ruan via
2024-02-20 12:17 ` [RFC PATCH 3/3] hw/intc/arm_gicv3: Implement FEAT_GICv3_NMI feature to support FEAT_NMI Jinjie Ruan via
2 siblings, 1 reply; 6+ messages in thread
From: Jinjie Ruan via @ 2024-02-20 12:17 UTC (permalink / raw)
To: peter.maydell, eduardo, marcel.apfelbaum, philmd, wangyanan55,
qemu-devel, qemu-arm
Cc: ruanjinjie
Enable the Non-maskable Interrupt feature.
Enable the HCRX register feature to support TALLINT read/write.
Add support for enabling/disabling NMI at QEMU startup as below:
qemu-system-aarch64 -cpu cortex-a53/a57/a72/a76,nmi=[on/off]
Add support for ALLINT read/write as follows:
mrs <xt>, ALLINT        // read ALLINT
msr ALLINT, <xt>        // write ALLINT from a register
msr ALLINT, #<imm>      // write ALLINT with immediate 1 or 0
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
target/arm/cpu-features.h | 5 +++++
target/arm/cpu.h | 2 ++
target/arm/cpu64.c | 31 +++++++++++++++++++++++++++++++
target/arm/helper.c | 33 +++++++++++++++++++++++++++++++++
target/arm/internals.h | 1 +
target/arm/tcg/a64.decode | 1 +
target/arm/tcg/cpu64.c | 11 +++++++++++
target/arm/tcg/helper-a64.c | 25 +++++++++++++++++++++++++
target/arm/tcg/helper-a64.h | 1 +
target/arm/tcg/translate-a64.c | 10 ++++++++++
10 files changed, 120 insertions(+)
diff --git a/target/arm/cpu-features.h b/target/arm/cpu-features.h
index 7567854db6..2ad1179be7 100644
--- a/target/arm/cpu-features.h
+++ b/target/arm/cpu-features.h
@@ -681,6 +681,11 @@ static inline bool isar_feature_aa64_sme(const ARMISARegisters *id)
return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, SME) != 0;
}
+static inline bool isar_feature_aa64_nmi(const ARMISARegisters *id)
+{
+ return FIELD_EX64(id->id_aa64pfr1, ID_AA64PFR1, NMI) != 0;
+}
+
static inline bool isar_feature_aa64_tgran4_lpa2(const ARMISARegisters *id)
{
return FIELD_SEX64(id->id_aa64mmfr0, ID_AA64MMFR0, TGRAN4) >= 1;
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 63f31e0d98..ea6e8d6501 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -261,6 +261,7 @@ typedef struct CPUArchState {
uint32_t btype; /* BTI branch type. spsr[11:10]. */
uint64_t daif; /* exception masks, in the bits they are in PSTATE */
uint64_t svcr; /* PSTATE.{SM,ZA} in the bits they are in SVCR */
+ uint64_t allint; /* All IRQ or FIQ interrupt mask, in the bit in PSTATE */
uint64_t elr_el[4]; /* AArch64 exception link regs */
uint64_t sp_el[4]; /* AArch64 banked stack pointers */
@@ -1543,6 +1544,7 @@ FIELD(VTCR, SL2, 33, 1)
#define PSTATE_D (1U << 9)
#define PSTATE_BTYPE (3U << 10)
#define PSTATE_SSBS (1U << 12)
+#define PSTATE_ALLINT (1U << 13)
#define PSTATE_IL (1U << 20)
#define PSTATE_SS (1U << 21)
#define PSTATE_PAN (1U << 22)
diff --git a/target/arm/cpu64.c b/target/arm/cpu64.c
index 8e30a7993e..3a5a3fda1b 100644
--- a/target/arm/cpu64.c
+++ b/target/arm/cpu64.c
@@ -295,6 +295,22 @@ static void cpu_arm_set_sve(Object *obj, bool value, Error **errp)
cpu->isar.id_aa64pfr0 = t;
}
+static bool cpu_arm_get_nmi(Object *obj, Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ return cpu_isar_feature(aa64_nmi, cpu);
+}
+
+static void cpu_arm_set_nmi(Object *obj, bool value, Error **errp)
+{
+ ARMCPU *cpu = ARM_CPU(obj);
+ uint64_t t;
+
+ t = cpu->isar.id_aa64pfr1;
+ t = FIELD_DP64(t, ID_AA64PFR1, NMI, value);
+ cpu->isar.id_aa64pfr1 = t;
+}
+
void arm_cpu_sme_finalize(ARMCPU *cpu, Error **errp)
{
uint32_t vq_map = cpu->sme_vq.map;
@@ -472,6 +488,11 @@ void aarch64_add_sme_properties(Object *obj)
#endif
}
+void aarch64_add_nmi_properties(Object *obj)
+{
+ object_property_add_bool(obj, "nmi", cpu_arm_get_nmi, cpu_arm_set_nmi);
+}
+
void arm_cpu_pauth_finalize(ARMCPU *cpu, Error **errp)
{
ARMPauthFeature features = cpu_isar_feature(pauth_feature, cpu);
@@ -593,9 +614,14 @@ void arm_cpu_lpa2_finalize(ARMCPU *cpu, Error **errp)
static void aarch64_a57_initfn(Object *obj)
{
+ uint64_t t;
ARMCPU *cpu = ARM_CPU(obj);
cpu->dtb_compatible = "arm,cortex-a57";
+ t = cpu->isar.id_aa64mmfr1;
+ t = FIELD_DP64(t, ID_AA64MMFR1, HCX, 1); /* FEAT_HCX */
+ cpu->isar.id_aa64mmfr1 = t;
+ aarch64_add_nmi_properties(obj);
set_feature(&cpu->env, ARM_FEATURE_V8);
set_feature(&cpu->env, ARM_FEATURE_NEON);
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
@@ -650,9 +676,14 @@ static void aarch64_a57_initfn(Object *obj)
static void aarch64_a53_initfn(Object *obj)
{
+ uint64_t t;
ARMCPU *cpu = ARM_CPU(obj);
cpu->dtb_compatible = "arm,cortex-a53";
+ t = cpu->isar.id_aa64mmfr1;
+ t = FIELD_DP64(t, ID_AA64MMFR1, HCX, 1); /* FEAT_HCX */
+ cpu->isar.id_aa64mmfr1 = t;
+ aarch64_add_nmi_properties(obj);
set_feature(&cpu->env, ARM_FEATURE_V8);
set_feature(&cpu->env, ARM_FEATURE_NEON);
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 90c4fb72ce..1194e1e2db 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -4618,6 +4618,28 @@ static void aa64_daif_write(CPUARMState *env, const ARMCPRegInfo *ri,
env->daif = value & PSTATE_DAIF;
}
+static void aa64_allint_write(CPUARMState *env, const ARMCPRegInfo *ri,
+ uint64_t value)
+{
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+ env->allint = value & PSTATE_ALLINT;
+ }
+}
+
+static CPAccessResult aa64_allint_access(CPUARMState *env,
+ const ARMCPRegInfo *ri, bool isread)
+{
+ if (arm_current_el(env) == 0) {
+ return CP_ACCESS_TRAP_UNCATEGORIZED;
+ }
+
+ if (arm_current_el(env) == 1 && arm_is_el2_enabled(env) &&
+ cpu_isar_feature(aa64_hcx, env_archcpu(env)) &&
+ (env->cp15.hcrx_el2 & HCRX_TALLINT))
+ return CP_ACCESS_TRAP_EL2;
+ return CP_ACCESS_OK;
+}
+
static uint64_t aa64_pan_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
return env->pstate & PSTATE_PAN;
@@ -5437,6 +5459,12 @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
.access = PL0_RW, .accessfn = aa64_daif_access,
.fieldoffset = offsetof(CPUARMState, daif),
.writefn = aa64_daif_write, .resetfn = arm_cp_reset_ignore },
+ { .name = "ALLINT", .state = ARM_CP_STATE_AA64,
+ .opc0 = 3, .opc1 = 0, .opc2 = 0, .crn = 4, .crm = 3,
+ .type = ARM_CP_NO_RAW,
+ .access = PL1_RW, .accessfn = aa64_allint_access,
+ .fieldoffset = offsetof(CPUARMState, allint),
+ .writefn = aa64_allint_write, .resetfn = arm_cp_reset_ignore },
{ .name = "FPCR", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 4,
.access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END,
@@ -6056,6 +6084,11 @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
valid_mask |= HCRX_MSCEN | HCRX_MCE2;
}
+ /* FEAT_NMI adds TALLINT */
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+ valid_mask |= HCRX_TALLINT;
+ }
+
/* Clear RES0 bits. */
env->cp15.hcrx_el2 = value & valid_mask;
}
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 50bff44549..2b9f287c52 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1466,6 +1466,7 @@ void aarch64_max_tcg_initfn(Object *obj);
void aarch64_add_pauth_properties(Object *obj);
void aarch64_add_sve_properties(Object *obj);
void aarch64_add_sme_properties(Object *obj);
+void aarch64_add_nmi_properties(Object *obj);
#endif
/* Read the CONTROL register as the MRS instruction would. */
diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode
index 8a20dce3c8..3588080024 100644
--- a/target/arm/tcg/a64.decode
+++ b/target/arm/tcg/a64.decode
@@ -207,6 +207,7 @@ MSR_i_DIT 1101 0101 0000 0 011 0100 .... 010 11111 @msr_i
MSR_i_TCO 1101 0101 0000 0 011 0100 .... 100 11111 @msr_i
MSR_i_DAIFSET 1101 0101 0000 0 011 0100 .... 110 11111 @msr_i
MSR_i_DAIFCLEAR 1101 0101 0000 0 011 0100 .... 111 11111 @msr_i
+MSR_i_ALLINT 1101 0101 0000 0 001 0100 .... 000 11111 @msr_i
MSR_i_SVCR 1101 0101 0000 0 011 0100 0 mask:2 imm:1 011 11111
# MRS, MSR (register), SYS, SYSL. These are all essentially the
diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c
index 5fba2c0f04..e08eb0ce94 100644
--- a/target/arm/tcg/cpu64.c
+++ b/target/arm/tcg/cpu64.c
@@ -293,9 +293,14 @@ static void aarch64_a55_initfn(Object *obj)
static void aarch64_a72_initfn(Object *obj)
{
+ uint64_t t;
ARMCPU *cpu = ARM_CPU(obj);
cpu->dtb_compatible = "arm,cortex-a72";
+ t = cpu->isar.id_aa64mmfr1;
+ t = FIELD_DP64(t, ID_AA64MMFR1, HCX, 1); /* FEAT_HCX */
+ cpu->isar.id_aa64mmfr1 = t;
+ aarch64_add_nmi_properties(obj);
set_feature(&cpu->env, ARM_FEATURE_V8);
set_feature(&cpu->env, ARM_FEATURE_NEON);
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
@@ -348,9 +353,14 @@ static void aarch64_a72_initfn(Object *obj)
static void aarch64_a76_initfn(Object *obj)
{
+ uint64_t t;
ARMCPU *cpu = ARM_CPU(obj);
cpu->dtb_compatible = "arm,cortex-a76";
+ t = cpu->isar.id_aa64mmfr1;
+ t = FIELD_DP64(t, ID_AA64MMFR1, HCX, 1); /* FEAT_HCX */
+ cpu->isar.id_aa64mmfr1 = t;
+ aarch64_add_nmi_properties(obj);
set_feature(&cpu->env, ARM_FEATURE_V8);
set_feature(&cpu->env, ARM_FEATURE_NEON);
set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
@@ -1175,6 +1185,7 @@ void aarch64_max_tcg_initfn(Object *obj)
t = FIELD_DP64(t, ID_AA64PFR1, RAS_FRAC, 0); /* FEAT_RASv1p1 + FEAT_DoubleFault */
t = FIELD_DP64(t, ID_AA64PFR1, SME, 1); /* FEAT_SME */
t = FIELD_DP64(t, ID_AA64PFR1, CSV2_FRAC, 0); /* FEAT_CSV2_2 */
+ t = FIELD_DP64(t, ID_AA64PFR1, NMI, 0); /* FEAT_NMI */
cpu->isar.id_aa64pfr1 = t;
t = cpu->isar.id_aa64mmfr0;
diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c
index ebaa7f00df..9b2a7cd891 100644
--- a/target/arm/tcg/helper-a64.c
+++ b/target/arm/tcg/helper-a64.c
@@ -66,6 +66,31 @@ void HELPER(msr_i_spsel)(CPUARMState *env, uint32_t imm)
update_spsel(env, imm);
}
+static void allint_check(CPUARMState *env, uint32_t op,
+ uint32_t imm, uintptr_t ra)
+{
+ /* ALLINT update to PSTATE. */
+ if (arm_current_el(env) == 0) {
+ raise_exception_ra(env, EXCP_UDEF,
+ syn_aa64_sysregtrap(0, extract32(op, 0, 3),
+ extract32(op, 3, 3), 4,
+ imm, 0x1f, 0),
+ exception_target_el(env), ra);
+ }
+ /* todo */
+}
+
+void HELPER(msr_i_allint)(CPUARMState *env, uint32_t imm)
+{
+ allint_check(env, 0x8, imm, GETPC());
+ if (imm == 1) {
+ env->allint |= PSTATE_ALLINT;
+ } else {
+ env->allint &= ~PSTATE_ALLINT;
+ }
+ arm_rebuild_hflags(env);
+}
+
static void daif_check(CPUARMState *env, uint32_t op,
uint32_t imm, uintptr_t ra)
{
diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h
index 575a5dab7d..3aec703d4a 100644
--- a/target/arm/tcg/helper-a64.h
+++ b/target/arm/tcg/helper-a64.h
@@ -22,6 +22,7 @@ DEF_HELPER_FLAGS_1(rbit64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_2(msr_i_spsel, void, env, i32)
DEF_HELPER_2(msr_i_daifset, void, env, i32)
DEF_HELPER_2(msr_i_daifclear, void, env, i32)
+DEF_HELPER_2(msr_i_allint, void, env, i32)
DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 340265beb0..f1800f7c71 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -2036,6 +2036,16 @@ static bool trans_MSR_i_DAIFCLEAR(DisasContext *s, arg_i *a)
return true;
}
+static bool trans_MSR_i_ALLINT(DisasContext *s, arg_i *a)
+{
+ if (!dc_isar_feature(aa64_nmi, s) || s->current_el == 0) {
+ return false;
+ }
+ gen_helper_msr_i_allint(tcg_env, tcg_constant_i32(a->imm));
+ s->base.is_jmp = DISAS_TOO_MANY;
+ return true;
+}
+
static bool trans_MSR_i_SVCR(DisasContext *s, arg_MSR_i_SVCR *a)
{
if (!dc_isar_feature(aa64_sme, s) || a->mask == 0) {
--
2.34.1
* [RFC PATCH 2/3] target/arm: Add NMI exception and handle PSTATE.ALLINT on taking an exception
2024-02-20 12:17 [RFC PATCH 0/3] target/arm: Implement FEAT_NMI and FEAT_GICv3_NMI Jinjie Ruan via
2024-02-20 12:17 ` [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt Jinjie Ruan via
@ 2024-02-20 12:17 ` Jinjie Ruan via
2024-02-20 12:17 ` [RFC PATCH 3/3] hw/intc/arm_gicv3: Implement FEAT_GICv3_NMI feature to support FEAT_NMI Jinjie Ruan via
2 siblings, 0 replies; 6+ messages in thread
From: Jinjie Ruan via @ 2024-02-20 12:17 UTC (permalink / raw)
To: peter.maydell, eduardo, marcel.apfelbaum, philmd, wangyanan55,
qemu-devel, qemu-arm
Cc: ruanjinjie
Add a new NMI exception type for the Arm PE.
Set or clear PSTATE.ALLINT on taking an exception to ELx according to the
SCTLR_ELx.SPINTMASK bit.
Mask IRQ/FIQ/NMI with ALLINT and PSTATE.SP & SCTLR_SPINTMASK in addition
to PSTATE.DAIF.
Save PSTATE.ALLINT to SPSR_ELx on taking an exception to ELx, and restore
it from SPSR_ELx on exception return in ELx.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
target/arm/cpu-qom.h | 1 +
target/arm/cpu.c | 43 +++++++++++++++++++++++++++++++++++++-----
target/arm/cpu.h | 10 ++++++++--
target/arm/helper.c | 26 ++++++++++++++++++++++++-
target/arm/internals.h | 3 +++
5 files changed, 75 insertions(+), 8 deletions(-)
diff --git a/target/arm/cpu-qom.h b/target/arm/cpu-qom.h
index 8e032691db..5a7f876bf8 100644
--- a/target/arm/cpu-qom.h
+++ b/target/arm/cpu-qom.h
@@ -41,6 +41,7 @@ DECLARE_CLASS_CHECKERS(AArch64CPUClass, AARCH64_CPU,
#define ARM_CPU_FIQ 1
#define ARM_CPU_VIRQ 2
#define ARM_CPU_VFIQ 3
+#define ARM_CPU_NMI 4
/* For M profile, some registers are banked secure vs non-secure;
* these are represented as a 2-element array where the first element
diff --git a/target/arm/cpu.c b/target/arm/cpu.c
index 5fa86bc8d5..947efa76c1 100644
--- a/target/arm/cpu.c
+++ b/target/arm/cpu.c
@@ -128,7 +128,7 @@ static bool arm_cpu_has_work(CPUState *cs)
return (cpu->power_state != PSCI_OFF)
&& cs->interrupt_request &
- (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD
+ (CPU_INTERRUPT_FIQ | CPU_INTERRUPT_HARD | CPU_INTERRUPT_NMI
| CPU_INTERRUPT_VFIQ | CPU_INTERRUPT_VIRQ | CPU_INTERRUPT_VSERR
| CPU_INTERRUPT_EXITTB);
}
@@ -357,6 +357,10 @@ static void arm_cpu_reset_hold(Object *obj)
}
env->daif = PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F;
+ if (cpu_isar_feature(aa64_nmi, cpu)) {
+ env->allint = PSTATE_ALLINT;
+ }
+
/* AArch32 has a hard highvec setting of 0xFFFF0000. If we are currently
* executing as AArch32 then check if highvecs are enabled and
* adjust the PC accordingly.
@@ -668,6 +672,7 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
CPUARMState *env = cpu_env(cs);
bool pstate_unmasked;
bool unmasked = false;
+ bool nmi_unmasked = false;
/*
* Don't take exceptions if they target a lower EL.
@@ -678,13 +683,29 @@ static inline bool arm_excp_unmasked(CPUState *cs, unsigned int excp_idx,
return false;
}
+ nmi_unmasked = (!(env->allint & PSTATE_ALLINT)) &
+ (!((env->cp15.sctlr_el[target_el] & SCTLR_SPINTMASK) &&
+ (env->pstate & PSTATE_SP) && cur_el == target_el));
+
switch (excp_idx) {
+ case EXCP_NMI:
+ pstate_unmasked = nmi_unmasked;
+ break;
+
case EXCP_FIQ:
- pstate_unmasked = !(env->daif & PSTATE_F);
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+ pstate_unmasked = (!(env->daif & PSTATE_F)) & nmi_unmasked;
+ } else {
+ pstate_unmasked = !(env->daif & PSTATE_F);
+ }
break;
case EXCP_IRQ:
- pstate_unmasked = !(env->daif & PSTATE_I);
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+ pstate_unmasked = (!(env->daif & PSTATE_I)) & nmi_unmasked;
+ } else {
+ pstate_unmasked = !(env->daif & PSTATE_I);
+ }
break;
case EXCP_VFIQ:
@@ -804,6 +825,16 @@ static bool arm_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
/* The prioritization of interrupts is IMPLEMENTATION DEFINED. */
+ if (cpu_isar_feature(aa64_nmi, env_archcpu(env))) {
+ if (interrupt_request & CPU_INTERRUPT_NMI) {
+ excp_idx = EXCP_NMI;
+ target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
+ if (arm_excp_unmasked(cs, excp_idx, target_el,
+ cur_el, secure, hcr_el2)) {
+ goto found;
+ }
+ }
+ }
if (interrupt_request & CPU_INTERRUPT_FIQ) {
excp_idx = EXCP_FIQ;
target_el = arm_phys_excp_target_el(cs, excp_idx, cur_el, secure);
@@ -929,7 +960,8 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
[ARM_CPU_IRQ] = CPU_INTERRUPT_HARD,
[ARM_CPU_FIQ] = CPU_INTERRUPT_FIQ,
[ARM_CPU_VIRQ] = CPU_INTERRUPT_VIRQ,
- [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ
+ [ARM_CPU_VFIQ] = CPU_INTERRUPT_VFIQ,
+ [ARM_CPU_NMI] = CPU_INTERRUPT_NMI
};
if (!arm_feature(env, ARM_FEATURE_EL2) &&
@@ -957,6 +989,7 @@ static void arm_cpu_set_irq(void *opaque, int irq, int level)
break;
case ARM_CPU_IRQ:
case ARM_CPU_FIQ:
+ case ARM_CPU_NMI:
if (level) {
cpu_interrupt(cs, mask[irq]);
} else {
@@ -1355,7 +1388,7 @@ static void arm_cpu_initfn(Object *obj)
*/
qdev_init_gpio_in(DEVICE(cpu), arm_cpu_kvm_set_irq, 4);
} else {
- qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 4);
+ qdev_init_gpio_in(DEVICE(cpu), arm_cpu_set_irq, 5);
}
qdev_init_gpio_out(DEVICE(cpu), cpu->gt_timer_outputs,
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index ea6e8d6501..b6af1380d3 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -60,6 +60,7 @@
#define EXCP_DIVBYZERO 23 /* v7M DIVBYZERO UsageFault */
#define EXCP_VSERR 24
#define EXCP_GPC 25 /* v9 Granule Protection Check Fault */
+#define EXCP_NMI 26
/* NB: add new EXCP_ defines to the array in arm_log_exception() too */
#define ARMV7M_EXCP_RESET 1
@@ -79,6 +80,7 @@
#define CPU_INTERRUPT_VIRQ CPU_INTERRUPT_TGT_EXT_2
#define CPU_INTERRUPT_VFIQ CPU_INTERRUPT_TGT_EXT_3
#define CPU_INTERRUPT_VSERR CPU_INTERRUPT_TGT_INT_0
+#define CPU_INTERRUPT_NMI CPU_INTERRUPT_TGT_EXT_4
/* The usual mapping for an AArch64 system register to its AArch32
* counterpart is for the 32 bit world to have access to the lower
@@ -1471,6 +1473,8 @@ FIELD(CPTR_EL3, TCPAC, 31, 1)
#define CPSR_N (1U << 31)
#define CPSR_NZCV (CPSR_N | CPSR_Z | CPSR_C | CPSR_V)
#define CPSR_AIF (CPSR_A | CPSR_I | CPSR_F)
+#define ISR_FS (1U << 9)
+#define ISR_IS (1U << 10)
#define CPSR_IT (CPSR_IT_0_1 | CPSR_IT_2_7)
#define CACHED_CPSR_BITS (CPSR_T | CPSR_AIF | CPSR_GE | CPSR_IT | CPSR_Q \
@@ -1557,7 +1561,8 @@ FIELD(VTCR, SL2, 33, 1)
#define PSTATE_N (1U << 31)
#define PSTATE_NZCV (PSTATE_N | PSTATE_Z | PSTATE_C | PSTATE_V)
#define PSTATE_DAIF (PSTATE_D | PSTATE_A | PSTATE_I | PSTATE_F)
-#define CACHED_PSTATE_BITS (PSTATE_NZCV | PSTATE_DAIF | PSTATE_BTYPE)
+#define CACHED_PSTATE_BITS (PSTATE_NZCV | PSTATE_DAIF | PSTATE_ALLINT | \
+ PSTATE_BTYPE)
/* Mode values for AArch64 */
#define PSTATE_MODE_EL3h 13
#define PSTATE_MODE_EL3t 12
@@ -1597,7 +1602,7 @@ static inline uint32_t pstate_read(CPUARMState *env)
ZF = (env->ZF == 0);
return (env->NF & 0x80000000) | (ZF << 30)
| (env->CF << 29) | ((env->VF & 0x80000000) >> 3)
- | env->pstate | env->daif | (env->btype << 10);
+ | env->pstate | env->allint | env->daif | (env->btype << 10);
}
static inline void pstate_write(CPUARMState *env, uint32_t val)
@@ -1607,6 +1612,7 @@ static inline void pstate_write(CPUARMState *env, uint32_t val)
env->CF = (val >> 29) & 1;
env->VF = (val << 3) & 0x80000000;
env->daif = val & PSTATE_DAIF;
+ env->allint = val & PSTATE_ALLINT;
env->btype = (val >> 10) & 3;
env->pstate = val & ~CACHED_PSTATE_BITS;
}
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 1194e1e2db..8d525c6b82 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2022,6 +2022,10 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
if (cs->interrupt_request & CPU_INTERRUPT_HARD) {
ret |= CPSR_I;
}
+
+ if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+ ret |= ISR_IS;
+ }
}
if (hcr_el2 & HCR_FMO) {
@@ -2032,6 +2036,10 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
if (cs->interrupt_request & CPU_INTERRUPT_FIQ) {
ret |= CPSR_F;
}
+
+ if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+ ret |= ISR_FS;
+ }
}
if (hcr_el2 & HCR_AMO) {
@@ -4626,6 +4634,11 @@ static void aa64_allint_write(CPUARMState *env, const ARMCPRegInfo *ri,
}
}
+static uint64_t aa64_allint_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ return env->allint & PSTATE_ALLINT;
+}
+
static CPAccessResult aa64_allint_access(CPUARMState *env,
const ARMCPRegInfo *ri, bool isread)
{
@@ -5464,7 +5477,8 @@ static const ARMCPRegInfo v8_cp_reginfo[] = {
.type = ARM_CP_NO_RAW,
.access = PL1_RW, .accessfn = aa64_allint_access,
.fieldoffset = offsetof(CPUARMState, allint),
- .writefn = aa64_allint_write, .resetfn = arm_cp_reset_ignore },
+ .writefn = aa64_allint_write, .readfn = aa64_allint_read,
+ .resetfn = arm_cp_reset_ignore },
{ .name = "FPCR", .state = ARM_CP_STATE_AA64,
.opc0 = 3, .opc1 = 3, .opc2 = 0, .crn = 4, .crm = 4,
.access = PL0_RW, .type = ARM_CP_FPU | ARM_CP_SUPPRESS_TB_END,
@@ -10622,6 +10636,7 @@ void arm_log_exception(CPUState *cs)
[EXCP_DIVBYZERO] = "v7M DIVBYZERO UsageFault",
[EXCP_VSERR] = "Virtual SERR",
[EXCP_GPC] = "Granule Protection Check",
+ [EXCP_NMI] = "NMI"
};
if (idx >= 0 && idx < ARRAY_SIZE(excnames)) {
@@ -11517,6 +11532,15 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
}
}
+ if (cpu_isar_feature(aa64_nmi, cpu) &&
+ (env->cp15.sctlr_el[new_el] & SCTLR_NMI)) {
+ if (!(env->cp15.sctlr_el[new_el] & SCTLR_SPINTMASK)) {
+ new_mode |= PSTATE_ALLINT;
+ } else {
+ new_mode &= ~PSTATE_ALLINT;
+ }
+ }
+
pstate_write(env, PSTATE_DAIF | new_mode);
env->aarch64 = true;
aarch64_restore_sp(env, new_el);
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 2b9f287c52..28894ba6ea 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1078,6 +1078,9 @@ static inline uint32_t aarch64_pstate_valid_mask(const ARMISARegisters *id)
if (isar_feature_aa64_mte(id)) {
valid |= PSTATE_TCO;
}
+ if (isar_feature_aa64_nmi(id)) {
+ valid |= PSTATE_ALLINT;
+ }
return valid;
}
--
2.34.1
* [RFC PATCH 3/3] hw/intc/arm_gicv3: Implement FEAT_GICv3_NMI feature to support FEAT_NMI
2024-02-20 12:17 [RFC PATCH 0/3] target/arm: Implement FEAT_NMI and FEAT_GICv3_NMI Jinjie Ruan via
2024-02-20 12:17 ` [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt Jinjie Ruan via
2024-02-20 12:17 ` [RFC PATCH 2/3] target/arm: Add NMI exception and handle PSTATE.ALLINT on taking an exception Jinjie Ruan via
@ 2024-02-20 12:17 ` Jinjie Ruan via
2 siblings, 0 replies; 6+ messages in thread
From: Jinjie Ruan via @ 2024-02-20 12:17 UTC (permalink / raw)
To: peter.maydell, eduardo, marcel.apfelbaum, philmd, wangyanan55,
qemu-devel, qemu-arm
Cc: ruanjinjie
Connect the NMI wire from the GICv3 to every Arm PE:
+-------------------------------------------------+
| Distributor |
+-------------------------------------------------+
| NMI | NMI
\ / \ /
+--------+ +-------+
| redist | | redist|
+--------+ +-------+
| NMI | NMI
\ / \ /
+-------------+ +---------------+
|CPU interface| ... | CPU interface |
+-------------+ +---------------+
| NMI | NMI
\ / \ /
+-----+ +-----+
| PE | | PE |
+-----+ +-----+
Support the superpriority property for SPI/SGI/PPI interrupts, so that
such interrupts are delivered as NMIs with high priority.
Also support configuring an interrupt's superpriority via the GICR_INMIR0
and GICD_INMIR registers.
Support acknowledging an NMI with the ICC_NMIAR1_EL1 register, and allow
the PE to distinguish IRQ from FIQ for an NMI.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
hw/arm/virt.c | 2 +
hw/intc/arm_gicv3.c | 61 ++++++++++++++++++++++++++----
hw/intc/arm_gicv3_common.c | 4 ++
hw/intc/arm_gicv3_cpuif.c | 57 ++++++++++++++++++++++++++--
hw/intc/arm_gicv3_dist.c | 39 +++++++++++++++++++
hw/intc/arm_gicv3_redist.c | 23 +++++++++++
hw/intc/gicv3_internal.h | 5 +++
include/hw/core/cpu.h | 1 +
include/hw/intc/arm_gic_common.h | 1 +
include/hw/intc/arm_gicv3_common.h | 6 +++
target/arm/helper.c | 5 ++-
11 files changed, 191 insertions(+), 13 deletions(-)
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 0af1943697..5f2683a553 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -848,6 +848,8 @@ static void create_gic(VirtMachineState *vms, MemoryRegion *mem)
qdev_get_gpio_in(cpudev, ARM_CPU_VIRQ));
sysbus_connect_irq(gicbusdev, i + 3 * smp_cpus,
qdev_get_gpio_in(cpudev, ARM_CPU_VFIQ));
+ sysbus_connect_irq(gicbusdev, i + 4 * smp_cpus,
+ qdev_get_gpio_in(cpudev, ARM_CPU_NMI));
}
fdt_add_gic_node(vms);
diff --git a/hw/intc/arm_gicv3.c b/hw/intc/arm_gicv3.c
index 0b8f79a122..e3281cffdc 100644
--- a/hw/intc/arm_gicv3.c
+++ b/hw/intc/arm_gicv3.c
@@ -21,7 +21,7 @@
#include "hw/intc/arm_gicv3.h"
#include "gicv3_internal.h"
-static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
+static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio, bool is_nmi)
{
/* Return true if this IRQ at this priority should take
* precedence over the current recorded highest priority
@@ -33,11 +33,21 @@ static bool irqbetter(GICv3CPUState *cs, int irq, uint8_t prio)
if (prio < cs->hppi.prio) {
return true;
}
+
+ /*
+ * Current highest priority pending interrupt is not an NMI
+ * and the new IRQ is an NMI with the same priority.
+ */
+ if (prio == cs->hppi.prio && !cs->hppi.superprio && is_nmi) {
+ return true;
+ }
+
/* If multiple pending interrupts have the same priority then it is an
* IMPDEF choice which of them to signal to the CPU. We choose to
* signal the one with the lowest interrupt number.
*/
- if (prio == cs->hppi.prio && irq <= cs->hppi.irq) {
+ if (prio == cs->hppi.prio && !cs->hppi.superprio &&
+ !is_nmi && irq <= cs->hppi.irq) {
return true;
}
return false;
@@ -141,6 +151,8 @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
uint8_t prio;
int i;
uint32_t pend;
+ bool is_nmi = 0;
+ uint32_t superprio = 0;
/* Find out which redistributor interrupts are eligible to be
* signaled to the CPU interface.
@@ -152,10 +164,26 @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
if (!(pend & (1 << i))) {
continue;
}
- prio = cs->gicr_ipriorityr[i];
- if (irqbetter(cs, i, prio)) {
+ superprio = extract32(cs->gicr_isuperprio, i, 1);
+
+ /* NMI */
+ if (superprio) {
+ is_nmi = 1;
+
+ /* DS = 0 & Non-secure NMI */
+ if ((!(cs->gic->gicd_ctlr & GICD_CTLR_DS)) &&
+ extract32(cs->gicr_igroupr0, i, 1))
+ prio = 0x80;
+ else
+ prio = 0x0;
+ } else {
+ is_nmi = 0;
+ prio = cs->gicr_ipriorityr[i];
+ }
+ if (irqbetter(cs, i, prio, is_nmi)) {
cs->hppi.irq = i;
cs->hppi.prio = prio;
+ cs->hppi.superprio = is_nmi;
seenbetter = true;
}
}
@@ -168,7 +196,7 @@ static void gicv3_redist_update_noirqset(GICv3CPUState *cs)
if ((cs->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) && cs->gic->lpi_enable &&
(cs->gic->gicd_ctlr & GICD_CTLR_EN_GRP1NS) &&
(cs->hpplpi.prio != 0xff)) {
- if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio)) {
+ if (irqbetter(cs, cs->hpplpi.irq, cs->hpplpi.prio, false)) {
cs->hppi.irq = cs->hpplpi.irq;
cs->hppi.prio = cs->hpplpi.prio;
cs->hppi.grp = cs->hpplpi.grp;
@@ -212,7 +240,9 @@ static void gicv3_update_noirqset(GICv3State *s, int start, int len)
{
int i;
uint8_t prio;
+ bool is_nmi = 0;
uint32_t pend = 0;
+ uint32_t superprio = 0;
assert(start >= GIC_INTERNAL);
assert(len > 0);
@@ -240,10 +270,27 @@ static void gicv3_update_noirqset(GICv3State *s, int start, int len)
*/
continue;
}
- prio = s->gicd_ipriority[i];
- if (irqbetter(cs, i, prio)) {
+
+ superprio = *gic_bmp_ptr32(s->superprio, i);
+ /* NMI */
+ if (superprio & (1 << (i & 0x1f))) {
+ is_nmi = 1;
+
+ /* DS = 0 & Non-secure NMI */
+ if ((!(s->gicd_ctlr & GICD_CTLR_DS)) &&
+ gicv3_gicd_group_test(s, i))
+ prio = 0x80;
+ else
+ prio = 0x0;
+ } else {
+ is_nmi = 0;
+ prio = s->gicd_ipriority[i];
+ }
+
+ if (irqbetter(cs, i, prio, is_nmi)) {
cs->hppi.irq = i;
cs->hppi.prio = prio;
+ cs->hppi.superprio = is_nmi;
cs->seenbetter = true;
}
}
diff --git a/hw/intc/arm_gicv3_common.c b/hw/intc/arm_gicv3_common.c
index cb55c72681..4a56140f4c 100644
--- a/hw/intc/arm_gicv3_common.c
+++ b/hw/intc/arm_gicv3_common.c
@@ -299,6 +299,9 @@ void gicv3_init_irqs_and_mmio(GICv3State *s, qemu_irq_handler handler,
for (i = 0; i < s->num_cpu; i++) {
sysbus_init_irq(sbd, &s->cpu[i].parent_vfiq);
}
+ for (i = 0; i < s->num_cpu; i++) {
+ sysbus_init_irq(sbd, &s->cpu[i].parent_nmi);
+ }
memory_region_init_io(&s->iomem_dist, OBJECT(s), ops, s,
"gicv3_dist", 0x10000);
@@ -563,6 +566,7 @@ static Property arm_gicv3_common_properties[] = {
DEFINE_PROP_UINT32("num-irq", GICv3State, num_irq, 32),
DEFINE_PROP_UINT32("revision", GICv3State, revision, 3),
DEFINE_PROP_BOOL("has-lpi", GICv3State, lpi_enable, 0),
+ DEFINE_PROP_BOOL("has-nmi", GICv3State, nmi_support, 1),
DEFINE_PROP_BOOL("has-security-extensions", GICv3State, security_extn, 0),
/*
* Compatibility property: force 8 bits of physical priority, even
diff --git a/hw/intc/arm_gicv3_cpuif.c b/hw/intc/arm_gicv3_cpuif.c
index e1a60d8c15..9b0ee0cfec 100644
--- a/hw/intc/arm_gicv3_cpuif.c
+++ b/hw/intc/arm_gicv3_cpuif.c
@@ -931,6 +931,7 @@ void gicv3_cpuif_update(GICv3CPUState *cs)
/* Tell the CPU about its highest priority pending interrupt */
int irqlevel = 0;
int fiqlevel = 0;
+ int nmilevel = 0;
ARMCPU *cpu = ARM_CPU(cs->cpu);
CPUARMState *env = &cpu->env;
@@ -967,7 +968,14 @@ void gicv3_cpuif_update(GICv3CPUState *cs)
g_assert_not_reached();
}
- if (isfiq) {
+ if (cs->hppi.superprio) {
+ nmilevel = 1;
+ if (isfiq) {
+ cs->cpu->nmi_irq = false;
+ } else {
+ cs->cpu->nmi_irq = true;
+ }
+ } else if (isfiq) {
fiqlevel = 1;
} else {
irqlevel = 1;
@@ -978,6 +986,7 @@ void gicv3_cpuif_update(GICv3CPUState *cs)
qemu_set_irq(cs->parent_fiq, fiqlevel);
qemu_set_irq(cs->parent_irq, irqlevel);
+ qemu_set_irq(cs->parent_nmi, nmilevel);
}
static uint64_t icc_pmr_read(CPUARMState *env, const ARMCPRegInfo *ri)
@@ -1097,7 +1106,8 @@ static uint64_t icc_hppir0_value(GICv3CPUState *cs, CPUARMState *env)
return cs->hppi.irq;
}
-static uint64_t icc_hppir1_value(GICv3CPUState *cs, CPUARMState *env)
+static uint64_t icc_hppir1_value(GICv3CPUState *cs, CPUARMState *env,
+ bool is_nmi, bool is_hppi)
{
/* Return the highest priority pending interrupt register value
* for group 1.
@@ -1108,6 +1118,16 @@ static uint64_t icc_hppir1_value(GICv3CPUState *cs, CPUARMState *env)
return INTID_SPURIOUS;
}
+ if (!is_hppi) {
+ if (is_nmi && (!cs->hppi.superprio)) {
+ return INTID_SPURIOUS;
+ }
+
+ if ((!is_nmi) && cs->hppi.superprio) {
+ return INTID_NMI;
+ }
+ }
+
/* Check whether we can return the interrupt or if we should return
* a special identifier, as per the CheckGroup1ForSpecialIdentifiers
* pseudocode. (We can simplify a little because for us ICC_SRE_EL1.RM
@@ -1168,7 +1188,30 @@ static uint64_t icc_iar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
if (!icc_hppi_can_preempt(cs)) {
intid = INTID_SPURIOUS;
} else {
- intid = icc_hppir1_value(cs, env);
+ intid = icc_hppir1_value(cs, env, false, false);
+ }
+
+ if (!gicv3_intid_is_special(intid)) {
+ icc_activate_irq(cs, intid);
+ }
+
+ trace_gicv3_icc_iar1_read(gicv3_redist_affid(cs), intid);
+ return intid;
+}
+
+static uint64_t icc_nmiar1_read(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+ GICv3CPUState *cs = icc_cs_from_env(env);
+ uint64_t intid;
+
+ if (icv_access(env, HCR_IMO)) {
+ return icv_iar_read(env, ri);
+ }
+
+ if (!icc_hppi_can_preempt(cs)) {
+ intid = INTID_SPURIOUS;
+ } else {
+ intid = icc_hppir1_value(cs, env, true, false);
}
if (!gicv3_intid_is_special(intid)) {
@@ -1555,7 +1598,7 @@ static uint64_t icc_hppir1_read(CPUARMState *env, const ARMCPRegInfo *ri)
return icv_hppir_read(env, ri);
}
- value = icc_hppir1_value(cs, env);
+ value = icc_hppir1_value(cs, env, false, true);
trace_gicv3_icc_hppir1_read(gicv3_redist_affid(cs), value);
return value;
}
@@ -2344,6 +2387,12 @@ static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
.access = PL1_R, .accessfn = gicv3_irq_access,
.readfn = icc_iar1_read,
},
+ { .name = "ICC_NMIAR1_EL1", .state = ARM_CP_STATE_BOTH,
+ .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 9, .opc2 = 5,
+ .type = ARM_CP_IO | ARM_CP_NO_RAW,
+ .access = PL1_R, .accessfn = gicv3_irq_access,
+ .readfn = icc_nmiar1_read,
+ },
{ .name = "ICC_EOIR1_EL1", .state = ARM_CP_STATE_BOTH,
.opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 1,
.type = ARM_CP_IO | ARM_CP_NO_RAW,
diff --git a/hw/intc/arm_gicv3_dist.c b/hw/intc/arm_gicv3_dist.c
index 35e850685c..a234df44dd 100644
--- a/hw/intc/arm_gicv3_dist.c
+++ b/hw/intc/arm_gicv3_dist.c
@@ -89,6 +89,29 @@ static int gicd_ns_access(GICv3State *s, int irq)
return extract32(s->gicd_nsacr[irq / 16], (irq % 16) * 2, 2);
}
+static void gicd_write_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
+ uint32_t *bmp, maskfn *maskfn,
+ int offset, uint32_t val)
+{
+ /*
+ * Helper routine to implement writing to a "set" register
+ * (GICD_INMIR, etc).
+ * Semantics implemented here:
+ * RAZ/WI for SGIs, PPIs, unimplemented IRQs
+ * Bits corresponding to Group 0 or Secure Group 1 interrupts RAZ/WI.
+ * offset should be the offset in bytes of the register from the start
+ * of its group.
+ */
+ int irq = offset * 8;
+
+ if (irq < GIC_INTERNAL || irq >= s->num_irq) {
+ return;
+ }
+ val &= mask_group_and_nsacr(s, attrs, maskfn, irq);
+ *gic_bmp_ptr32(bmp, irq) = val;
+ gicv3_update(s, irq, 32);
+}
+
static void gicd_write_set_bitmap_reg(GICv3State *s, MemTxAttrs attrs,
uint32_t *bmp,
maskfn *maskfn,
@@ -402,6 +425,7 @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
bool dvis = s->revision >= 4;
*data = (1 << 25) | (1 << 24) | (dvis << 18) | (sec_extn << 10) |
+ (s->nmi_support << GICD_TYPER_NMI_SHIFT) |
(s->lpi_enable << GICD_TYPER_LPIS_SHIFT) |
(0xf << 19) | itlinesnumber;
return true;
@@ -543,6 +567,14 @@ static bool gicd_readl(GICv3State *s, hwaddr offset,
/* RAZ/WI since affinity routing is always enabled */
*data = 0;
return true;
+ case GICD_INMIR ... GICD_INMIR + 0x7f:
+ if (!s->nmi_support) {
+ *data = 0;
+ return true;
+ }
+ *data = gicd_read_bitmap_reg(s, attrs, s->superprio, NULL,
+ offset - GICD_INMIR);
+ return true;
case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
{
uint64_t r;
@@ -752,6 +784,13 @@ static bool gicd_writel(GICv3State *s, hwaddr offset,
case GICD_SPENDSGIR ... GICD_SPENDSGIR + 0xf:
/* RAZ/WI since affinity routing is always enabled */
return true;
+ case GICD_INMIR ... GICD_INMIR + 0x7f:
+ if (!s->nmi_support) {
+ return true;
+ }
+ gicd_write_bitmap_reg(s, attrs, s->superprio, NULL,
+ offset - GICD_INMIR, value);
+ return true;
case GICD_IROUTER ... GICD_IROUTER + 0x1fdf:
{
uint64_t r;
diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
index 8153525849..87e7823f34 100644
--- a/hw/intc/arm_gicv3_redist.c
+++ b/hw/intc/arm_gicv3_redist.c
@@ -35,6 +35,15 @@ static int gicr_ns_access(GICv3CPUState *cs, int irq)
return extract32(cs->gicr_nsacr, irq * 2, 2);
}
+static void gicr_write_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
+ uint32_t *reg, uint32_t val)
+{
+ /* Helper routine to implement writing to a "set" register */
+ val &= mask_group(cs, attrs);
+ *reg = val;
+ gicv3_redist_update(cs);
+}
+
static void gicr_write_set_bitmap_reg(GICv3CPUState *cs, MemTxAttrs attrs,
uint32_t *reg, uint32_t val)
{
@@ -406,6 +415,13 @@ static MemTxResult gicr_readl(GICv3CPUState *cs, hwaddr offset,
*data = value;
return MEMTX_OK;
}
+ case GICR_INMIR0:
+ if (!cs->gic->nmi_support) {
+ *data = 0;
+ return MEMTX_OK;
+ }
+ *data = gicr_read_bitmap_reg(cs, attrs, cs->gicr_isuperprio);
+ return MEMTX_OK;
case GICR_ICFGR0:
case GICR_ICFGR1:
{
@@ -555,6 +571,13 @@ static MemTxResult gicr_writel(GICv3CPUState *cs, hwaddr offset,
gicv3_redist_update(cs);
return MEMTX_OK;
}
+ case GICR_INMIR0:
+ if (!cs->gic->nmi_support) {
+ return MEMTX_OK;
+ }
+ gicr_write_bitmap_reg(cs, attrs, &cs->gicr_isuperprio, value);
+ return MEMTX_OK;
+
case GICR_ICFGR0:
/* Register is all RAZ/WI or RAO/WI bits */
return MEMTX_OK;
diff --git a/hw/intc/gicv3_internal.h b/hw/intc/gicv3_internal.h
index 29d5cdc1b6..93e56b3726 100644
--- a/hw/intc/gicv3_internal.h
+++ b/hw/intc/gicv3_internal.h
@@ -52,6 +52,8 @@
#define GICD_SGIR 0x0F00
#define GICD_CPENDSGIR 0x0F10
#define GICD_SPENDSGIR 0x0F20
+#define GICD_INMIR 0x0F80
+#define GICD_INMIRnE 0x3B00
#define GICD_IROUTER 0x6000
#define GICD_IDREGS 0xFFD0
@@ -68,6 +70,7 @@
#define GICD_CTLR_E1NWF (1U << 7)
#define GICD_CTLR_RWP (1U << 31)
+#define GICD_TYPER_NMI_SHIFT 9
#define GICD_TYPER_LPIS_SHIFT 17
/* 16 bits EventId */
@@ -109,6 +112,7 @@
#define GICR_ICFGR1 (GICR_SGI_OFFSET + 0x0C04)
#define GICR_IGRPMODR0 (GICR_SGI_OFFSET + 0x0D00)
#define GICR_NSACR (GICR_SGI_OFFSET + 0x0E00)
+#define GICR_INMIR0 (GICR_SGI_OFFSET + 0x0F80)
/* VLPI redistributor registers, offsets from VLPI_base */
#define GICR_VPROPBASER (GICR_VLPI_OFFSET + 0x70)
@@ -507,6 +511,7 @@ FIELD(VTE, RDBASE, 42, RDBASE_PROCNUM_LENGTH)
/* Special interrupt IDs */
#define INTID_SECURE 1020
#define INTID_NONSECURE 1021
+#define INTID_NMI 1022
#define INTID_SPURIOUS 1023
/* Functions internal to the emulated GICv3 */
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 4385ce54c9..6c9dc25e5b 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -541,6 +541,7 @@ struct CPUState {
uint32_t tcg_cflags;
uint32_t halted;
int32_t exception_index;
+ bool nmi_irq;
AccelCPUState *accel;
/* shared by kvm and hvf */
diff --git a/include/hw/intc/arm_gic_common.h b/include/hw/intc/arm_gic_common.h
index 7080375008..140f531758 100644
--- a/include/hw/intc/arm_gic_common.h
+++ b/include/hw/intc/arm_gic_common.h
@@ -69,6 +69,7 @@ struct GICState {
qemu_irq parent_irq[GIC_NCPU];
qemu_irq parent_fiq[GIC_NCPU];
+ qemu_irq parent_nmi[GIC_NCPU];
qemu_irq parent_virq[GIC_NCPU];
qemu_irq parent_vfiq[GIC_NCPU];
qemu_irq maintenance_irq[GIC_NCPU];
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index 4e2fb518e7..8f9bcdfac9 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -146,6 +146,7 @@ typedef struct {
int irq;
uint8_t prio;
int grp;
+ bool superprio;
} PendingIrq;
struct GICv3CPUState {
@@ -155,6 +156,7 @@ struct GICv3CPUState {
qemu_irq parent_fiq;
qemu_irq parent_virq;
qemu_irq parent_vfiq;
+ qemu_irq parent_nmi;
/* Redistributor */
uint32_t level; /* Current IRQ level */
@@ -170,6 +172,7 @@ struct GICv3CPUState {
uint32_t gicr_ienabler0;
uint32_t gicr_ipendr0;
uint32_t gicr_iactiver0;
+ uint32_t gicr_isuperprio;
uint32_t edge_trigger; /* ICFGR0 and ICFGR1 even bits */
uint32_t gicr_igrpmodr0;
uint32_t gicr_nsacr;
@@ -247,6 +250,7 @@ struct GICv3State {
uint32_t num_irq;
uint32_t revision;
bool lpi_enable;
+ bool nmi_support;
bool security_extn;
bool force_8bit_prio;
bool irq_reset_nonsecure;
@@ -272,6 +276,7 @@ struct GICv3State {
GIC_DECLARE_BITMAP(active); /* GICD_ISACTIVER */
GIC_DECLARE_BITMAP(level); /* Current level */
GIC_DECLARE_BITMAP(edge_trigger); /* GICD_ICFGR even bits */
+ GIC_DECLARE_BITMAP(superprio); /* GICD_INMIR */
uint8_t gicd_ipriority[GICV3_MAXIRQ];
uint64_t gicd_irouter[GICV3_MAXIRQ];
/* Cached information: pointer to the cpu i/f for the CPUs specified
@@ -311,6 +316,7 @@ GICV3_BITMAP_ACCESSORS(pending)
GICV3_BITMAP_ACCESSORS(active)
GICV3_BITMAP_ACCESSORS(level)
GICV3_BITMAP_ACCESSORS(edge_trigger)
+GICV3_BITMAP_ACCESSORS(superprio)
#define TYPE_ARM_GICV3_COMMON "arm-gicv3-common"
typedef struct ARMGICv3CommonClass ARMGICv3CommonClass;
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 8d525c6b82..9bf0073840 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -2023,7 +2023,7 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
ret |= CPSR_I;
}
- if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+ if ((cs->interrupt_request & CPU_INTERRUPT_NMI) && cs->nmi_irq) {
ret |= ISR_IS;
}
}
@@ -2037,7 +2037,7 @@ static uint64_t isr_read(CPUARMState *env, const ARMCPRegInfo *ri)
ret |= CPSR_F;
}
- if (cs->interrupt_request & CPU_INTERRUPT_NMI) {
+ if ((cs->interrupt_request & CPU_INTERRUPT_NMI) && !cs->nmi_irq) {
ret |= ISR_FS;
}
}
@@ -11452,6 +11452,7 @@ static void arm_cpu_do_interrupt_aarch64(CPUState *cs)
break;
case EXCP_IRQ:
case EXCP_VIRQ:
+ case EXCP_NMI:
addr += 0x80;
break;
case EXCP_FIQ:
--
2.34.1
* Re: [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt
2024-02-20 12:17 ` [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt Jinjie Ruan via
@ 2024-02-20 12:31 ` Peter Maydell
2024-02-20 12:49 ` Jinjie Ruan via
0 siblings, 1 reply; 6+ messages in thread
From: Peter Maydell @ 2024-02-20 12:31 UTC (permalink / raw)
To: Jinjie Ruan
Cc: eduardo, marcel.apfelbaum, philmd, wangyanan55, qemu-devel,
qemu-arm
On Tue, 20 Feb 2024 at 12:19, Jinjie Ruan <ruanjinjie@huawei.com> wrote:
>
> Enable Non-maskable Interrupt feature.
>
> Enable HCRX register feature to support TALLINT read/write.
>
> Add support for enable/disable NMI at qemu startup as below:
>
> qemu-system-aarch64 -cpu cortex-a53/a57/a72/a76,nmi=[on/off]
>
> Add support for allint read/write as follow:
>
> mrs <xt>, ALLINT // read allint
> msr ALLINT, <xt> // write allint with imm
> msr ALLINT, #<imm> // write allint with 1 or 0
Can I ask you to break this patchset down into smaller
coherent pieces, please? When you write a commit message
that has this sort of "list of four different things the
patch does" structure, that's a sign that really it ought to
be multiple different patches that do one thing each.
Do we really need the command line option? Mostly we
don't add that for new CPU features, unless there's a
strong reason why users might need to turn it off: instead
we just implement it if the CPU type and/or the board has
the feature.
thanks
-- PMM
* Re: [RFC PATCH 1/3] target/arm: Implement FEAT_NMI to support Non-maskable Interrupt
2024-02-20 12:31 ` Peter Maydell
@ 2024-02-20 12:49 ` Jinjie Ruan via
0 siblings, 0 replies; 6+ messages in thread
From: Jinjie Ruan via @ 2024-02-20 12:49 UTC (permalink / raw)
To: Peter Maydell
Cc: eduardo, marcel.apfelbaum, philmd, wangyanan55, qemu-devel,
qemu-arm
On 2024/2/20 20:31, Peter Maydell wrote:
> On Tue, 20 Feb 2024 at 12:19, Jinjie Ruan <ruanjinjie@huawei.com> wrote:
>>
>> Enable Non-maskable Interrupt feature.
>>
>> Enable HCRX register feature to support TALLINT read/write.
>>
>> Add support for enable/disable NMI at qemu startup as below:
>>
>> qemu-system-aarch64 -cpu cortex-a53/a57/a72/a76,nmi=[on/off]
>>
>> Add support for allint read/write as follow:
>>
>> mrs <xt>, ALLINT // read allint
>> msr ALLINT, <xt> // write allint with imm
>> msr ALLINT, #<imm> // write allint with 1 or 0
>
> Can I ask you to break this patchset down into smaller
> coherent pieces, please? When you write a commit message
> that has this sort of "list of four different things the
> patch does" structure, that's a sign that really it ought to
> be multiple different patches that do one thing each.
Thank you very much! I'll break up the patches so that each one does
only one thing.
>
> Do we really need the command line option? Mostly we
> don't add that for new CPU features, unless there's a
> strong reason why users might need to turn it off: instead
> we just implement it if the CPU type and/or the board has
> the feature.
Right! Thank you. I'll remove the command line option and just implement
it if the CPU has the feature.
>
> thanks
> -- PMM