* [PATCH 00/14] XIVE2 changes to support Group and Crowd operations
@ 2024-10-15 21:13 Michael Kowal
2024-10-15 21:13 ` [PATCH 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
` (13 more replies)
0 siblings, 14 replies; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
XIVE2 has the concepts of a Group of interrupts and a Crowd of interrupts
(where a crowd is a group of Groups). This patch series covers:
- NVGC tables
- Group/Crowd level notification
- Incrementing backlog counters
- Backlog processing
- NVPG and NVC BAR MMIO operations
- Group/Crowd testing
- ESB Escalation
- Pool interrupt testing
Frederic Barrat (10):
ppc/xive2: Update NVP save/restore for group attributes
ppc/xive2: Add grouping level to notification
ppc/xive2: Support group-matching when looking for target
ppc/xive2: Add undelivered group interrupt to backlog
ppc/xive2: Process group backlog when pushing an OS context
ppc/xive2: Process group backlog when updating the CPPR
qtest/xive: Add group-interrupt test
Add support for MMIO operations on the NVPG/NVC BAR
ppc/xive2: Support crowd-matching when looking for target
ppc/xive2: Check crowd backlog when scanning group backlog
Glenn Miles (4):
pnv/xive: Only support crowd size of 0, 2, 4 and 16
pnv/xive: Support ESB Escalation
pnv/xive: Fix problem with treating NVGC as a NVP
qtest/xive: Add test of pool interrupts
include/hw/ppc/xive.h | 35 +-
include/hw/ppc/xive2.h | 19 +-
include/hw/ppc/xive2_regs.h | 25 +-
include/hw/ppc/xive_regs.h | 20 +-
tests/qtest/pnv-xive2-common.h | 1 +
hw/intc/pnv_xive.c | 5 +-
hw/intc/pnv_xive2.c | 161 +++++--
hw/intc/spapr_xive.c | 3 +-
hw/intc/xive.c | 182 +++++---
hw/intc/xive2.c | 741 +++++++++++++++++++++++++++----
hw/ppc/pnv.c | 31 +-
hw/ppc/spapr.c | 4 +-
tests/qtest/pnv-xive2-nvpg_bar.c | 154 +++++++
tests/qtest/pnv-xive2-test.c | 240 ++++++++++
hw/intc/trace-events | 6 +-
tests/qtest/meson.build | 3 +-
16 files changed, 1440 insertions(+), 190 deletions(-)
create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
--
2.43.0
* [PATCH 01/14] ppc/xive2: Update NVP save/restore for group attributes
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-10-15 21:13 ` [PATCH 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
` (12 subsequent siblings)
13 siblings, 0 replies; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If the 'H' attribute is set on the NVP structure, the hardware
automatically saves and restores some attributes from the TIMA in the
NVP structure.
The group-specific attributes LSMFB, LGS and T have an extra flag to
individually control what is saved/restored.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2_regs.h | 5 +++++
hw/intc/xive2.c | 18 ++++++++++++++++--
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 1d00c8df64..30868e8e09 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -152,6 +152,9 @@ typedef struct Xive2Nvp {
uint32_t w0;
#define NVP2_W0_VALID PPC_BIT32(0)
#define NVP2_W0_HW PPC_BIT32(7)
+#define NVP2_W0_L PPC_BIT32(8)
+#define NVP2_W0_G PPC_BIT32(9)
+#define NVP2_W0_T PPC_BIT32(10)
#define NVP2_W0_ESC_END PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
#define NVP2_W0_PGOFIRST PPC_BITMASK32(26, 31)
uint32_t w1;
@@ -163,6 +166,8 @@ typedef struct Xive2Nvp {
#define NVP2_W2_CPPR PPC_BITMASK32(0, 7)
#define NVP2_W2_IPB PPC_BITMASK32(8, 15)
#define NVP2_W2_LSMFB PPC_BITMASK32(16, 23)
+#define NVP2_W2_T PPC_BIT32(27)
+#define NVP2_W2_LGS PPC_BITMASK32(28, 31)
uint32_t w3;
uint32_t w4;
#define NVP2_W4_ESC_ESB_BLOCK PPC_BITMASK32(0, 3) /* N:0 */
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index d1df35e9b3..4adc3b6950 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -313,7 +313,19 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
- nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
+ if (nvp.w0 & NVP2_W0_L) {
+ /*
+ * Typically not used. If LSMFB is restored with 0, it will
+ * force a backlog rescan
+ */
+ nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
+ }
+ if (nvp.w0 & NVP2_W0_G) {
+ nvp.w2 = xive_set_field32(NVP2_W2_LGS, nvp.w2, regs[TM_LGS]);
+ }
+ if (nvp.w0 & NVP2_W0_T) {
+ nvp.w2 = xive_set_field32(NVP2_W2_T, nvp.w2, regs[TM_T]);
+ }
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
nvp.w1 = xive_set_field32(NVP2_W1_CO, nvp.w1, 0);
@@ -527,7 +539,9 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
- /* we don't model LSMFB */
+ tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
+ tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
+ tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
--
2.43.0
* [PATCH 02/14] ppc/xive2: Add grouping level to notification
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
2024-10-15 21:13 ` [PATCH 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 2:08 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
` (11 subsequent siblings)
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
The NSR has a (so far unused) grouping level field. When an interrupt
is presented, that field tells the hypervisor or OS whether the
interrupt is for an individual VP or for a VP group/crowd. This patch
reworks the presentation API to allow setting/unsetting the level when
raising/accepting an interrupt.
It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update(), as
the IPB is only used for VP-specific targets, whereas the PIPR always
needs to be updated.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 19 +++++++-
include/hw/ppc/xive_regs.h | 20 +++++++--
hw/intc/xive.c | 90 +++++++++++++++++++++++---------------
hw/intc/xive2.c | 18 ++++----
hw/intc/trace-events | 2 +-
5 files changed, 100 insertions(+), 49 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 31242f0406..27ef6c1a17 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -510,6 +510,21 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
0 : 1 << (XIVE_PRIORITY_MAX - priority);
}
+static inline uint8_t xive_priority_to_pipr(uint8_t priority)
+{
+ return priority > XIVE_PRIORITY_MAX ? 0xFF : priority;
+}
+
+/*
+ * Convert an Interrupt Pending Buffer (IPB) register to a Pending
+ * Interrupt Priority Register (PIPR), which contains the priority of
+ * the most favored pending notification.
+ */
+static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
+{
+ return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
+}
+
/*
* XIVE Thread Interrupt Management Aera (TIMA)
*
@@ -532,8 +547,10 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
void xive_tctx_reset(XiveTCTX *tctx);
void xive_tctx_destroy(XiveTCTX *tctx);
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb);
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level);
void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
/*
* KVM XIVE device helpers
diff --git a/include/hw/ppc/xive_regs.h b/include/hw/ppc/xive_regs.h
index 326327fc79..b455728c9c 100644
--- a/include/hw/ppc/xive_regs.h
+++ b/include/hw/ppc/xive_regs.h
@@ -146,7 +146,14 @@
#define TM_SPC_PULL_PHYS_CTX_OL 0xc38 /* Pull phys ctx to odd cache line */
/* XXX more... */
-/* NSR fields for the various QW ack types */
+/*
+ * NSR fields for the various QW ack types
+ *
+ * P10 has an extra bit in QW3 for the group level instead of the
+ * reserved 'i' bit. Since it is not used and we don't support group
+ * interrupts on P9, we use the P10 definition for the group level so
+ * that we can have common macros for the NSR
+ */
#define TM_QW0_NSR_EB PPC_BIT8(0)
#define TM_QW1_NSR_EO PPC_BIT8(0)
#define TM_QW3_NSR_HE PPC_BITMASK8(0, 1)
@@ -154,8 +161,15 @@
#define TM_QW3_NSR_HE_POOL 1
#define TM_QW3_NSR_HE_PHYS 2
#define TM_QW3_NSR_HE_LSI 3
-#define TM_QW3_NSR_I PPC_BIT8(2)
-#define TM_QW3_NSR_GRP_LVL PPC_BIT8(3, 7)
+#define TM_NSR_GRP_LVL PPC_BITMASK8(2, 7)
+/*
+ * On P10, the format of the 6-bit group level is: 2 bits for the
+ * crowd size and 4 bits for the group size. Since group/crowd size is
+ * always a power of 2, we encode the log. For example, group_level=4
+ * means crowd size = 0 and group size = 16 (2^4)
+ * Same encoding is used in the NVP and NVGC structures for
+ * PGoFirst and PGoNext fields
+ */
/*
* EAS (Event Assignment Structure)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index efcb63e8aa..bacf518fa6 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -27,16 +27,6 @@
* XIVE Thread Interrupt Management context
*/
-/*
- * Convert an Interrupt Pending Buffer (IPB) register to a Pending
- * Interrupt Priority Register (PIPR), which contains the priority of
- * the most favored pending notification.
- */
-static uint8_t ipb_to_pipr(uint8_t ibp)
-{
- return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
-}
-
static uint8_t exception_mask(uint8_t ring)
{
switch (ring) {
@@ -87,10 +77,17 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
regs[TM_CPPR] = cppr;
- /* Reset the pending buffer bit */
- alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ /*
+ * If the interrupt was for a specific VP, reset the pending
+ * buffer bit, otherwise clear the logical server indicator
+ */
+ if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
+ regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
+ } else {
+ alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ }
- /* Drop Exception bit */
+ /* Drop the exception bit */
regs[TM_NSR] &= ~mask;
trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
@@ -101,7 +98,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
return ((uint64_t)nsr << 8) | regs[TM_CPPR];
}
-static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
{
/* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
@@ -111,13 +108,13 @@ static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
switch (ring) {
case TM_QW1_OS:
- regs[TM_NSR] |= TM_QW1_NSR_EO;
+ regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
break;
case TM_QW2_HV_POOL:
- alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6);
+ alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
break;
case TM_QW3_HV_PHYS:
- regs[TM_NSR] |= (TM_QW3_NSR_HE_PHYS << 6);
+ regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
break;
default:
g_assert_not_reached();
@@ -159,7 +156,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
* Recompute the PIPR based on local pending interrupts. The PHYS
* ring must take the minimum of both the PHYS and POOL PIPR values.
*/
- pipr_min = ipb_to_pipr(regs[TM_IPB]);
+ pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
ring_min = ring;
/* PHYS updates also depend on POOL values */
@@ -169,7 +166,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
/* POOL values only matter if POOL ctx is valid */
if (pool_regs[TM_WORD2] & 0x80) {
- uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
+ uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
/*
* Determine highest priority interrupt and
@@ -185,17 +182,27 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
regs[TM_PIPR] = pipr_min;
/* CPPR has changed, check if we need to raise a pending exception */
- xive_tctx_notify(tctx, ring_min);
+ xive_tctx_notify(tctx, ring_min, 0);
}
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
-{
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level)
+ {
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *alt_regs = &tctx->regs[alt_ring];
uint8_t *regs = &tctx->regs[ring];
- regs[TM_IPB] |= ipb;
- regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
- xive_tctx_notify(tctx, ring);
-}
+ if (group_level == 0) {
+ /* VP-specific */
+ regs[TM_IPB] |= xive_priority_to_ipb(priority);
+ alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+ } else {
+ /* VP-group */
+ alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
+ }
+ xive_tctx_notify(tctx, ring, group_level);
+ }
/*
* XIVE Thread Interrupt Management Area (TIMA)
@@ -411,13 +418,13 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
}
/*
- * Adjust the IPB to allow a CPU to process event queues of other
+ * Adjust the PIPR to allow a CPU to process event queues of other
* priorities during one physical interrupt cycle.
*/
static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size)
{
- xive_tctx_ipb_update(tctx, TM_QW1_OS, xive_priority_to_ipb(value & 0xff));
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
}
static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
@@ -495,16 +502,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
/* Reset the NVT value */
nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
- }
+
+ uint8_t *regs = &tctx->regs[TM_QW1_OS];
+ regs[TM_IPB] |= ipb;
+}
+
/*
- * Always call xive_tctx_ipb_update(). Even if there were no
+ * Always call xive_tctx_pipr_update(). Even if there were no
* escalation triggered, there could be a pending interrupt which
* was saved when the context was pulled and that we need to take
* into account by recalculating the PIPR (which is not
* saved/restored).
* It will also raise the External interrupt signal if needed.
*/
- xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
}
/*
@@ -841,9 +852,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
* CPPR is first set.
*/
tctx->regs[TM_QW1_OS + TM_PIPR] =
- ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
+ xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
- ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
+ xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
}
static void xive_tctx_realize(DeviceState *dev, Error **errp)
@@ -1660,6 +1671,12 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
}
+static uint8_t xive_get_group_level(uint32_t nvp_index)
+{
+ /* FIXME add crowd encoding */
+ return ctz32(~nvp_index) + 1;
+}
+
/*
* The thread context register words are in big-endian format.
*/
@@ -1745,6 +1762,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+ uint8_t group_level;
int count;
/*
@@ -1758,9 +1776,9 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
/* handle CPU exception delivery */
if (count) {
- trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring);
- xive_tctx_ipb_update(match.tctx, match.ring,
- xive_priority_to_ipb(priority));
+ group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+ trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
+ xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
}
return !!count;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 4adc3b6950..db372f4b30 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -564,8 +564,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
+ uint8_t ipb, backlog_level;
+ uint8_t backlog_prio;
+ uint8_t *regs = &tctx->regs[TM_QW1_OS];
Xive2Nvp nvp;
- uint8_t ipb;
/*
* Grab the associated thread interrupt context registers in the
@@ -594,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
}
+ regs[TM_IPB] = ipb;
+ backlog_prio = xive_ipb_to_pipr(ipb);
+ backlog_level = 0;
+
/*
- * Always call xive_tctx_ipb_update(). Even if there were no
- * escalation triggered, there could be a pending interrupt which
- * was saved when the context was pulled and that we need to take
- * into account by recalculating the PIPR (which is not
- * saved/restored).
- * It will also raise the External interrupt signal if needed.
+ * Compute the PIPR based on the restored state.
+ * It will raise the External interrupt signal if needed.
*/
- xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
}
/*
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 3dcf147198..7435728c51 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -282,7 +282,7 @@ xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "EN
xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
-xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring) "found NVT 0x%x/0x%x ring=0x%x"
+xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
# pnv_xive.c
--
2.43.0
* [PATCH 03/14] ppc/xive2: Support group-matching when looking for target
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
2024-10-15 21:13 ` [PATCH 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
2024-10-15 21:13 ` [PATCH 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 3:22 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 04/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
` (10 subsequent siblings)
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If an END has the 'i' bit set (ignore), then it targets a group of
VPs. The size of the group depends on the VP index of the target
(first 0 found when looking at the least significant bits of the
index), so a mask is applied to the VP index of a running thread to
know if we have a match.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 5 +++-
include/hw/ppc/xive2.h | 1 +
hw/intc/pnv_xive2.c | 33 ++++++++++++++-------
hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
hw/intc/xive2.c | 65 ++++++++++++++++++++++++++++++------------
5 files changed, 114 insertions(+), 46 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 27ef6c1a17..a177b75723 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
typedef struct XiveTCTXMatch {
XiveTCTX *tctx;
uint8_t ring;
+ bool precluded;
} XiveTCTXMatch;
#define TYPE_XIVE_PRESENTER "xive-presenter"
@@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
- uint32_t logic_serv);
+ uint32_t logic_serv, bool *precluded);
+
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
/*
* XIVE Fabric (Interface between Interrupt Controller and Machine)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 5bccf41159..17c31fcb4b 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size);
void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 834d32287b..3fb466bb2c 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
logic_serv);
}
- /*
- * Save the context and follow on to catch duplicates,
- * that we don't support yet.
- */
if (ring != -1) {
- if (match->tctx) {
+ /*
+ * For VP-specific match, finding more than one is a
+ * problem. For group notification, it's possible.
+ */
+ if (!cam_ignore && match->tctx) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
"thread context NVT %x/%x\n",
nvt_blk, nvt_idx);
- return false;
+ /* Should set a FIR if we ever model it */
+ return -1;
+ }
+ /*
+ * For a group notification, we need to know if the
+ * match is precluded first by checking the current
+ * thread priority. If the interrupt can be delivered,
+ * we always notify the first match (for now).
+ */
+ if (cam_ignore &&
+ xive2_tm_irq_precluded(tctx, ring, priority)) {
+ match->precluded = true;
+ } else {
+ if (!match->tctx) {
+ match->ring = ring;
+ match->tctx = tctx;
+ }
+ count++;
}
-
- match->ring = ring;
- match->tctx = tctx;
- count++;
}
}
}
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index bacf518fa6..8ffcac4f65 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
}
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
+{
+ /*
+ * Group size is a power of 2. The position of the first 0
+ * (starting with the least significant bits) in the NVP index
+ * gives the size of the group.
+ */
+ return 1 << (ctz32(~nvp_index) + 1);
+}
+
static uint8_t xive_get_group_level(uint32_t nvp_index)
{
/* FIXME add crowd encoding */
@@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
/*
* This is our simple Xive Presenter Engine model. It is merged in the
* Router as it does not require an extra object.
- *
- * It receives notification requests sent by the IVRE to find one
- * matching NVT (or more) dispatched on the processor threads. In case
- * of a single NVT notification, the process is abbreviated and the
- * thread is signaled if a match is found. In case of a logical server
- * notification (bits ignored at the end of the NVT identifier), the
- * IVPE and IVRE select a winning thread using different filters. This
- * involves 2 or 3 exchanges on the PowerBus that the model does not
- * support.
- *
- * The parameters represent what is sent on the PowerBus
*/
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
- uint32_t logic_serv)
+ uint32_t logic_serv, bool *precluded)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
- XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+ XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
uint8_t group_level;
int count;
/*
- * Ask the machine to scan the interrupt controllers for a match
+ * Ask the machine to scan the interrupt controllers for a match.
+ *
+ * For VP-specific notification, we expect at most one match and
+ * one call to the presenters is all we need (abbreviated notify
+ * sequence documented by the architecture).
+ *
+ * For VP-group notification, match_nvt() is the equivalent of the
+ * "histogram" and "poll" commands sent to the power bus to the
+ * presenters. 'count' could be more than one, but we always
+ * select the first match for now. 'precluded' tells if (at least)
+ * one thread matches but can't take the interrupt now because
+ * it's running at a more favored priority. We return the
+ * information to the router so that it can take appropriate
+ * actions (backlog, escalation, broadcast, etc...)
+ *
+ * If we were to implement a better way of dispatching the
+ * interrupt in case of multiple matches (instead of the first
+ * match), we would need a heuristic to elect a thread (for
+ * example, the hardware keeps track of an 'age' in the TIMA) and
+ * a new command to the presenters (the equivalent of the "assign"
+ * power bus command in the documented full notify sequence.
*/
count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
priority, logic_serv, &match);
@@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
+ } else {
+ *precluded = match.precluded;
}
return !!count;
@@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
uint8_t nvt_blk;
uint32_t nvt_idx;
XiveNVT nvt;
- bool found;
+ bool found, precluded;
uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
@@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
xive_get_field32(END_W7_F0_IGNORE, end.w7),
priority,
- xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
-
+ xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
+ &precluded);
+ /* we don't support VP-group notification on P9, so precluded is not used */
/* TODO: Auto EOI. */
if (found) {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index db372f4b30..2cb03c758e 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
}
+static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
+ uint32_t vp_mask)
+{
+ return (cam1 & vp_mask) == (cam2 & vp_mask);
+}
+
/*
* The thread context register words are in big-endian format.
*/
@@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
- /*
- * TODO (PowerNV): ignore mode. The low order bits of the NVT
- * identifier are ignored in the "CAM" match.
- */
+ uint32_t vp_mask = 0xFFFFFFFF;
if (format == 0) {
- if (cam_ignore == true) {
- /*
- * F=0 & i=1: Logical server notification (bits ignored at
- * the end of the NVT identifier)
- */
- qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
- nvt_blk, nvt_idx);
- return -1;
+ /*
+ * i=0: Specific NVT notification
+ * i=1: VP-group notification (bits ignored at the end of the
+ * NVT identifier)
+ */
+ if (cam_ignore) {
+ vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
}
- /* F=0 & i=0: Specific NVT notification */
+ /* For VP-group notifications, threads with LGS=0 are excluded */
/* PHYS ring */
if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
- cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
+ !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive2_tctx_hw_cam_line(xptr, tctx),
+ vp_mask)) {
return TM_QW3_HV_PHYS;
}
/* HV POOL ring */
if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
- cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
+ !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
+ vp_mask)) {
return TM_QW2_HV_POOL;
}
/* OS ring */
if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
- cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
+ !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
+ vp_mask)) {
return TM_QW1_OS;
}
} else {
/* F=1 : User level Event-Based Branch (EBB) notification */
+ /* FIXME: what if cam_ignore and LGS = 0 ? */
/* USER ring */
if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
(cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
@@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
return -1;
}
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+ uint8_t *regs = &tctx->regs[ring];
+
+ /*
+ * The xive2_presenter_tctx_match() above tells if there's a match
+ * but for VP-group notification, we still need to look at the
+ * priority to know if the thread can take the interrupt now or if
+ * it is precluded.
+ */
+ if (priority < regs[TM_CPPR]) {
+ return false;
+ }
+ return true;
+}
+
static void xive2_router_realize(DeviceState *dev, Error **errp)
{
Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -841,7 +869,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
Xive2End end;
uint8_t priority;
uint8_t format;
- bool found;
+ bool found, precluded;
Xive2Nvp nvp;
uint8_t nvp_blk;
uint32_t nvp_idx;
@@ -922,7 +950,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
xive2_end_is_ignore(&end),
priority,
- xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7));
+ xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
+ &precluded);
/* TODO: Auto EOI. */
--
2.43.0
* [PATCH 04/14] ppc/xive2: Add undelivered group interrupt to backlog
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (2 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-10-15 21:13 ` [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
` (9 subsequent siblings)
13 siblings, 0 replies; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When a group interrupt cannot be delivered, we need to:
- increment the backlog counter for the group in the NVG table
(if the END is configured to keep a backlog).
- start a broadcast operation to set the LSMFB field on matching CPUs
which can't take the interrupt now because they're running at too
high a priority.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 5 ++
include/hw/ppc/xive2.h | 1 +
hw/intc/pnv_xive2.c | 42 +++++++++++++++++
hw/intc/xive2.c | 105 +++++++++++++++++++++++++++++++++++------
hw/ppc/pnv.c | 18 +++++++
5 files changed, 156 insertions(+), 15 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index a177b75723..7660578b20 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -444,6 +444,9 @@ struct XivePresenterClass {
uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
+ int (*broadcast)(XivePresenter *xptr,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority);
};
int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -474,6 +477,8 @@ struct XiveFabricClass {
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
+ int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority);
};
/*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 17c31fcb4b..d88db05687 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -122,6 +122,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 3fb466bb2c..0482193fd7 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -706,6 +706,47 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
return cfg;
}
+static int pnv_xive2_broadcast(XivePresenter *xptr,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority)
+{
+ PnvXive2 *xive = PNV_XIVE2(xptr);
+ PnvChip *chip = xive->chip;
+ int i, j;
+ bool gen1_tima_os =
+ xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
+
+ for (i = 0; i < chip->nr_cores; i++) {
+ PnvCore *pc = chip->cores[i];
+ CPUCore *cc = CPU_CORE(pc);
+
+ for (j = 0; j < cc->nr_threads; j++) {
+ PowerPCCPU *cpu = pc->threads[j];
+ XiveTCTX *tctx;
+ int ring;
+
+ if (!pnv_xive2_is_cpu_enabled(xive, cpu)) {
+ continue;
+ }
+
+ tctx = XIVE_TCTX(pnv_cpu_state(cpu)->intc);
+
+ if (gen1_tima_os) {
+ ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+ nvt_idx, true, 0);
+ } else {
+ ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+ nvt_idx, true, 0);
+ }
+
+ if (ring != -1) {
+ xive2_tm_set_lsmfb(tctx, ring, priority);
+ }
+ }
+ }
+ return 0;
+}
+
static uint8_t pnv_xive2_get_block_id(Xive2Router *xrtr)
{
return pnv_xive2_block_id(PNV_XIVE2(xrtr));
@@ -2446,6 +2487,7 @@ static void pnv_xive2_class_init(ObjectClass *klass, void *data)
xpc->match_nvt = pnv_xive2_match_nvt;
xpc->get_config = pnv_xive2_presenter_get_config;
+ xpc->broadcast = pnv_xive2_broadcast;
};
static const TypeInfo pnv_xive2_info = {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 2cb03c758e..a6dc6d553f 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -63,6 +63,30 @@ static uint32_t xive2_nvgc_get_backlog(Xive2Nvgc *nvgc, uint8_t priority)
return val;
}
+static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
+ uint32_t val)
+{
+ uint8_t *ptr, i;
+ uint32_t shift;
+
+ if (priority > 7) {
+ return;
+ }
+
+ if (val > 0xFFFFFF) {
+ val = 0xFFFFFF;
+ }
+ /*
+ * The per-priority backlog counters are 24-bit and the structure
+ * is stored in big endian
+ */
+ ptr = (uint8_t *)&nvgc->w2 + priority * 3;
+ for (i = 0; i < 3; i++, ptr++) {
+ shift = 8 * (2 - i);
+ *ptr = (val >> shift) & 0xFF;
+ }
+}
+
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
{
if (!xive2_eas_is_valid(eas)) {
@@ -830,6 +854,19 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
return true;
}
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+ uint8_t *regs = &tctx->regs[ring];
+
+ /*
+ * Called by the router during a VP-group notification when the
+ * thread matches but can't take the interrupt because it's
+ * already running at a more favored priority. It then stores the
+ * new interrupt priority in the LSMFB field.
+ */
+ regs[TM_LSMFB] = priority;
+}
+
static void xive2_router_realize(DeviceState *dev, Error **errp)
{
Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -962,10 +999,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
/*
* If no matching NVP is dispatched on a HW thread :
* - specific VP: update the NVP structure if backlog is activated
- * - logical server : forward request to IVPE (not supported)
+ * - VP-group: update the backlog counter for that priority in the NVG
*/
if (xive2_end_is_backlog(&end)) {
- uint8_t ipb;
if (format == 1) {
qemu_log_mask(LOG_GUEST_ERROR,
@@ -974,19 +1010,58 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
- /*
- * Record the IPB in the associated NVP structure for later
- * use. The presenter will resend the interrupt when the vCPU
- * is dispatched again on a HW thread.
- */
- ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
- xive_priority_to_ipb(priority);
- nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
- xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
-
- /*
- * On HW, follows a "Broadcast Backlog" to IVPEs
- */
+ if (!xive2_end_is_ignore(&end)) {
+ uint8_t ipb;
+ /*
+ * Record the IPB in the associated NVP structure for later
+ * use. The presenter will resend the interrupt when the vCPU
+ * is dispatched again on a HW thread.
+ */
+ ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
+ xive_priority_to_ipb(priority);
+ nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+ xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
+ } else {
+ Xive2Nvgc nvg;
+ uint32_t backlog;
+
+ /* For groups, the per-priority backlog counters are in the NVG */
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvgc_is_valid(&nvg)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ /*
+ * Increment the backlog counter for that priority.
+ * For the precluded case, we only call broadcast the
+ * first time the counter is incremented. broadcast will
+ * set the LSMFB field of the TIMA of relevant threads so
+ * that they know an interrupt is pending.
+ */
+ backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
+ xive2_nvgc_set_backlog(&nvg, priority, backlog);
+ xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+
+ if (precluded && backlog == 1) {
+ XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
+ xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+
+ if (!xive2_end_is_precluded_escalation(&end)) {
+ /*
+ * The interrupt will be picked up when the
+ * matching thread lowers its priority level
+ */
+ return;
+ }
+ }
+ }
}
do_escalation:
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 3526852685..9b42f47326 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2610,6 +2610,23 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
return total_count;
}
+static int pnv10_xive_broadcast(XiveFabric *xfb,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority)
+{
+ PnvMachineState *pnv = PNV_MACHINE(xfb);
+ int i;
+
+ for (i = 0; i < pnv->num_chips; i++) {
+ Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
+ XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
+ XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
+
+ xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+ }
+ return 0;
+}
+
static bool pnv_machine_get_big_core(Object *obj, Error **errp)
{
PnvMachineState *pnv = PNV_MACHINE(obj);
@@ -2743,6 +2760,7 @@ static void pnv_machine_p10_common_class_init(ObjectClass *oc, void *data)
pmc->dt_power_mgt = pnv_dt_power_mgt;
xfc->match_nvt = pnv10_xive_match_nvt;
+ xfc->broadcast = pnv10_xive_broadcast;
machine_class_allow_dynamic_sysbus_dev(mc, TYPE_PNV_PHB);
}
--
2.43.0
* [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (3 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 04/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 4:20 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
` (8 subsequent siblings)
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When pushing an OS context, we were already checking if there was a
pending interrupt in the IPB and sending a notification if needed. We
also need to check if there is a pending group interrupt stored in the
NVG table. To avoid useless backlog scans, we only scan if the NVP
belongs to a group.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive2.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 97 insertions(+), 3 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index a6dc6d553f..7130892482 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -279,6 +279,85 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
+/*
+ * Scan the group chain and return the highest priority and group
+ * level of pending group interrupts.
+ */
+static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t first_group,
+ uint8_t *out_level)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvgc_idx, mask;
+ uint32_t current_level, count;
+ uint8_t prio;
+ Xive2Nvgc nvgc;
+
+ for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
+ current_level = first_group & 0xF;
+
+ while (current_level) {
+ mask = (1 << current_level) - 1;
+ nvgc_idx = nvp_idx & ~mask;
+ nvgc_idx |= mask >> 1;
+ qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
+ __func__, prio, nvgc_idx);
+
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return 0xFF;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return 0xFF;
+ }
+
+ count = xive2_nvgc_get_backlog(&nvgc, prio);
+ if (count) {
+ *out_level = current_level;
+ return prio;
+ }
+ current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
+ }
+ }
+ return 0xFF;
+}
+
+static void xive2_presenter_backlog_decr(XivePresenter *xptr,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t group_prio,
+ uint8_t group_level)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvgc_idx, mask, count;
+ Xive2Nvgc nvgc;
+
+ group_level &= 0xF;
+ mask = (1 << group_level) - 1;
+ nvgc_idx = nvp_idx & ~mask;
+ nvgc_idx |= mask >> 1;
+
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return;
+ }
+ count = xive2_nvgc_get_backlog(&nvgc, group_prio);
+ if (!count) {
+ return;
+ }
+ xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
+ xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
+}
+
/*
* XIVE Thread Interrupt Management Area (TIMA) - Gen2 mode
*
@@ -588,8 +667,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
- uint8_t ipb, backlog_level;
- uint8_t backlog_prio;
+ XivePresenter *xptr = XIVE_PRESENTER(xrtr);
+ uint8_t ipb, backlog_level, group_level, first_group;
+ uint8_t backlog_prio, group_prio;
uint8_t *regs = &tctx->regs[TM_QW1_OS];
Xive2Nvp nvp;
@@ -624,8 +704,22 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
backlog_prio = xive_ipb_to_pipr(ipb);
backlog_level = 0;
+ first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
+ if (first_group && regs[TM_LSMFB] < backlog_prio) {
+ group_prio = xive2_presenter_backlog_check(xptr, nvp_blk, nvp_idx,
+ first_group, &group_level);
+ regs[TM_LSMFB] = group_prio;
+ if (regs[TM_LGS] && group_prio < backlog_prio) {
+ /* VP can take a group interrupt */
+ xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
+ group_prio, group_level);
+ backlog_prio = group_prio;
+ backlog_level = group_level;
+ }
+ }
+
/*
- * Compute the PIPR based on the restored state.
+ * Compute the PIPR based on the restored state.
* It will raise the External interrupt signal if needed.
*/
xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
--
2.43.0
* [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (4 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 4:34 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 07/14] qtest/xive: Add group-interrupt test Michael Kowal
` (7 subsequent siblings)
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When the hypervisor or OS pushes a new value to the CPPR and the LSMFB
value is lower than the new CPPR value, there could be a pending group
interrupt in the backlog, so the backlog needs to be scanned.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 4 +
hw/intc/xive.c | 4 +-
hw/intc/xive2.c | 173 ++++++++++++++++++++++++++++++++++++++++-
3 files changed, 177 insertions(+), 4 deletions(-)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index d88db05687..e61b978f37 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -115,6 +115,10 @@ typedef struct Xive2EndSource {
* XIVE2 Thread Interrupt Management Area (POWER10)
*/
+void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
uint64_t value, unsigned size);
uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 8ffcac4f65..2aa6e1fecc 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -603,7 +603,7 @@ static const XiveTmOp xive2_tm_operations[] = {
* MMIOs below 2K : raw values and special operations without side
* effects
*/
- { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
+ { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
NULL },
{ XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
NULL },
@@ -611,7 +611,7 @@ static const XiveTmOp xive2_tm_operations[] = {
NULL },
{ XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
NULL },
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 7130892482..0c53f71879 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -18,6 +18,7 @@
#include "hw/ppc/xive.h"
#include "hw/ppc/xive2.h"
#include "hw/ppc/xive2_regs.h"
+#include "trace.h"
uint32_t xive2_router_get_config(Xive2Router *xrtr)
{
@@ -764,6 +765,172 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
}
+static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
+ uint32_t *nvp_blk, uint32_t *nvp_idx)
+{
+ uint32_t w2, cam;
+
+ w2 = xive_tctx_word2(&tctx->regs[ring]);
+ switch (ring) {
+ case TM_QW1_OS:
+ if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
+ return -1;
+ }
+ cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
+ break;
+ case TM_QW2_HV_POOL:
+ if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
+ return -1;
+ }
+ cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
+ break;
+ case TM_QW3_HV_PHYS:
+ if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
+ return -1;
+ }
+ cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
+ break;
+ default:
+ return -1;
+ }
+ *nvp_blk = xive2_nvp_blk(cam);
+ *nvp_idx = xive2_nvp_idx(cam);
+ return 0;
+}
+
+static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
+{
+ uint8_t *regs = &tctx->regs[ring];
+ Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
+ uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
+ uint8_t pipr_min, lsmfb_min, ring_min;
+ bool group_enabled;
+ uint32_t nvp_blk, nvp_idx;
+ Xive2Nvp nvp;
+ int rc;
+
+ trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
+ regs[TM_IPB], regs[TM_PIPR],
+ cppr, regs[TM_NSR]);
+
+ if (cppr > XIVE_PRIORITY_MAX) {
+ cppr = 0xff;
+ }
+
+ old_cppr = regs[TM_CPPR];
+ regs[TM_CPPR] = cppr;
+
+ /*
+ * Recompute the PIPR based on local pending interrupts. It will
+ * be adjusted below if needed in case of pending group interrupts.
+ */
+ pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
+ group_enabled = !!regs[TM_LGS];
+ lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
+ ring_min = ring;
+
+ /* PHYS updates also depend on POOL values */
+ if (ring == TM_QW3_HV_PHYS) {
+ uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
+
+ /* POOL values only matter if POOL ctx is valid */
+ if (pregs[TM_WORD2] & 0x80) {
+
+ uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
+ uint8_t pool_lsmfb = pregs[TM_LSMFB];
+
+ /*
+ * Determine highest priority interrupt and
+ * remember which ring has it.
+ */
+ if (pool_pipr < pipr_min) {
+ pipr_min = pool_pipr;
+ if (pool_pipr < lsmfb_min) {
+ ring_min = TM_QW2_HV_POOL;
+ }
+ }
+
+ /* Values needed for group priority calculation */
+ if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
+ group_enabled = true;
+ lsmfb_min = pool_lsmfb;
+ if (lsmfb_min < pipr_min) {
+ ring_min = TM_QW2_HV_POOL;
+ }
+ }
+ }
+ }
+ regs[TM_PIPR] = pipr_min;
+
+ rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
+ if (rc) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
+ return;
+ }
+
+ if (cppr < old_cppr) {
+ /*
+ * FIXME: check if there's a group interrupt being presented
+ * and if the new cppr prevents it. If so, then the group
+ * interrupt needs to be re-added to the backlog and
+ * re-triggered (see re-trigger END info in the NVGC
+ * structure)
+ */
+ }
+
+ if (group_enabled &&
+ lsmfb_min < cppr &&
+ lsmfb_min < regs[TM_PIPR]) {
+ /*
+ * Thread has seen a group interrupt with a higher priority
+ * than the new cppr or pending local interrupt. Check the
+ * backlog
+ */
+ if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
+ if (!first_group) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ backlog_prio = xive2_presenter_backlog_check(tctx->xptr,
+ nvp_blk, nvp_idx,
+ first_group, &group_level);
+ tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
+ if (backlog_prio != 0xFF) {
+ xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
+ backlog_prio, group_level);
+ regs[TM_PIPR] = backlog_prio;
+ }
+ }
+ /* CPPR has changed, check if we need to raise a pending exception */
+ xive_tctx_notify(tctx, ring_min, group_level);
+}
+
+void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_set_cppr(tctx, TM_QW3_HV_PHYS, value & 0xff);
+}
+
+void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
+}
+
static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
{
uint8_t *regs = &tctx->regs[ring];
@@ -934,7 +1101,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
{
- uint8_t *regs = &tctx->regs[ring];
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *alt_regs = &tctx->regs[alt_ring];
/*
* The xive2_presenter_tctx_match() above tells if there's a match
@@ -942,7 +1111,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
* priority to know if the thread can take the interrupt now or if
* it is precluded.
*/
- if (priority < regs[TM_CPPR]) {
+ if (priority < alt_regs[TM_CPPR]) {
return false;
}
return true;
--
2.43.0
* [PATCH 07/14] qtest/xive: Add group-interrupt test
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (5 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-10-15 21:13 ` [PATCH 08/14] Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
` (6 subsequent siblings)
13 siblings, 0 replies; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
Add XIVE2 tests for group interrupts and group interrupts that have
been backlogged.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
tests/qtest/pnv-xive2-test.c | 160 +++++++++++++++++++++++++++++++++++
1 file changed, 160 insertions(+)
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index 4ec1cc1b0f..1705127da1 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -2,6 +2,8 @@
* QTest testcase for PowerNV 10 interrupt controller (xive2)
* - Test irq to hardware thread
* - Test 'Pull Thread Context to Odd Thread Reporting Line'
+ * - Test irq to hardware group
+ * - Test irq to hardware group going through backlog
*
* Copyright (c) 2024, IBM Corporation.
*
@@ -316,6 +318,158 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
word2 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD2);
g_assert_cmphex(xive_get_field32(TM_QW3W2_VT, word2), ==, 0);
}
+
+static void test_hw_group_irq(QTestState *qts)
+{
+ uint32_t irq = 100;
+ uint32_t irq_data = 0xdeadbeef;
+ uint32_t end_index = 23;
+ uint32_t chosen_one;
+ uint32_t target_nvp = 0x81; /* group size = 4 */
+ uint8_t priority = 6;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to hardware group of size 4\n", irq);
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, true /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* find the targeted vCPU */
+ for (chosen_one = 0; chosen_one < SMT; chosen_one++) {
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ if (nsr == 0x82) {
+ break;
+ }
+ }
+ g_assert_cmphex(chosen_one, <, SMT);
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, 0xFF);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+}
+
+static void test_hw_group_irq_backlog(QTestState *qts)
+{
+ uint32_t irq = 31;
+ uint32_t irq_data = 0x01234567;
+ uint32_t end_index = 129;
+ uint32_t target_nvp = 0x81; /* group size = 4 */
+ uint32_t chosen_one = 3;
+ uint8_t blocking_priority, priority = 3;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr, lsmfb, i;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to hardware group of size 4 going through " \
+ "backlog\n",
+ irq);
+
+ /*
+ * set current priority of all threads in the group to something
+ * higher than what we're about to trigger
+ */
+ blocking_priority = priority - 1;
+ for (i = 0; i < SMT; i++) {
+ set_tima8(qts, i, TM_QW3_HV_PHYS + TM_CPPR, blocking_priority);
+ }
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, true /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* check no interrupt is pending on the 2 possible targets */
+ for (i = 0; i < SMT; i++) {
+ reg32 = get_tima32(qts, i, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ lsmfb = reg32 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x0);
+ g_assert_cmphex(cppr, ==, blocking_priority);
+ g_assert_cmphex(lsmfb, ==, priority);
+ }
+
+ /* lower priority of one thread */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, priority + 1);
+
+ /* check backlogged interrupt is presented */
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority + 1);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ lsmfb = reg32 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+ g_assert_cmphex(lsmfb, ==, 0xFF);
+}
+
static void test_xive(void)
{
QTestState *qts;
@@ -331,6 +485,12 @@ static void test_xive(void)
/* omit reset_state here and use settings from test_hw_irq */
test_pull_thread_ctx_to_odd_thread_cl(qts);
+ reset_state(qts);
+ test_hw_group_irq(qts);
+
+ reset_state(qts);
+ test_hw_group_irq_backlog(qts);
+
reset_state(qts);
test_flush_sync_inject(qts);
--
2.43.0
* [PATCH 08/14] Add support for MMIO operations on the NVPG/NVC BAR
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (6 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 07/14] qtest/xive: Add group-interrupt test Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-10-15 21:13 ` [PATCH 09/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
` (5 subsequent siblings)
13 siblings, 0 replies; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
Add support for the NVPG and NVC BARs. Accesses to the BAR pages cause
backlog counter operations that either increment or decrement the
counter.
Also add qtests for the same.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 9 ++
include/hw/ppc/xive2_regs.h | 3 +
tests/qtest/pnv-xive2-common.h | 1 +
hw/intc/pnv_xive2.c | 80 +++++++++++++---
hw/intc/xive2.c | 87 +++++++++++++++++
tests/qtest/pnv-xive2-nvpg_bar.c | 154 +++++++++++++++++++++++++++++++
tests/qtest/pnv-xive2-test.c | 3 +
hw/intc/trace-events | 4 +
tests/qtest/meson.build | 3 +-
9 files changed, 329 insertions(+), 15 deletions(-)
create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index e61b978f37..049028d2c2 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -92,6 +92,15 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint32_t logic_serv);
+uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset);
+
+uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
+ bool crowd,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset, uint16_t val);
+
/*
* XIVE2 END ESBs (POWER10)
*/
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 30868e8e09..66a419441c 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -234,4 +234,7 @@ typedef struct Xive2Nvgc {
void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
GString *buf);
+#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
+#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
+
#endif /* PPC_XIVE2_REGS_H */
diff --git a/tests/qtest/pnv-xive2-common.h b/tests/qtest/pnv-xive2-common.h
index 2135b04d5b..910f0f512e 100644
--- a/tests/qtest/pnv-xive2-common.h
+++ b/tests/qtest/pnv-xive2-common.h
@@ -108,5 +108,6 @@ extern void set_end(QTestState *qts, uint32_t index, uint32_t nvp_index,
void test_flush_sync_inject(QTestState *qts);
+void test_nvpg_bar(QTestState *qts);
#endif /* TEST_PNV_XIVE2_COMMON_H */
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 0482193fd7..9736b623ba 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -2203,21 +2203,40 @@ static const MemoryRegionOps pnv_xive2_tm_ops = {
},
};
-static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr offset,
+static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr addr,
unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVC: invalid read @%"HWADDR_PRIx, offset);
- return -1;
+ if (size != 2) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc load size %d\n",
+ size);
+ return -1;
+ }
+
+ return xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, 1);
}
-static void pnv_xive2_nvc_write(void *opaque, hwaddr offset,
+static void pnv_xive2_nvc_write(void *opaque, hwaddr addr,
uint64_t val, unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvc_shift;
+ uint16_t op = addr & 0xFFF;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVC: invalid write @%"HWADDR_PRIx, offset);
+ if (size != 1) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc write size %d\n",
+ size);
+ return;
+ }
+
+ (void)xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, val);
}
static const MemoryRegionOps pnv_xive2_nvc_ops = {
@@ -2225,30 +2244,63 @@ static const MemoryRegionOps pnv_xive2_nvc_ops = {
.write = pnv_xive2_nvc_write,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
.impl = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
};
-static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr offset,
+static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr addr,
unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint32_t index = page >> 1;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVPG: invalid read @%"HWADDR_PRIx, offset);
- return -1;
+ if (size != 2) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg load size %d\n",
+ size);
+ return -1;
+ }
+
+ if (page % 2) {
+ /* odd page - NVG */
+ return xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, 1);
+ } else {
+ /* even page - NVP */
+ return xive2_presenter_nvp_backlog_op(xptr, blk, index, op);
+ }
}
-static void pnv_xive2_nvpg_write(void *opaque, hwaddr offset,
+static void pnv_xive2_nvpg_write(void *opaque, hwaddr addr,
uint64_t val, unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint32_t index = page >> 1;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVPG: invalid write @%"HWADDR_PRIx, offset);
+ if (size != 1) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg write size %d\n",
+ size);
+ return;
+ }
+
+ if (page % 2) {
+ /* odd page - NVG */
+ (void)xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, val);
+ } else {
+ /* even page - NVP */
+ (void)xive2_presenter_nvp_backlog_op(xptr, blk, index, op);
+ }
}
static const MemoryRegionOps pnv_xive2_nvpg_ops = {
@@ -2256,11 +2308,11 @@ static const MemoryRegionOps pnv_xive2_nvpg_ops = {
.write = pnv_xive2_nvpg_write,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
.impl = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
};
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 0c53f71879..b6f279e6a3 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -88,6 +88,93 @@ static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
}
}
+uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
+ bool crowd,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset, uint16_t val)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
+ uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
+ Xive2Nvgc nvgc;
+ uint32_t count, old_count;
+
+ if (xive2_router_get_nvgc(xrtr, crowd, blk, idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No %s %x/%x\n",
+ crowd ? "NVC" : "NVG", blk, idx);
+ return -1;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", blk, idx);
+ return -1;
+ }
+
+ old_count = xive2_nvgc_get_backlog(&nvgc, priority);
+ count = old_count;
+ /*
+ * op:
+ * 0b00 => increment
+ * 0b01 => decrement
+ * 0b1- => read
+ */
+ if (op == 0b00 || op == 0b01) {
+ if (op == 0b00) {
+ count += val;
+ } else {
+ if (count > val) {
+ count -= val;
+ } else {
+ count = 0;
+ }
+ }
+ xive2_nvgc_set_backlog(&nvgc, priority, count);
+ xive2_router_write_nvgc(xrtr, crowd, blk, idx, &nvgc);
+ }
+ trace_xive_nvgc_backlog_op(crowd, blk, idx, op, priority, old_count);
+ return old_count;
+}
+
+uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
+ uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
+ Xive2Nvp nvp;
+ uint8_t ipb, old_ipb, rc;
+
+ if (xive2_router_get_nvp(xrtr, blk, idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n", blk, idx);
+ return -1;
+ }
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVP %x/%x\n", blk, idx);
+ return -1;
+ }
+
+ old_ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
+ ipb = old_ipb;
+ /*
+ * op:
+ * 0b00 => set priority bit
+ * 0b01 => reset priority bit
+ * 0b1- => read
+ */
+ if (op == 0b00 || op == 0b01) {
+ if (op == 0b00) {
+ ipb |= xive_priority_to_ipb(priority);
+ } else {
+ ipb &= ~xive_priority_to_ipb(priority);
+ }
+ nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+ xive2_router_write_nvp(xrtr, blk, idx, &nvp, 2);
+ }
+ rc = !!(old_ipb & xive_priority_to_ipb(priority));
+ trace_xive_nvp_backlog_op(blk, idx, op, priority, rc);
+ return rc;
+}
+
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
{
if (!xive2_eas_is_valid(eas)) {
diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c
new file mode 100644
index 0000000000..10d4962d1e
--- /dev/null
+++ b/tests/qtest/pnv-xive2-nvpg_bar.c
@@ -0,0 +1,154 @@
+/*
+ * QTest testcase for PowerNV 10 interrupt controller (xive2)
+ * - Test NVPG BAR MMIO operations
+ *
+ * Copyright (c) 2024, IBM Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "libqtest.h"
+
+#include "pnv-xive2-common.h"
+
+#define NVPG_BACKLOG_OP_SHIFT 10
+#define NVPG_BACKLOG_PRIO_SHIFT 4
+
+#define XIVE_PRIORITY_MAX 7
+
+enum NVx {
+ NVP,
+ NVG,
+ NVC
+};
+
+typedef enum {
+ INCR_STORE = 0b100,
+ INCR_LOAD = 0b000,
+ DECR_STORE = 0b101,
+ DECR_LOAD = 0b001,
+ READ_x = 0b010,
+ READ_y = 0b011,
+} backlog_op;
+
+static uint32_t nvpg_backlog_op(QTestState *qts, backlog_op op,
+ enum NVx type, uint64_t index,
+ uint8_t priority, uint8_t delta)
+{
+ uint64_t addr, offset;
+ uint32_t count = 0;
+
+ switch (type) {
+ case NVP:
+ addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1));
+ break;
+ case NVG:
+ addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1)) +
+ (1 << XIVE_PAGE_SHIFT);
+ break;
+ case NVC:
+ addr = XIVE_NVC_ADDR + (index << XIVE_PAGE_SHIFT);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ offset = (op & 0b11) << NVPG_BACKLOG_OP_SHIFT;
+ offset |= priority << NVPG_BACKLOG_PRIO_SHIFT;
+ if (op >> 2) {
+ qtest_writeb(qts, addr + offset, delta);
+ } else {
+ count = qtest_readw(qts, addr + offset);
+ }
+ return count;
+}
+
+void test_nvpg_bar(QTestState *qts)
+{
+ uint32_t nvp_target = 0x11;
+ uint32_t group_target = 0x17; /* size 16 */
+ uint32_t vp_irq = 33, group_irq = 47;
+ uint32_t vp_end = 3, group_end = 97;
+ uint32_t vp_irq_data = 0x33333333;
+ uint32_t group_irq_data = 0x66666666;
+ uint8_t vp_priority = 0, group_priority = 5;
+ uint32_t vp_count[XIVE_PRIORITY_MAX + 1] = { 0 };
+ uint32_t group_count[XIVE_PRIORITY_MAX + 1] = { 0 };
+ uint32_t count, delta;
+ uint8_t i;
+
+ printf("# ============================================================\n");
+ printf("# Testing NVPG BAR operations\n");
+
+ set_nvg(qts, group_target, 0);
+ set_nvp(qts, nvp_target, 0x04);
+ set_nvp(qts, group_target, 0x04);
+
+ /*
+ * Setup: trigger a VP-specific interrupt and a group interrupt
+ * so that the backlog counters are initialized to something else
+ * than 0 for at least one priority level
+ */
+ set_eas(qts, vp_irq, vp_end, vp_irq_data);
+ set_end(qts, vp_end, nvp_target, vp_priority, false /* group */);
+
+ set_eas(qts, group_irq, group_end, group_irq_data);
+ set_end(qts, group_end, group_target, group_priority, true /* group */);
+
+ get_esb(qts, vp_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, vp_irq, XIVE_TRIGGER_PAGE, 0, 0);
+ vp_count[vp_priority]++;
+
+ get_esb(qts, group_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, group_irq, XIVE_TRIGGER_PAGE, 0, 0);
+ group_count[group_priority]++;
+
+ /* check the initial counters */
+ for (i = 0; i <= XIVE_PRIORITY_MAX; i++) {
+ count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, i, 0);
+ g_assert_cmpuint(count, ==, vp_count[i]);
+
+ count = nvpg_backlog_op(qts, READ_y, NVG, group_target, i, 0);
+ g_assert_cmpuint(count, ==, group_count[i]);
+ }
+
+ /* do a few ops on the VP. Counter can only be 0 and 1 */
+ vp_priority = 2;
+ delta = 7;
+ nvpg_backlog_op(qts, INCR_STORE, NVP, nvp_target, vp_priority, delta);
+ vp_count[vp_priority] = 1;
+ count = nvpg_backlog_op(qts, INCR_LOAD, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+ count = nvpg_backlog_op(qts, READ_y, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+
+ count = nvpg_backlog_op(qts, DECR_LOAD, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+ vp_count[vp_priority] = 0;
+ nvpg_backlog_op(qts, DECR_STORE, NVP, nvp_target, vp_priority, delta);
+ count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+
+ /* do a few ops on the group */
+ group_priority = 2;
+ delta = 9;
+ /* can't go negative */
+ nvpg_backlog_op(qts, DECR_STORE, NVG, group_target, group_priority, delta);
+ count = nvpg_backlog_op(qts, READ_y, NVG, group_target, group_priority, 0);
+ g_assert_cmpuint(count, ==, 0);
+ nvpg_backlog_op(qts, INCR_STORE, NVG, group_target, group_priority, delta);
+ group_count[group_priority] += delta;
+ count = nvpg_backlog_op(qts, INCR_LOAD, NVG, group_target,
+ group_priority, delta);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+ group_count[group_priority]++;
+
+ count = nvpg_backlog_op(qts, DECR_LOAD, NVG, group_target,
+ group_priority, delta);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+ group_count[group_priority]--;
+ count = nvpg_backlog_op(qts, READ_x, NVG, group_target, group_priority, 0);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+}
+
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index 1705127da1..a6008bc053 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -494,6 +494,9 @@ static void test_xive(void)
reset_state(qts);
test_flush_sync_inject(qts);
+ reset_state(qts);
+ test_nvpg_bar(qts);
+
qtest_quit(qts);
}
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 7435728c51..7f362c38b0 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -285,6 +285,10 @@ xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t v
xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
+# xive2.c
+xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
+xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
+
# pnv_xive.c
pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index d2af58800d..9ef9819450 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -337,7 +337,8 @@ qtests = {
'ivshmem-test': [rt, '../../contrib/ivshmem-server/ivshmem-server.c'],
'migration-test': migration_files,
'pxe-test': files('boot-sector.c'),
- 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c'),
+ 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c',
+ 'pnv-xive2-nvpg_bar.c'),
'qos-test': [chardev, io, qos_test_ss.apply({}).sources()],
'tpm-crb-swtpm-test': [io, tpmemu_files],
'tpm-crb-test': [io, tpmemu_files],
--
2.43.0
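As a side note on the MMIO encoding used by the qtest above: the backlog operation and the priority are carried in the low 12 bits of the page offset, with the operation at shift 10 (matching PPC_BITMASK(52, 53) of the 64-bit address) and the priority at shift 4 (matching PPC_BITMASK(57, 59)). A minimal sketch, where `backlog_offset` is an illustrative helper, not part of the patch:

```c
#include <stdint.h>

#define NVPG_BACKLOG_OP_SHIFT   10  /* PPC_BITMASK(52, 53) of the address */
#define NVPG_BACKLOG_PRIO_SHIFT  4  /* PPC_BITMASK(57, 59) of the address */

/* Build the in-page offset for a backlog operation: 2-bit op
 * selector plus 3-bit priority, both encoded in the address. */
static uint64_t backlog_offset(uint8_t op2, uint8_t priority)
{
    return ((uint64_t)(op2 & 0x3) << NVPG_BACKLOG_OP_SHIFT) |
           ((uint64_t)(priority & 0x7) << NVPG_BACKLOG_PRIO_SHIFT);
}
```

A store to that offset performs an increment/decrement with the store data as delta; a 2-byte load performs the same operation with a delta of 1 and returns the old counter value.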
* [PATCH 09/14] ppc/xive2: Support crowd-matching when looking for target
@ 2024-10-15 21:13 ` Michael Kowal
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If an END is defined with the 'crowd' bit set, then a target can be
running on different blocks. It means that some bits of the VP block
number are masked when looking for a match. It is similar to groups,
but applied to the block instead of the VP index.
Most of the changes are due to passing the extra argument 'crowd' all
the way to the function checking for matches.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 10 +++---
include/hw/ppc/xive2.h | 3 +-
hw/intc/pnv_xive.c | 5 +--
hw/intc/pnv_xive2.c | 12 +++----
hw/intc/spapr_xive.c | 3 +-
hw/intc/xive.c | 21 ++++++++----
hw/intc/xive2.c | 78 +++++++++++++++++++++++++++++++++---------
hw/ppc/pnv.c | 15 ++++----
hw/ppc/spapr.c | 4 +--
9 files changed, 105 insertions(+), 46 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 7660578b20..c9070792ec 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -440,13 +440,13 @@ struct XivePresenterClass {
InterfaceClass parent;
int (*match_nvt)(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
int (*broadcast)(XivePresenter *xptr,
uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority);
+ bool crowd, bool cam_ignore, uint8_t priority);
};
int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -455,7 +455,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool cam_ignore, uint32_t logic_serv);
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, bool *precluded);
uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
@@ -475,10 +475,10 @@ struct XiveFabricClass {
InterfaceClass parent;
int (*match_nvt)(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority);
+ bool crowd, bool cam_ignore, uint8_t priority);
};
/*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 049028d2c2..37aca4d26a 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -90,7 +90,8 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint32_t logic_serv);
+ bool crowd, bool cam_ignore,
+ uint32_t logic_serv);
uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
uint8_t blk, uint32_t idx,
diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
index 5bacbce6a4..346549f32e 100644
--- a/hw/intc/pnv_xive.c
+++ b/hw/intc/pnv_xive.c
@@ -473,7 +473,7 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive *xive = PNV_XIVE(xptr);
@@ -500,7 +500,8 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
* Check the thread context CAM lines and record matches.
*/
ring = xive_presenter_tctx_match(xptr, tctx, format, nvt_blk,
- nvt_idx, cam_ignore, logic_serv);
+ nvt_idx, cam_ignore,
+ logic_serv);
/*
* Save the context and follow on to catch duplicates, that we
* don't support yet.
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 9736b623ba..236f9d7eb7 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -625,7 +625,7 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
@@ -656,8 +656,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
logic_serv);
} else {
ring = xive2_presenter_tctx_match(xptr, tctx, format, nvt_blk,
- nvt_idx, cam_ignore,
- logic_serv);
+ nvt_idx, crowd, cam_ignore,
+ logic_serv);
}
if (ring != -1) {
@@ -708,7 +708,7 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
static int pnv_xive2_broadcast(XivePresenter *xptr,
uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority)
+ bool crowd, bool ignore, uint8_t priority)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
PnvChip *chip = xive->chip;
@@ -733,10 +733,10 @@ static int pnv_xive2_broadcast(XivePresenter *xptr,
if (gen1_tima_os) {
ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
- nvt_idx, true, 0);
+ nvt_idx, ignore, 0);
} else {
ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
- nvt_idx, true, 0);
+ nvt_idx, crowd, ignore, 0);
}
if (ring != -1) {
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
index 283a6b8fd2..41cfcab3b9 100644
--- a/hw/intc/spapr_xive.c
+++ b/hw/intc/spapr_xive.c
@@ -431,7 +431,8 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore,
+ uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
CPUState *cs;
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 2aa6e1fecc..d5fbd9bbd8 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1681,10 +1681,18 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
return 1 << (ctz32(~nvp_index) + 1);
}
-static uint8_t xive_get_group_level(uint32_t nvp_index)
+static uint8_t xive_get_group_level(bool crowd, bool ignore,
+ uint32_t nvp_blk, uint32_t nvp_index)
{
- /* FIXME add crowd encoding */
- return ctz32(~nvp_index) + 1;
+ uint8_t level = 0;
+
+ if (crowd) {
+ level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4;
+ }
+ if (ignore) {
+ level |= (ctz32(~nvp_index) + 1) & 0b1111;
+ }
+ return level;
}
/*
@@ -1756,7 +1764,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
*/
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, bool *precluded)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
@@ -1787,7 +1795,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
* a new command to the presenters (the equivalent of the "assign"
* power bus command in the documented full notify sequence.
*/
- count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
+ count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
priority, logic_serv, &match);
if (count < 0) {
return false;
@@ -1795,7 +1803,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
/* handle CPU exception delivery */
if (count) {
- group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+ group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
} else {
@@ -1920,6 +1928,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
}
found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
+ false /* crowd */,
xive_get_field32(END_W7_F0_IGNORE, end.w7),
priority,
xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index b6f279e6a3..1f2837104c 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1117,13 +1117,42 @@ static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
return (cam1 & vp_mask) == (cam2 & vp_mask);
}
+static uint8_t xive2_get_vp_block_mask(uint32_t nvt_blk, bool crowd)
+{
+ uint8_t size, block_mask = 0b1111;
+
+ /* 3 supported crowd sizes: 2, 4, 16 */
+ if (crowd) {
+ size = xive_get_vpgroup_size(nvt_blk);
+ if (size == 8) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid crowd size of 8\n");
+ return block_mask;
+ }
+ block_mask = ~(size - 1);
+ block_mask &= 0b1111;
+ }
+ return block_mask;
+}
+
+static uint32_t xive2_get_vp_index_mask(uint32_t nvt_index, bool cam_ignore)
+{
+ uint32_t index_mask = 0xFFFFFF; /* 24 bits */
+
+ if (cam_ignore) {
+ index_mask = ~(xive_get_vpgroup_size(nvt_index) - 1);
+ index_mask &= 0xFFFFFF;
+ }
+ return index_mask;
+}
+
/*
* The thread context register words are in big-endian format.
*/
int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint32_t logic_serv)
+ bool crowd, bool cam_ignore,
+ uint32_t logic_serv)
{
uint32_t cam = xive2_nvp_cam_line(nvt_blk, nvt_idx);
uint32_t qw3w2 = xive_tctx_word2(&tctx->regs[TM_QW3_HV_PHYS]);
@@ -1131,7 +1160,8 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
- uint32_t vp_mask = 0xFFFFFFFF;
+ uint32_t index_mask, vp_mask;
+ uint8_t block_mask;
if (format == 0) {
/*
@@ -1139,9 +1169,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
* i=1: VP-group notification (bits ignored at the end of the
* NVT identifier)
*/
- if (cam_ignore) {
- vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
- }
+ block_mask = xive2_get_vp_block_mask(nvt_blk, crowd);
+ index_mask = xive2_get_vp_index_mask(nvt_idx, cam_ignore);
+ vp_mask = xive2_nvp_cam_line(block_mask, index_mask);
/* For VP-group notifications, threads with LGS=0 are excluded */
@@ -1274,6 +1304,12 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
+ if (xive2_end_is_crowd(&end) & !xive2_end_is_ignore(&end)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "XIVE: invalid END, 'crowd' bit requires 'ignore' bit\n");
+ return;
+ }
+
if (xive2_end_is_enqueue(&end)) {
xive2_end_enqueue(&end, end_data);
/* Enqueuing event data modifies the EQ toggle and index */
@@ -1335,7 +1371,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
}
found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
- xive2_end_is_ignore(&end),
+ xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
priority,
xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
&precluded);
@@ -1372,17 +1408,24 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
} else {
- Xive2Nvgc nvg;
+ Xive2Nvgc nvgc;
uint32_t backlog;
+ bool crowd;
- /* For groups, the per-priority backlog counters are in the NVG */
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
- nvp_blk, nvp_idx);
+ crowd = xive2_end_is_crowd(&end);
+
+ /*
+ * For groups and crowds, the per-priority backlog
+ * counters are stored in the NVG/NVC structures
+ */
+ if (xive2_router_get_nvgc(xrtr, crowd,
+ nvp_blk, nvp_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
+ crowd ? "NVC" : "NVG", nvp_blk, nvp_idx);
return;
}
- if (!xive2_nvgc_is_valid(&nvg)) {
+ if (!xive2_nvgc_is_valid(&nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
nvp_blk, nvp_idx);
return;
@@ -1395,13 +1438,16 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
* set the LSMFB field of the TIMA of relevant threads so
* that they know an interrupt is pending.
*/
- backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
- xive2_nvgc_set_backlog(&nvg, priority, backlog);
- xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+ backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1;
+ xive2_nvgc_set_backlog(&nvgc, priority, backlog);
+ xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc);
if (precluded && backlog == 1) {
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
- xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+ xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx,
+ xive2_end_is_crowd(&end),
+ xive2_end_is_ignore(&end),
+ priority);
if (!xive2_end_is_precluded_escalation(&end)) {
/*
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 9b42f47326..3a86a6edda 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2554,7 +2554,7 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv,
XiveTCTXMatch *match)
{
@@ -2568,8 +2568,8 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
- priority, logic_serv, match);
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
if (count < 0) {
return count;
@@ -2583,7 +2583,7 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv,
XiveTCTXMatch *match)
{
@@ -2597,8 +2597,8 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
- priority, logic_serv, match);
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
if (count < 0) {
return count;
@@ -2612,6 +2612,7 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
static int pnv10_xive_broadcast(XiveFabric *xfb,
uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore,
uint8_t priority)
{
PnvMachineState *pnv = PNV_MACHINE(xfb);
@@ -2622,7 +2623,7 @@ static int pnv10_xive_broadcast(XiveFabric *xfb,
XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
- xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+ xpc->broadcast(xptr, nvt_blk, nvt_idx, crowd, cam_ignore, priority);
}
return 0;
}
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 8aa3ce7449..35a7bf8cce 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4539,7 +4539,7 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
*/
static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
@@ -4547,7 +4547,7 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
priority, logic_serv, match);
if (count < 0) {
return count;
--
2.43.0
* [PATCH 10/14] ppc/xive2: Check crowd backlog when scanning group backlog
@ 2024-10-15 21:13 ` Michael Kowal
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When processing a backlog scan for group interrupts, also take
into account crowd interrupts.
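The block/index adjustment the patch introduces in `xive2_pgofnext()` can be sketched in isolation (`pgofnext_sketch` is an illustrative name; the real function operates on the NVP/NVGC PGofFirst/PGofNext fields):

```c
#include <stdint.h>

/* The 6-bit level splits into a 2-bit crowd level (bits 5:4) and
 * a 4-bit group level (bits 3:0). Each level clears that many low
 * bits of the block/index and sets all but the topmost of them,
 * which yields the identifier of the next larger crowd/group.
 * Crowd level 0b11 means a crowd of 16 blocks (a crowd of 8 is
 * unsupported), hence the bump from 3 to 4 bits. */
static void pgofnext_sketch(uint8_t *blk, uint32_t *idx, uint8_t level)
{
    unsigned crowd_lvl = (level >> 4) & 0x3;
    unsigned group_lvl = level & 0xF;
    uint32_t mask;

    if (crowd_lvl == 3) {
        crowd_lvl = 4;  /* 0b11 encodes a crowd of 16 blocks */
    }
    mask = (1u << crowd_lvl) - 1;
    *blk = (uint8_t)((*blk & ~mask) | (mask >> 1));

    mask = (1u << group_lvl) - 1;
    *idx = (*idx & ~mask) | (mask >> 1);
}
```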
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2_regs.h | 4 ++
hw/intc/xive2.c | 82 +++++++++++++++++++++++++------------
2 files changed, 60 insertions(+), 26 deletions(-)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 66a419441c..89236b9aaf 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -237,4 +237,8 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
+/* split the 6-bit crowd/group level */
+#define NVx_CROWD_LVL(level) ((level >> 4) & 0b11)
+#define NVx_GROUP_LVL(level) (level & 0b1111)
+
#endif /* PPC_XIVE2_REGS_H */
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 1f2837104c..41d689eaab 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -367,6 +367,35 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
+static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx,
+ uint8_t next_level)
+{
+ uint32_t mask, next_idx;
+ uint8_t next_blk;
+
+ /*
+ * Adjust the block and index of a VP for the next group/crowd
+ * size (PGofFirst/PGofNext field in the NVP and NVGC structures).
+ *
+ * The 6-bit group level is split into a 2-bit crowd and 4-bit
+ * group levels. Encoding is similar. However, we don't support
+ * crowd size of 8. So a crowd level of 0b11 is bumped to a crowd
+ * size of 16.
+ */
+ next_blk = NVx_CROWD_LVL(next_level);
+ if (next_blk == 3) {
+ next_blk = 4;
+ }
+ mask = (1 << next_blk) - 1;
+ *nvgc_blk &= ~mask;
+ *nvgc_blk |= mask >> 1;
+
+ next_idx = NVx_GROUP_LVL(next_level);
+ mask = (1 << next_idx) - 1;
+ *nvgc_idx &= ~mask;
+ *nvgc_idx |= mask >> 1;
+}
+
/*
* Scan the group chain and return the highest priority and group
* level of pending group interrupts.
@@ -377,29 +406,28 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
uint8_t *out_level)
{
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvgc_idx, mask;
+ uint32_t nvgc_idx;
uint32_t current_level, count;
- uint8_t prio;
+ uint8_t nvgc_blk, prio;
Xive2Nvgc nvgc;
for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
- current_level = first_group & 0xF;
+ current_level = first_group & 0x3F;
+ nvgc_blk = nvp_blk;
+ nvgc_idx = nvp_idx;
while (current_level) {
- mask = (1 << current_level) - 1;
- nvgc_idx = nvp_idx & ~mask;
- nvgc_idx |= mask >> 1;
- qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
- __func__, prio, nvgc_idx);
-
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level);
+
+ if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(current_level),
+ nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return 0xFF;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return 0xFF;
}
@@ -408,7 +436,7 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
*out_level = current_level;
return prio;
}
- current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
+ current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0x3F;
}
}
return 0xFF;
@@ -420,22 +448,23 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
uint8_t group_level)
{
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvgc_idx, mask, count;
+ uint32_t nvgc_idx, count;
+ uint8_t nvgc_blk;
Xive2Nvgc nvgc;
- group_level &= 0xF;
- mask = (1 << group_level) - 1;
- nvgc_idx = nvp_idx & ~mask;
- nvgc_idx |= mask >> 1;
+ nvgc_blk = nvp_blk;
+ nvgc_idx = nvp_idx;
+ xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level);
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level),
+ nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return;
}
count = xive2_nvgc_get_backlog(&nvgc, group_prio);
@@ -443,7 +472,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
return;
}
xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
- xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
+ xive2_router_write_nvgc(xrtr, NVx_CROWD_LVL(group_level),
+ nvgc_blk, nvgc_idx, &nvgc);
}
/*
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (9 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 10/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 2:31 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 12/14] pnv/xive: Support ESB Escalation Michael Kowal
` (2 subsequent siblings)
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.vnet.ibm.com>
XIVE crowd sizes are encoded into a 2-bit field as follows:
0: 0b00
2: 0b01
4: 0b10
16: 0b11
A crowd size of 8 is not supported.
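The encoding can be sketched in Python (an illustrative model, not the QEMU code): the crowd level is recovered from the trailing 1 bits of nvp_blk, and level 4 (a crowd of 16) is squeezed into the value 0b11. A crowd size of 0 corresponds to the crowd bit being off entirely and never reaches this path:

```python
def crowd_level_encoding(nvp_blk):
    """2-bit encoding of the crowd level implied by nvp_blk.

    The crowd level is the 0-based position of the first 0 bit from
    the right in nvp_blk, plus 1. Levels 1, 2 and 4 (crowds of 2, 4
    and 16) are valid; level 3 (a crowd of 8) is unsupported.
    """
    level = 0
    while nvp_blk & (1 << level):
        level += 1
    level += 1
    # raises KeyError for the unsupported level 3 (crowd of 8)
    return {1: 0b01, 2: 0b10, 4: 0b11}[level]
```

So a block ending in 0b...0111 (crowd of 16) encodes as 0b11, matching the table above.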
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index d5fbd9bbd8..565f0243bd 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1687,7 +1687,26 @@ static uint8_t xive_get_group_level(bool crowd, bool ignore,
uint8_t level = 0;
if (crowd) {
- level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4;
+ /* crowd level is bit position of first 0 from the right in nvp_blk */
+ level = ctz32(~nvp_blk) + 1;
+
+ /*
+ * Supported crowd sizes are 2^1, 2^2, and 2^4. 2^3 is not supported.
+ * HW will encode level 4 as the value 3. See xive2_pgofnext().
+ */
+ switch (level) {
+ case 1:
+ case 2:
+ break;
+ case 4:
+ level = 3;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ /* Crowd level bits reside in upper 2 bits of the 6 bit group level */
+ level <<= 4;
}
if (ignore) {
level |= (ctz32(~nvp_index) + 1) & 0b1111;
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 12/14] pnv/xive: Support ESB Escalation
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (10 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 5:00 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
2024-10-15 21:13 ` [PATCH 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.vnet.ibm.com>
END notification processing has an escalation path. The escalation is
not always an END escalation but can be an ESB escalation.
Also added a check for 'resume' processing, which logs a message stating
that it needs to be implemented. This is not needed at this time, but it
is part of the END notification processing.
This change was taken from a patch provided by Michael Kowal.
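The escalation dispatch added by this patch can be sketched as follows. This is a hedged Python model: the dict fields, callbacks, and the block/index packing of the LISN are illustrative stand-ins for the Xive2End structure and QEMU helpers, not the actual API:

```python
def process_escalation(end, end_notify, esb_notify):
    """Dispatch an escalation as either an END trigger or an ESB store.

    'end' is a dict standing in for the Xive2End structure; the field
    names, callbacks, and LISN packing are illustrative only.
    """
    if end["escalate_end"]:
        # END adaptive escalation: the END trigger becomes an
        # escalation trigger on another END
        end_notify(end["esc_block"], end["esc_index"], end["esc_data"])
    else:
        # ESB escalation: equivalent to a store to offset 0x000 of the
        # escalation ESB page identified by (block, index)
        lisn = (end["esc_block"] << 28) | end["esc_index"]  # packing is illustrative
        esb_notify(lisn, pq_checked=True)
```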
Suggested-by: Michael Kowal <kowal@us.ibm.com>
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 1 +
include/hw/ppc/xive2_regs.h | 13 +++++---
hw/intc/xive2.c | 61 +++++++++++++++++++++++++++++--------
3 files changed, 58 insertions(+), 17 deletions(-)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 37aca4d26a..b17cc21ca6 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -82,6 +82,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
uint32_t xive2_router_get_config(Xive2Router *xrtr);
void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
+void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
/*
* XIVE2 Presenter (POWER10)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 89236b9aaf..42cdc91452 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -40,15 +40,18 @@
typedef struct Xive2Eas {
uint64_t w;
-#define EAS2_VALID PPC_BIT(0)
-#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
-#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
-#define EAS2_MASKED PPC_BIT(32) /* Masked */
-#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
+#define EAS2_VALID PPC_BIT(0)
+#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
+#define EAS2_RESUME PPC_BIT(3) /* END Resume (unimp) */
+#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
+#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
+#define EAS2_MASKED PPC_BIT(32) /* Masked */
+#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
} Xive2Eas;
#define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
#define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
+#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 41d689eaab..f812ba9624 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1511,18 +1511,39 @@ do_escalation:
}
}
- /*
- * The END trigger becomes an Escalation trigger
- */
- xive2_router_end_notify(xrtr,
- xive_get_field32(END2_W4_END_BLOCK, end.w4),
- xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
- xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ if (xive2_end_is_escalate_end(&end)) {
+ /*
+ * Perform END Adaptive escalation processing
+ * The END trigger becomes an Escalation trigger
+ */
+ xive2_router_end_notify(xrtr,
+ xive_get_field32(END2_W4_END_BLOCK, end.w4),
+ xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
+ xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ } /* end END adaptive escalation */
+
+ else {
+ uint32_t lisn; /* Logical Interrupt Source Number */
+
+ /*
+ * Perform ESB escalation processing
+ * E[N] == 1 --> N
+ * Req[Block] <- E[ESB_Block]
+ * Req[Index] <- E[ESB_Index]
+ * Req[Offset] <- 0x000
+ * Execute <ESB Store> Req command
+ */
+ lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
+ xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
+
+ xive2_notify(xrtr, lisn, true /* pq_checked */);
+ }
+
+ return;
}
-void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked)
{
- Xive2Router *xrtr = XIVE2_ROUTER(xn);
uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
Xive2Eas eas;
@@ -1565,13 +1586,29 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
return;
}
+ /* TODO: add support for EAS resume if ever needed */
+ if (xive2_eas_is_resume(&eas)) {
+ qemu_log_mask(LOG_UNIMP,
+ "XIVE: EAS resume processing unimplemented - LISN %x\n",
+ lisn);
+ return;
+ }
+
/*
* The event trigger becomes an END trigger
*/
xive2_router_end_notify(xrtr,
- xive_get_field64(EAS2_END_BLOCK, eas.w),
- xive_get_field64(EAS2_END_INDEX, eas.w),
- xive_get_field64(EAS2_END_DATA, eas.w));
+ xive_get_field64(EAS2_END_BLOCK, eas.w),
+ xive_get_field64(EAS2_END_INDEX, eas.w),
+ xive_get_field64(EAS2_END_DATA, eas.w));
+}
+
+void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xn);
+
+ xive2_notify(xrtr, lisn, pq_checked);
+ return;
}
static Property xive2_router_properties[] = {
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (11 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 12/14] pnv/xive: Support ESB Escalation Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-11-19 5:04 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.ibm.com>
When booting with PHYP, the blk/index for an NVGC was mistakenly
treated as the blk/index for an NVP. Rename nvp_blk/nvp_idx
throughout the code to nvx_blk/nvx_idx to prevent confusion in the
future, and delay loading the NVP until the point where we know that
the block and index actually point to an NVP.
Suggested-by: Michael Kowal <kowal@us.ibm.com>
Fixes: 6d4c4f70262 ("ppc/xive2: Support crowd-matching when looking for target")
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive2.c | 78 ++++++++++++++++++++++++-------------------------
1 file changed, 39 insertions(+), 39 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index f812ba9624..8abccd2f4b 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -226,8 +226,8 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
uint32_t qentries = 1 << (qsize + 10);
- uint32_t nvp_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
- uint32_t nvp_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
+ uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
+ uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
uint8_t priority = xive_get_field32(END2_W7_F0_PRIORITY, end->w7);
uint8_t pq;
@@ -256,7 +256,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
xive2_end_is_firmware2(end) ? 'F' : '-',
xive2_end_is_ignore(end) ? 'i' : '-',
xive2_end_is_crowd(end) ? 'c' : '-',
- priority, nvp_blk, nvp_idx);
+ priority, nvx_blk, nvx_idx);
if (qaddr_base) {
g_string_append_printf(buf, " eq:@%08"PRIx64"% 6d/%5d ^%d",
@@ -401,7 +401,7 @@ static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx,
* level of pending group interrupts.
*/
static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
- uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t nvx_blk, uint32_t nvx_idx,
uint8_t first_group,
uint8_t *out_level)
{
@@ -413,8 +413,8 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
current_level = first_group & 0x3F;
- nvgc_blk = nvp_blk;
- nvgc_idx = nvp_idx;
+ nvgc_blk = nvx_blk;
+ nvgc_idx = nvx_idx;
while (current_level) {
xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level);
@@ -443,7 +443,7 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
}
static void xive2_presenter_backlog_decr(XivePresenter *xptr,
- uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t nvx_blk, uint32_t nvx_idx,
uint8_t group_prio,
uint8_t group_level)
{
@@ -452,8 +452,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
uint8_t nvgc_blk;
Xive2Nvgc nvgc;
- nvgc_blk = nvp_blk;
- nvgc_idx = nvp_idx;
+ nvgc_blk = nvx_blk;
+ nvgc_idx = nvx_idx;
xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level);
if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level),
@@ -1317,9 +1317,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
uint8_t priority;
uint8_t format;
bool found, precluded;
- Xive2Nvp nvp;
- uint8_t nvp_blk;
- uint32_t nvp_idx;
+ uint8_t nvx_blk;
+ uint32_t nvx_idx;
/* END cache lookup */
if (xive2_router_get_end(xrtr, end_blk, end_idx, &end)) {
@@ -1384,23 +1383,10 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
/*
* Follows IVPE notification
*/
- nvp_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
- nvp_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
-
- /* NVP cache lookup */
- if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVP %x/%x\n",
- nvp_blk, nvp_idx);
- return;
- }
-
- if (!xive2_nvp_is_valid(&nvp)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVP %x/%x is invalid\n",
- nvp_blk, nvp_idx);
- return;
- }
+ nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
+ nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
- found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
+ found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx,
xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
priority,
xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
@@ -1428,6 +1414,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
if (!xive2_end_is_ignore(&end)) {
uint8_t ipb;
+ Xive2Nvp nvp;
+
+ /* NVP cache lookup */
+ if (xive2_router_get_nvp(xrtr, nvx_blk, nvx_idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVP %x/%x\n",
+ nvx_blk, nvx_idx);
+ return;
+ }
+
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVP %x/%x is invalid\n",
+ nvx_blk, nvx_idx);
+ return;
+ }
+
/*
* Record the IPB in the associated NVP structure for later
* use. The presenter will resend the interrupt when the vCPU
@@ -1436,7 +1437,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
xive_priority_to_ipb(priority);
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
- xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
+ xive2_router_write_nvp(xrtr, nvx_blk, nvx_idx, &nvp, 2);
} else {
Xive2Nvgc nvgc;
uint32_t backlog;
@@ -1449,32 +1450,31 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
* counters are stored in the NVG/NVC structures
*/
if (xive2_router_get_nvgc(xrtr, crowd,
- nvp_blk, nvp_idx, &nvgc)) {
+ nvx_blk, nvx_idx, &nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
- crowd ? "NVC" : "NVG", nvp_blk, nvp_idx);
+ crowd ? "NVC" : "NVG", nvx_blk, nvx_idx);
return;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
- nvp_blk, nvp_idx);
+ nvx_blk, nvx_idx);
return;
}
/*
* Increment the backlog counter for that priority.
- * For the precluded case, we only call broadcast the
- * first time the counter is incremented. broadcast will
- * set the LSMFB field of the TIMA of relevant threads so
- * that they know an interrupt is pending.
+ * We only call broadcast the first time the counter is
+ * incremented. broadcast will set the LSMFB field of the TIMA of
+ * relevant threads so that they know an interrupt is pending.
*/
backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1;
xive2_nvgc_set_backlog(&nvgc, priority, backlog);
- xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc);
+ xive2_router_write_nvgc(xrtr, crowd, nvx_blk, nvx_idx, &nvgc);
- if (precluded && backlog == 1) {
+ if (backlog == 1) {
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
- xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx,
+ xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx,
xive2_end_is_crowd(&end),
xive2_end_is_ignore(&end),
priority);
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 14/14] qtest/xive: Add test of pool interrupts
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (12 preceding siblings ...)
2024-10-15 21:13 ` [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
@ 2024-10-15 21:13 ` Michael Kowal
2024-10-16 8:33 ` Thomas Huth
13 siblings, 1 reply; 29+ messages in thread
From: Michael Kowal @ 2024-10-15 21:13 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.ibm.com>
Add a new test for pool interrupts.
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
tests/qtest/pnv-xive2-test.c | 77 ++++++++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index a6008bc053..6e7e7f0d9b 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -4,6 +4,7 @@
* - Test 'Pull Thread Context to Odd Thread Reporting Line'
* - Test irq to hardware group
* - Test irq to hardware group going through backlog
+ * - Test irq to pool thread
*
* Copyright (c) 2024, IBM Corporation.
*
@@ -267,6 +268,79 @@ static void test_hw_irq(QTestState *qts)
g_assert_cmphex(cppr, ==, 0xFF);
}
+static void test_pool_irq(QTestState *qts)
+{
+ uint32_t irq = 2;
+ uint32_t irq_data = 0x600d0d06;
+ uint32_t end_index = 5;
+ uint32_t target_pir = 1;
+ uint32_t target_nvp = 0x100 + target_pir;
+ uint8_t priority = 5;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr, ipb;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to pool thread %d\n", irq, target_pir);
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, false /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* check TIMA values in the PHYS ring (shared by POOL ring) */
+ reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x40);
+ g_assert_cmphex(cppr, ==, 0xFF);
+
+ /* check TIMA values in the POOL ring */
+ reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ ipb = (reg32 >> 8) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0);
+ g_assert_cmphex(cppr, ==, 0);
+ g_assert_cmphex(ipb, ==, 0x80 >> priority);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, target_pir, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x40);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* check IPB is cleared in the POOL ring */
+ reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0);
+ ipb = (reg32 >> 8) & 0xFF;
+ g_assert_cmphex(ipb, ==, 0);
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, target_pir, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+}
+
#define XIVE_ODD_CL 0x80
static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
{
@@ -485,6 +559,9 @@ static void test_xive(void)
/* omit reset_state here and use settings from test_hw_irq */
test_pull_thread_ctx_to_odd_thread_cl(qts);
+ reset_state(qts);
+ test_pool_irq(qts);
+
reset_state(qts);
test_hw_group_irq(qts);
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH 14/14] qtest/xive: Add test of pool interrupts
2024-10-15 21:13 ` [PATCH 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
@ 2024-10-16 8:33 ` Thomas Huth
2024-10-16 15:41 ` Mike Kowal
0 siblings, 1 reply; 29+ messages in thread
From: Thomas Huth @ 2024-10-16 8:33 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, lvivier, pbonzini, Fabiano Rosas
On 15/10/2024 23.13, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Added new test for pool interrupts.
>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> tests/qtest/pnv-xive2-test.c | 77 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 77 insertions(+)
>
> diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
> index a6008bc053..6e7e7f0d9b 100644
> --- a/tests/qtest/pnv-xive2-test.c
> +++ b/tests/qtest/pnv-xive2-test.c
> @@ -4,6 +4,7 @@
> * - Test 'Pull Thread Context to Odd Thread Reporting Line'
> * - Test irq to hardware group
> * - Test irq to hardware group going through backlog
> + * - Test irq to pool thread
> *
> * Copyright (c) 2024, IBM Corporation.
> *
> @@ -267,6 +268,79 @@ static void test_hw_irq(QTestState *qts)
> g_assert_cmphex(cppr, ==, 0xFF);
> }
>
> +static void test_pool_irq(QTestState *qts)
> +{
> + uint32_t irq = 2;
> + uint32_t irq_data = 0x600d0d06;
> + uint32_t end_index = 5;
> + uint32_t target_pir = 1;
> + uint32_t target_nvp = 0x100 + target_pir;
> + uint8_t priority = 5;
> + uint32_t reg32;
> + uint16_t reg16;
> + uint8_t pq, nsr, cppr, ipb;
> +
> + printf("# ============================================================\n");
> + printf("# Testing irq %d to pool thread %d\n", irq, target_pir);
Please don't use direct printfs in the qtest framework. If you really have
to log stuff, use g_test_message() instead.
Thomas
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 14/14] qtest/xive: Add test of pool interrupts
2024-10-16 8:33 ` Thomas Huth
@ 2024-10-16 15:41 ` Mike Kowal
0 siblings, 0 replies; 29+ messages in thread
From: Mike Kowal @ 2024-10-16 15:41 UTC (permalink / raw)
To: Thomas Huth, qemu-devel
Cc: qemu-ppc, clg, fbarrat, npiggin, milesg, danielhb413, david,
harshpb, lvivier, pbonzini, Fabiano Rosas
On 10/16/2024 3:33 AM, Thomas Huth wrote:
> On 15/10/2024 23.13, Michael Kowal wrote:
>> From: Glenn Miles <milesg@linux.ibm.com>
>>
>> Added new test for pool interrupts.
>>
>> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
>> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
>> ---
>> tests/qtest/pnv-xive2-test.c | 77 ++++++++++++++++++++++++++++++++++++
>> 1 file changed, 77 insertions(+)
>>
>> diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
>> index a6008bc053..6e7e7f0d9b 100644
>> --- a/tests/qtest/pnv-xive2-test.c
>> +++ b/tests/qtest/pnv-xive2-test.c
>> @@ -4,6 +4,7 @@
>> * - Test 'Pull Thread Context to Odd Thread Reporting Line'
>> * - Test irq to hardware group
>> * - Test irq to hardware group going through backlog
>> + * - Test irq to pool thread
>> *
>> * Copyright (c) 2024, IBM Corporation.
>> *
Just an FYI that I forgot to rebase the Group 3 XIVE qtest changes
into these patch sets... and will be done for version 2.
MAK
>> @@ -267,6 +268,79 @@ static void test_hw_irq(QTestState *qts)
>> g_assert_cmphex(cppr, ==, 0xFF);
>> }
>> +static void test_pool_irq(QTestState *qts)
>> +{
>> + uint32_t irq = 2;
>> + uint32_t irq_data = 0x600d0d06;
>> + uint32_t end_index = 5;
>> + uint32_t target_pir = 1;
>> + uint32_t target_nvp = 0x100 + target_pir;
>> + uint8_t priority = 5;
>> + uint32_t reg32;
>> + uint16_t reg16;
>> + uint8_t pq, nsr, cppr, ipb;
>> +
>> + printf("#
>> ============================================================\n");
>> + printf("# Testing irq %d to pool thread %d\n", irq, target_pir);
>
> Please don't use direct printfs in the qtest framework. If you really
> have to log stuff, use g_test_message() instead.
>
> Thomas
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 02/14] ppc/xive2: Add grouping level to notification
2024-10-15 21:13 ` [PATCH 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
@ 2024-11-19 2:08 ` Nicholas Piggin
2024-11-21 22:31 ` Mike Kowal
0 siblings, 1 reply; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 2:08 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> The NSR has a (so far unused) grouping level field. When a interrupt
> is presented, that field tells the hypervisor or OS if the interrupt
> is for an individual VP or for a VP-group/crowd. This patch reworks
> the presentation API to allow to set/unset the level when
> raising/accepting an interrupt.
>
> It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update() as
> the IPB is only used for VP-specific target, whereas the PIPR always
> needs to be updated.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive.h | 19 +++++++-
> include/hw/ppc/xive_regs.h | 20 +++++++--
> hw/intc/xive.c | 90 +++++++++++++++++++++++---------------
> hw/intc/xive2.c | 18 ++++----
> hw/intc/trace-events | 2 +-
> 5 files changed, 100 insertions(+), 49 deletions(-)
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 31242f0406..27ef6c1a17 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -510,6 +510,21 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
> 0 : 1 << (XIVE_PRIORITY_MAX - priority);
> }
>
> +static inline uint8_t xive_priority_to_pipr(uint8_t priority)
> +{
> + return priority > XIVE_PRIORITY_MAX ? 0xFF : priority;
> +}
> +
> +/*
> + * Convert an Interrupt Pending Buffer (IPB) register to a Pending
> + * Interrupt Priority Register (PIPR), which contains the priority of
> + * the most favored pending notification.
> + */
> +static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
> +{
> + return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
> +}
> +
> /*
> * XIVE Thread Interrupt Management Aera (TIMA)
> *
> @@ -532,8 +547,10 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
> Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
> void xive_tctx_reset(XiveTCTX *tctx);
> void xive_tctx_destroy(XiveTCTX *tctx);
> -void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb);
> +void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> +void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
>
> /*
> * KVM XIVE device helpers
> diff --git a/include/hw/ppc/xive_regs.h b/include/hw/ppc/xive_regs.h
> index 326327fc79..b455728c9c 100644
> --- a/include/hw/ppc/xive_regs.h
> +++ b/include/hw/ppc/xive_regs.h
> @@ -146,7 +146,14 @@
> #define TM_SPC_PULL_PHYS_CTX_OL 0xc38 /* Pull phys ctx to odd cache line */
> /* XXX more... */
>
> -/* NSR fields for the various QW ack types */
> +/*
> + * NSR fields for the various QW ack types
> + *
> + * P10 has an extra bit in QW3 for the group level instead of the
> + * reserved 'i' bit. Since it is not used and we don't support group
> + * interrupts on P9, we use the P10 definition for the group level so
> + * that we can have common macros for the NSR
> + */
> #define TM_QW0_NSR_EB PPC_BIT8(0)
> #define TM_QW1_NSR_EO PPC_BIT8(0)
> #define TM_QW3_NSR_HE PPC_BITMASK8(0, 1)
> @@ -154,8 +161,15 @@
> #define TM_QW3_NSR_HE_POOL 1
> #define TM_QW3_NSR_HE_PHYS 2
> #define TM_QW3_NSR_HE_LSI 3
> -#define TM_QW3_NSR_I PPC_BIT8(2)
> -#define TM_QW3_NSR_GRP_LVL PPC_BIT8(3, 7)
> +#define TM_NSR_GRP_LVL PPC_BITMASK8(2, 7)
> +/*
> + * On P10, the format of the 6-bit group level is: 2 bits for the
> + * crowd size and 4 bits for the group size. Since group/crowd size is
> + * always a power of 2, we encode the log. For example, group_level=4
> + * means crowd size = 0 and group size = 16 (2^4)
> + * Same encoding is used in the NVP and NVGC structures for
> + * PGoFirst and PGoNext fields
> + */
>
> /*
> * EAS (Event Assignment Structure)
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index efcb63e8aa..bacf518fa6 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -27,16 +27,6 @@
> * XIVE Thread Interrupt Management context
> */
>
> -/*
> - * Convert an Interrupt Pending Buffer (IPB) register to a Pending
> - * Interrupt Priority Register (PIPR), which contains the priority of
> - * the most favored pending notification.
> - */
> -static uint8_t ipb_to_pipr(uint8_t ibp)
> -{
> - return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
> -}
> -
> static uint8_t exception_mask(uint8_t ring)
> {
> switch (ring) {
> @@ -87,10 +77,17 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>
> regs[TM_CPPR] = cppr;
>
> - /* Reset the pending buffer bit */
> - alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> + /*
> + * If the interrupt was for a specific VP, reset the pending
> + * buffer bit, otherwise clear the logical server indicator
> + */
> + if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
> + regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
> + } else {
> + alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> + }
>
> - /* Drop Exception bit */
> + /* Drop the exception bit */
> regs[TM_NSR] &= ~mask;
NSR can just be set to 0 directly instead of clearing masks.
>
> trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
> @@ -101,7 +98,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> return ((uint64_t)nsr << 8) | regs[TM_CPPR];
> }
>
> -static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
> +void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> {
> /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> @@ -111,13 +108,13 @@ static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
> if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
> switch (ring) {
> case TM_QW1_OS:
> - regs[TM_NSR] |= TM_QW1_NSR_EO;
> + regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> break;
> case TM_QW2_HV_POOL:
> - alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6);
> + alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
> break;
> case TM_QW3_HV_PHYS:
> - regs[TM_NSR] |= (TM_QW3_NSR_HE_PHYS << 6);
> + regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
> break;
> default:
> g_assert_not_reached();
The big difference between presenting group-directed and VP-directed
interrupts is that a VP-directed interrupt can just be queued up in
the IPB, whereas a group interrupt cannot be, and must be
redistributed before it is precluded by a different interrupt.
So I wonder if we should assert if there is an existing group interrupt
in NSR being overwritten at this point.
Also, should we be masking the group level here? Maybe just assert the
top 2 bits are clear; otherwise something has gone wrong if this is
chopping off bits here.
> @@ -159,7 +156,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * Recompute the PIPR based on local pending interrupts. The PHYS
> * ring must take the minimum of both the PHYS and POOL PIPR values.
> */
> - pipr_min = ipb_to_pipr(regs[TM_IPB]);
> + pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> ring_min = ring;
>
> /* PHYS updates also depend on POOL values */
> @@ -169,7 +166,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> /* POOL values only matter if POOL ctx is valid */
> if (pool_regs[TM_WORD2] & 0x80) {
>
> - uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
> + uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
>
> /*
> * Determine highest priority interrupt and
Moving this function and changing ipb->pipr (before adding group)
could be split into its own patch; since the mechanical changes seem
to be the biggest part, that would make the group change simpler to
see.
> @@ -185,17 +182,27 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> regs[TM_PIPR] = pipr_min;
>
> /* CPPR has changed, check if we need to raise a pending exception */
> - xive_tctx_notify(tctx, ring_min);
> + xive_tctx_notify(tctx, ring_min, 0);
> }
>
> -void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
> -{
> +void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level)
> + {
> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *alt_regs = &tctx->regs[alt_ring];
> uint8_t *regs = &tctx->regs[ring];
>
> - regs[TM_IPB] |= ipb;
> - regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
> - xive_tctx_notify(tctx, ring);
> -}
> + if (group_level == 0) {
> + /* VP-specific */
> + regs[TM_IPB] |= xive_priority_to_ipb(priority);
> + alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + } else {
> + /* VP-group */
> + alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> + }
> + xive_tctx_notify(tctx, ring, group_level);
> + }
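To make the VP-specific vs. VP-group distinction concrete, here is a sketch with made-up types (not the patch's code): a VP-specific interrupt is latched in the IPB and the PIPR recomputed from it, while a group interrupt only sets the PIPR and leaves nothing latched, which is why it must not be silently lost.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the TM_IPB/TM_PIPR register pair */
typedef struct {
    uint8_t ipb;
    uint8_t pipr;
} RingSketch;

static void pipr_update_sketch(RingSketch *r, uint8_t prio, uint8_t group_level)
{
    if (group_level == 0) {
        /* VP-specific: latch in IPB, recompute most favored priority */
        r->ipb |= (uint8_t)(0x80 >> prio);
        r->pipr = 0xff;
        for (uint8_t p = 0; p < 8; p++) {
            if (r->ipb & (0x80 >> p)) {
                r->pipr = p;
                break;
            }
        }
    } else {
        /* VP-group: PIPR tracks the presented priority, nothing latched */
        r->pipr = prio;
    }
}
```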
>
> /*
> * XIVE Thread Interrupt Management Area (TIMA)
> @@ -411,13 +418,13 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> }
>
> /*
> - * Adjust the IPB to allow a CPU to process event queues of other
> + * Adjust the PIPR to allow a CPU to process event queues of other
> * priorities during one physical interrupt cycle.
> */
> static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size)
> {
> - xive_tctx_ipb_update(tctx, TM_QW1_OS, xive_priority_to_ipb(value & 0xff));
> + xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
> }
>
> static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
> @@ -495,16 +502,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> /* Reset the NVT value */
> nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
> xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
> - }
> +
> + uint8_t *regs = &tctx->regs[TM_QW1_OS];
> + regs[TM_IPB] |= ipb;
> +}
> +
Whitespace damage here?
> /*
> - * Always call xive_tctx_ipb_update(). Even if there were no
> + * Always call xive_tctx_pipr_update(). Even if there were no
> * escalation triggered, there could be a pending interrupt which
> * was saved when the context was pulled and that we need to take
> * into account by recalculating the PIPR (which is not
> * saved/restored).
> * It will also raise the External interrupt signal if needed.
> */
> - xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
> + xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
I don't understand what's going on here. Why not ipb_to_pipr(ipb)?
> }
>
> /*
> @@ -841,9 +852,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
> * CPPR is first set.
> */
> tctx->regs[TM_QW1_OS + TM_PIPR] =
> - ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
> + xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
> tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
> - ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
> + xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
> }
>
> static void xive_tctx_realize(DeviceState *dev, Error **errp)
> @@ -1660,6 +1671,12 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
> }
>
> +static uint8_t xive_get_group_level(uint32_t nvp_index)
> +{
> + /* FIXME add crowd encoding */
> + return ctz32(~nvp_index) + 1;
> +}
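The index-to-level relation is worth spelling out. A sketch of the same computation without ctz32, assuming the P10 encoding described in xive_regs.h:

```c
#include <assert.h>
#include <stdint.h>

/* The group level is the position of the first 0 bit from the right
 * in the NVP index, plus 1; the group then spans 2^level VPs. */
static uint8_t group_level_sketch(uint32_t nvp_index)
{
    uint8_t level = 1;
    while (nvp_index & 1) {
        nvp_index >>= 1;
        level++;
    }
    return level;
}
```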
> +
> /*
> * The thread context register words are in big-endian format.
> */
> @@ -1745,6 +1762,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
> XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
> + uint8_t group_level;
> int count;
>
> /*
> @@ -1758,9 +1776,9 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>
> /* handle CPU exception delivery */
> if (count) {
> - trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring);
> - xive_tctx_ipb_update(match.tctx, match.ring,
> - xive_priority_to_ipb(priority));
> + group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
> + trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
> + xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> }
>
> return !!count;
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 4adc3b6950..db372f4b30 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -564,8 +564,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> + uint8_t ipb, backlog_level;
> + uint8_t backlog_prio;
> + uint8_t *regs = &tctx->regs[TM_QW1_OS];
> Xive2Nvp nvp;
> - uint8_t ipb;
Put the uint8_ts all on the same line or keep them all on different
lines?
Thanks,
Nick
>
> /*
> * Grab the associated thread interrupt context registers in the
> @@ -594,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> }
> + regs[TM_IPB] = ipb;
> + backlog_prio = xive_ipb_to_pipr(ipb);
> + backlog_level = 0;
> +
> /*
> - * Always call xive_tctx_ipb_update(). Even if there were no
> - * escalation triggered, there could be a pending interrupt which
> - * was saved when the context was pulled and that we need to take
> - * into account by recalculating the PIPR (which is not
> - * saved/restored).
> - * It will also raise the External interrupt signal if needed.
> + * Compute the PIPR based on the restored state.
> + * It will raise the External interrupt signal if needed.
> */
> - xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
> + xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> }
>
> /*
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index 3dcf147198..7435728c51 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -282,7 +282,7 @@ xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "EN
> xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
> xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
> xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
> -xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring) "found NVT 0x%x/0x%x ring=0x%x"
> +xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
> xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
>
> # pnv_xive.c
* Re: [PATCH 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16
2024-10-15 21:13 ` [PATCH 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
@ 2024-11-19 2:31 ` Nicholas Piggin
0 siblings, 0 replies; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 2:31 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> XIVE crowd sizes are encoded into a 2-bit field as follows:
> 0: 0b00
> 2: 0b01
> 4: 0b10
> 16: 0b11
>
> A crowd size of 8 is not supported.
Squash this into patch 9 as a fix? xive2_pgofnext() is introduced in
patch 10, but that's not enough to worry about changing the comment.
Thanks,
Nick
>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/xive.c | 21 ++++++++++++++++++++-
> 1 file changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index d5fbd9bbd8..565f0243bd 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1687,7 +1687,26 @@ static uint8_t xive_get_group_level(bool crowd, bool ignore,
> uint8_t level = 0;
>
> if (crowd) {
> - level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4;
> + /* crowd level is bit position of first 0 from the right in nvp_blk */
> + level = ctz32(~nvp_blk) + 1;
> +
> + /*
> + * Supported crowd sizes are 2^1, 2^2, and 2^4. 2^3 is not supported.
> + * HW will encode level 4 as the value 3. See xive2_pgofnext().
> + */
> + switch (level) {
> + case 1:
> + case 2:
> + break;
> + case 4:
> + level = 3;
> + break;
> + default:
> + g_assert_not_reached();
> + }
> +
> + /* Crowd level bits reside in upper 2 bits of the 6 bit group level */
> + level <<= 4;
> }
> if (ignore) {
> level |= (ctz32(~nvp_index) + 1) & 0b1111;
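For completeness, the special-casing above can be captured in one helper (a sketch; the name and parameters are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Combine crowd and group levels into the 6-bit NSR group level.
 * Crowd level 4 (crowd size 16) is encoded as 3 by HW, since a
 * crowd size of 8 (level 3) is not supported. */
static uint8_t nsr_level_sketch(uint8_t crowd_level, uint8_t group_level)
{
    if (crowd_level == 4) {
        crowd_level = 3;
    }
    return (uint8_t)(((crowd_level & 0x3) << 4) | (group_level & 0xF));
}
```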
* Re: [PATCH 03/14] ppc/xive2: Support group-matching when looking for target
2024-10-15 21:13 ` [PATCH 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
@ 2024-11-19 3:22 ` Nicholas Piggin
2024-11-21 22:56 ` Mike Kowal
0 siblings, 1 reply; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 3:22 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> If an END has the 'i' bit set (ignore), then it targets a group of
> VPs. The size of the group depends on the VP index of the target
> (first 0 found when looking at the least significant bits of the
> index) so a mask is applied on the VP index of a running thread to
> know if we have a match.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive.h | 5 +++-
> include/hw/ppc/xive2.h | 1 +
> hw/intc/pnv_xive2.c | 33 ++++++++++++++-------
> hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
> hw/intc/xive2.c | 65 ++++++++++++++++++++++++++++++------------
> 5 files changed, 114 insertions(+), 46 deletions(-)
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 27ef6c1a17..a177b75723 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
> typedef struct XiveTCTXMatch {
> XiveTCTX *tctx;
> uint8_t ring;
> + bool precluded;
> } XiveTCTXMatch;
>
> #define TYPE_XIVE_PRESENTER "xive-presenter"
> @@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv);
> + uint32_t logic_serv, bool *precluded);
> +
> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
>
> /*
> * XIVE Fabric (Interface between Interrupt Controller and Machine)
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 5bccf41159..17c31fcb4b 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 834d32287b..3fb466bb2c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> logic_serv);
> }
>
> - /*
> - * Save the context and follow on to catch duplicates,
> - * that we don't support yet.
> - */
> if (ring != -1) {
> - if (match->tctx) {
> + /*
> + * For VP-specific match, finding more than one is a
> + * problem. For group notification, it's possible.
> + */
> + if (!cam_ignore && match->tctx) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
> "thread context NVT %x/%x\n",
> nvt_blk, nvt_idx);
> - return false;
> + /* Should set a FIR if we ever model it */
> + return -1;
> + }
> + /*
> + * For a group notification, we need to know if the
> + * match is precluded first by checking the current
> + * thread priority. If the interrupt can be delivered,
> + * we always notify the first match (for now).
> + */
> + if (cam_ignore &&
> + xive2_tm_irq_precluded(tctx, ring, priority)) {
> + match->precluded = true;
> + } else {
> + if (!match->tctx) {
> + match->ring = ring;
> + match->tctx = tctx;
> + }
> + count++;
The multiple-matches logic is a bit shoehorned into the match code here.
"Return any best match" would be okay, but match->precluded could be set
to true for a non-precluded match if a different match was precluded.
And for VP-directed interrupts, we can get a match from here which
*is* precluded, but has precluded = false!
It's a bit confusing.
typedef struct XiveTCTXMatch {
XiveTCTX *tctx;
uint8_t ring;
bool precluded;
} XiveTCTXMatch;
What if this were changed to make it clearer that it doesn't refer to
a single tctx? Something like:
XiveNVTMatches {
XiveTCTX *best_tctx;
uint8_t best_ring;
int match_count;
int precluded_group_match_count;
}
> }
> -
> - match->ring = ring;
> - match->tctx = tctx;
> - count++;
> }
> }
> }
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index bacf518fa6..8ffcac4f65 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
> }
>
> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
> +{
> + /*
> + * Group size is a power of 2. The position of the first 0
> + * (starting with the least significant bits) in the NVP index
> + * gives the size of the group.
> + */
> + return 1 << (ctz32(~nvp_index) + 1);
> +}
> +
> static uint8_t xive_get_group_level(uint32_t nvp_index)
> {
> /* FIXME add crowd encoding */
> @@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> /*
> * This is our simple Xive Presenter Engine model. It is merged in the
> * Router as it does not require an extra object.
> - *
> - * It receives notification requests sent by the IVRE to find one
> - * matching NVT (or more) dispatched on the processor threads. In case
> - * of a single NVT notification, the process is abbreviated and the
> - * thread is signaled if a match is found. In case of a logical server
> - * notification (bits ignored at the end of the NVT identifier), the
> - * IVPE and IVRE select a winning thread using different filters. This
> - * involves 2 or 3 exchanges on the PowerBus that the model does not
> - * support.
> - *
> - * The parameters represent what is sent on the PowerBus
> */
> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv)
> + uint32_t logic_serv, bool *precluded)
> {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
> - XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
> + XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
> uint8_t group_level;
> int count;
>
> /*
> - * Ask the machine to scan the interrupt controllers for a match
> + * Ask the machine to scan the interrupt controllers for a match.
> + *
> + * For VP-specific notification, we expect at most one match and
> + * one call to the presenters is all we need (abbreviated notify
> + * sequence documented by the architecture).
> + *
> + * For VP-group notification, match_nvt() is the equivalent of the
> + * "histogram" and "poll" commands sent to the power bus to the
> + * presenters. 'count' could be more than one, but we always
> + * select the first match for now. 'precluded' tells if (at least)
> + * one thread matches but can't take the interrupt now because
> + * it's running at a more favored priority. We return the
> + * information to the router so that it can take appropriate
> + * actions (backlog, escalation, broadcast, etc...)
> + *
> + * If we were to implement a better way of dispatching the
> + * interrupt in case of multiple matches (instead of the first
> + * match), we would need a heuristic to elect a thread (for
> + * example, the hardware keeps track of an 'age' in the TIMA) and
> + * a new command to the presenters (the equivalent of the "assign"
> + * power bus command in the documented full notify sequence.
> */
> count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
> priority, logic_serv, &match);
> @@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
> xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> + } else {
> + *precluded = match.precluded;
> }
>
> return !!count;
> @@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> uint8_t nvt_blk;
> uint32_t nvt_idx;
> XiveNVT nvt;
> - bool found;
> + bool found, precluded;
>
> uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
> uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
> @@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
> xive_get_field32(END_W7_F0_IGNORE, end.w7),
> priority,
> - xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
> -
> + xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> + &precluded);
> + /* we don't support VP-group notification on P9, so precluded is not used */
> /* TODO: Auto EOI. */
>
> if (found) {
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index db372f4b30..2cb03c758e 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
> return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
> }
>
> +static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
> + uint32_t vp_mask)
> +{
> + return (cam1 & vp_mask) == (cam2 & vp_mask);
> +}
> +
> /*
> * The thread context register words are in big-endian format.
> */
> @@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
> uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
>
> - /*
> - * TODO (PowerNV): ignore mode. The low order bits of the NVT
> - * identifier are ignored in the "CAM" match.
> - */
> + uint32_t vp_mask = 0xFFFFFFFF;
>
> if (format == 0) {
> - if (cam_ignore == true) {
> - /*
> - * F=0 & i=1: Logical server notification (bits ignored at
> - * the end of the NVT identifier)
> - */
> - qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
> - nvt_blk, nvt_idx);
> - return -1;
> + /*
> + * i=0: Specific NVT notification
> + * i=1: VP-group notification (bits ignored at the end of the
> + * NVT identifier)
> + */
> + if (cam_ignore) {
> + vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
> }
>
> - /* F=0 & i=0: Specific NVT notification */
> + /* For VP-group notifications, threads with LGS=0 are excluded */
>
> /* PHYS ring */
> if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
> - cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
> + !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
> + xive2_vp_match_mask(cam,
> + xive2_tctx_hw_cam_line(xptr, tctx),
> + vp_mask)) {
> return TM_QW3_HV_PHYS;
> }
>
> /* HV POOL ring */
> if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
> - cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
> + !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
> + xive2_vp_match_mask(cam,
> + xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
> + vp_mask)) {
> return TM_QW2_HV_POOL;
> }
>
> /* OS ring */
> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
> - cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
> + !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
> + xive2_vp_match_mask(cam,
> + xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
> + vp_mask)) {
> return TM_QW1_OS;
> }
> } else {
> /* F=1 : User level Event-Based Branch (EBB) notification */
>
> + /* FIXME: what if cam_ignore and LGS = 0 ? */
> /* USER ring */
> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
> (cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
> @@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> return -1;
> }
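The masked match used in the three rings above reduces to a simple comparison. A sketch (helper and parameters are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* With cam_ignore set, the low bits spanning the group are ignored
 * on both sides of the CAM comparison. */
static bool group_cam_match(uint32_t cam, uint32_t hw_cam, uint32_t group_size)
{
    uint32_t vp_mask = ~(group_size - 1);
    return (cam & vp_mask) == (hw_cam & vp_mask);
}
```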
>
> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> +{
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /*
> + * The xive2_presenter_tctx_match() above tells if there's a match
> + * but for VP-group notification, we still need to look at the
> + * priority to know if the thread can take the interrupt now or if
> + * it is precluded.
> + */
> + if (priority < regs[TM_CPPR]) {
Should this also test PIPR?
I'm not sure about the exact relationship between CPPR and PIPR. Does
hardware set PIPR for pending IPB interrupts even if they are not
more favored than CPPR? Or does it always reflect the presented
interrupt?
Thanks,
Nick
* Re: [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context
2024-10-15 21:13 ` [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
@ 2024-11-19 4:20 ` Nicholas Piggin
0 siblings, 0 replies; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 4:20 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> When pushing an OS context, we were already checking if there was a
> pending interrupt in the IPB and sending a notification if needed. We
> also need to check if there is a pending group interrupt stored in the
> NVG table. To avoid useless backlog scans, we only scan if the NVP
> belongs to a group.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/xive2.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 97 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index a6dc6d553f..7130892482 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -279,6 +279,85 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
> }
>
> +/*
> + * Scan the group chain and return the highest priority and group
> + * level of pending group interrupts.
> + */
> +static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
> + uint8_t nvp_blk, uint32_t nvp_idx,
> + uint8_t first_group,
> + uint8_t *out_level)
Could we call that xive2_presenter_backlog_scan(), which I think
matches how the specification refers to it?
Thanks,
Nick
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint32_t nvgc_idx, mask;
> + uint32_t current_level, count;
> + uint8_t prio;
> + Xive2Nvgc nvgc;
> +
> + for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
> + current_level = first_group & 0xF;
> +
> + while (current_level) {
> + mask = (1 << current_level) - 1;
> + nvgc_idx = nvp_idx & ~mask;
> + nvgc_idx |= mask >> 1;
> + qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
> + __func__, prio, nvgc_idx);
> +
> + if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
> + nvp_blk, nvgc_idx);
> + return 0xFF;
> + }
> + if (!xive2_nvgc_is_valid(&nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
> + nvp_blk, nvgc_idx);
> + return 0xFF;
> + }
> +
> + count = xive2_nvgc_get_backlog(&nvgc, prio);
> + if (count) {
> + *out_level = current_level;
> + return prio;
> + }
> + current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
> + }
> + }
> + return 0xFF;
> +}
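The nvgc_idx arithmetic in the loop above is dense. A sketch of the same derivation (name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Derive the NVGC index for a given chain level: clear the low
 * 'level' bits of the NVP index, then set all of them but the top
 * one, which addresses the group entry covering that index. */
static uint32_t nvgc_index_sketch(uint32_t nvp_idx, uint8_t level)
{
    uint32_t mask = (1u << level) - 1;
    return (nvp_idx & ~mask) | (mask >> 1);
}
```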
> +
> +static void xive2_presenter_backlog_decr(XivePresenter *xptr,
> + uint8_t nvp_blk, uint32_t nvp_idx,
> + uint8_t group_prio,
> + uint8_t group_level)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint32_t nvgc_idx, mask, count;
> + Xive2Nvgc nvgc;
> +
> + group_level &= 0xF;
> + mask = (1 << group_level) - 1;
> + nvgc_idx = nvp_idx & ~mask;
> + nvgc_idx |= mask >> 1;
> +
> + if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
> + nvp_blk, nvgc_idx);
> + return;
> + }
> + if (!xive2_nvgc_is_valid(&nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
> + nvp_blk, nvgc_idx);
> + return;
> + }
> + count = xive2_nvgc_get_backlog(&nvgc, group_prio);
> + if (!count) {
> + return;
> + }
> + xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
> + xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
> +}
> +
> /*
> * XIVE Thread Interrupt Management Area (TIMA) - Gen2 mode
> *
> @@ -588,8 +667,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - uint8_t ipb, backlog_level;
> - uint8_t backlog_prio;
> + XivePresenter *xptr = XIVE_PRESENTER(xrtr);
> + uint8_t ipb, backlog_level, group_level, first_group;
> + uint8_t backlog_prio, group_prio;
> uint8_t *regs = &tctx->regs[TM_QW1_OS];
> Xive2Nvp nvp;
>
> @@ -624,8 +704,22 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> backlog_prio = xive_ipb_to_pipr(ipb);
> backlog_level = 0;
>
> + first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
> + if (first_group && regs[TM_LSMFB] < backlog_prio) {
> + group_prio = xive2_presenter_backlog_check(xptr, nvp_blk, nvp_idx,
> + first_group, &group_level);
> + regs[TM_LSMFB] = group_prio;
> + if (regs[TM_LGS] && group_prio < backlog_prio) {
> + /* VP can take a group interrupt */
> + xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
> + group_prio, group_level);
> + backlog_prio = group_prio;
> + backlog_level = group_level;
> + }
> + }
> +
> /*
> - * Compute the PIPR based on the restored state.
> + * Compute the PIPR based on the restored state.
> * It will raise the External interrupt signal if needed.
> */
> xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
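The gating condition for the scan can be read as follows (a sketch, treating LSMFB as the HW hint of the most favored backlogged group priority):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Only scan the group backlog when the NVP belongs to a group
 * (PGoFirst != 0) and the LSMFB hint is more favored (lower) than
 * the priority derived from the local IPB backlog. */
static bool should_scan_backlog(uint8_t first_group, uint8_t lsmfb,
                                uint8_t ipb_backlog_prio)
{
    return first_group != 0 && lsmfb < ipb_backlog_prio;
}
```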
* Re: [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR
2024-10-15 21:13 ` [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
@ 2024-11-19 4:34 ` Nicholas Piggin
2024-11-21 23:12 ` Mike Kowal
0 siblings, 1 reply; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 4:34 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> When the hypervisor or OS pushes a new value to the CPPR, if the LSMFB
> value is lower than the new CPPR value, there could be a pending group
> interrupt in the backlog, so it needs to be scanned.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive2.h | 4 +
> hw/intc/xive.c | 4 +-
> hw/intc/xive2.c | 173 ++++++++++++++++++++++++++++++++++++++++-
> 3 files changed, 177 insertions(+), 4 deletions(-)
>
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index d88db05687..e61b978f37 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -115,6 +115,10 @@ typedef struct Xive2EndSource {
> * XIVE2 Thread Interrupt Management Area (POWER10)
> */
>
> +void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> +void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 8ffcac4f65..2aa6e1fecc 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -603,7 +603,7 @@ static const XiveTmOp xive2_tm_operations[] = {
> * MMIOs below 2K : raw values and special operations without side
> * effects
> */
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
> NULL },
> { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
> NULL },
> @@ -611,7 +611,7 @@ static const XiveTmOp xive2_tm_operations[] = {
> NULL },
> { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
> NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
> NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
> NULL },
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 7130892482..0c53f71879 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -18,6 +18,7 @@
> #include "hw/ppc/xive.h"
> #include "hw/ppc/xive2.h"
> #include "hw/ppc/xive2_regs.h"
> +#include "trace.h"
>
> uint32_t xive2_router_get_config(Xive2Router *xrtr)
> {
> @@ -764,6 +765,172 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
> }
>
> +static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> + uint32_t *nvp_blk, uint32_t *nvp_idx)
> +{
> + uint32_t w2, cam;
> +
> + w2 = xive_tctx_word2(&tctx->regs[ring]);
> + switch (ring) {
> + case TM_QW1_OS:
> + if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
> + return -1;
> + }
> + cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
> + break;
> + case TM_QW2_HV_POOL:
> + if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
> + return -1;
> + }
> + cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
> + break;
> + case TM_QW3_HV_PHYS:
> + if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
> + return -1;
> + }
> + cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
> + break;
> + default:
> + return -1;
> + }
> + *nvp_blk = xive2_nvp_blk(cam);
> + *nvp_idx = xive2_nvp_idx(cam);
> + return 0;
> +}
> +
> +static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
Some of the xive1 code kind of has placeholders for group code or routes
group stuff through to xive2 code, so I wonder if this duplication is
really necessary or whether it can just be added to xive1?
I kind of hoped we could unify xive1 and 2 more, but maybe it's too late
without a lot more work, and all new development is going to go into
xive2...
Okay for now I guess, we could look at unification one day maybe.
> +{
> + uint8_t *regs = &tctx->regs[ring];
> + Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> + uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
> + uint8_t pipr_min, lsmfb_min, ring_min;
> + bool group_enabled;
> + uint32_t nvp_blk, nvp_idx;
> + Xive2Nvp nvp;
> + int rc;
> +
> + trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> + regs[TM_IPB], regs[TM_PIPR],
> + cppr, regs[TM_NSR]);
> +
> + if (cppr > XIVE_PRIORITY_MAX) {
> + cppr = 0xff;
> + }
> +
> + old_cppr = regs[TM_CPPR];
> + regs[TM_CPPR] = cppr;
If CPPR remains the same, can return early.
If CPPR is being increased, this scanning is not required (a
redistribution of group interrupt if it became precluded is
required as noted in the TODO, but no scanning should be needed
so that TODO should be moved up here).
If there is an interrupt already presented and CPPR is being
lowered, nothing needs to be done either (because the presented
interrupt should already be the most favoured).
> +
> + /*
> + * Recompute the PIPR based on local pending interrupts. It will
> + * be adjusted below if needed in case of pending group interrupts.
> + */
> + pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> + group_enabled = !!regs[TM_LGS];
> + lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
> + ring_min = ring;
> +
> + /* PHYS updates also depend on POOL values */
> + if (ring == TM_QW3_HV_PHYS) {
> + uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
> +
> + /* POOL values only matter if POOL ctx is valid */
> + if (pregs[TM_WORD2] & 0x80) {
> +
> + uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
> + uint8_t pool_lsmfb = pregs[TM_LSMFB];
> +
> + /*
> + * Determine highest priority interrupt and
> + * remember which ring has it.
> + */
> + if (pool_pipr < pipr_min) {
> + pipr_min = pool_pipr;
> + if (pool_pipr < lsmfb_min) {
> + ring_min = TM_QW2_HV_POOL;
> + }
> + }
> +
> + /* Values needed for group priority calculation */
> + if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
> + group_enabled = true;
> + lsmfb_min = pool_lsmfb;
> + if (lsmfb_min < pipr_min) {
> + ring_min = TM_QW2_HV_POOL;
> + }
> + }
> + }
> + }
> + regs[TM_PIPR] = pipr_min;
> +
> + rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> + if (rc) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
> + return;
> + }
> +
> + if (cppr < old_cppr) {
> + /*
> + * FIXME: check if there's a group interrupt being presented
> + * and if the new cppr prevents it. If so, then the group
> + * interrupt needs to be re-added to the backlog and
> + * re-triggered (see re-trigger END info in the NVGC
> + * structure)
> + */
> + }
> +
> + if (group_enabled &&
> + lsmfb_min < cppr &&
> + lsmfb_min < regs[TM_PIPR]) {
> + /*
> + * Thread has seen a group interrupt with a higher priority
> + * than the new cppr or pending local interrupt. Check the
> + * backlog
> + */
> + if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + if (!xive2_nvp_is_valid(&nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
> + if (!first_group) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + backlog_prio = xive2_presenter_backlog_check(tctx->xptr,
> + nvp_blk, nvp_idx,
> + first_group, &group_level);
> + tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
LSMFB may not be the same as lsmfb_min, so you can't present
unconditionally.
I think after updating, it should test
if (lsmfb_min != backlog_prio) {
goto scan_again;
}
Where scan_again: goes back to recomputing min priorities and scanning.
Thanks,
Nick
> + if (backlog_prio != 0xFF) {
> + xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> + backlog_prio, group_level);
> + regs[TM_PIPR] = backlog_prio;
> + }
> + }
> + /* CPPR has changed, check if we need to raise a pending exception */
> + xive_tctx_notify(tctx, ring_min, group_level);
> +}
> +
> +void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tctx_set_cppr(tctx, TM_QW3_HV_PHYS, value & 0xff);
> +}
> +
> +void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
> +}
> +
> static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
> {
> uint8_t *regs = &tctx->regs[ring];
> @@ -934,7 +1101,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>
> bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> {
> - uint8_t *regs = &tctx->regs[ring];
> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *alt_regs = &tctx->regs[alt_ring];
>
> /*
> * The xive2_presenter_tctx_match() above tells if there's a match
> @@ -942,7 +1111,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> * priority to know if the thread can take the interrupt now or if
> * it is precluded.
> */
> - if (priority < regs[TM_CPPR]) {
> + if (priority < alt_regs[TM_CPPR]) {
> return false;
> }
> return true;
These last two hunks are logically a separate patch for enabling group
support for POOL?
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 12/14] pnv/xive: Support ESB Escalation
2024-10-15 21:13 ` [PATCH 12/14] pnv/xive: Support ESB Escalation Michael Kowal
@ 2024-11-19 5:00 ` Nicholas Piggin
2024-11-21 23:22 ` Mike Kowal
0 siblings, 1 reply; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 5:00 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> END notification processing has an escalation path. The escalation is
> not always an END escalation but can be an ESB escalation.
>
> Also added a check for 'resume' processing, which logs a message stating
> that it needs to be implemented. This is not needed at this time but is
> part of the END notification processing.
This patch is orthogonal to group support, right?
>
> This change was taken from a patch provided by Michael Kowal
>
> Suggested-by: Michael Kowal <kowal@us.ibm.com>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive2.h | 1 +
> include/hw/ppc/xive2_regs.h | 13 +++++---
> hw/intc/xive2.c | 61 +++++++++++++++++++++++++++++--------
> 3 files changed, 58 insertions(+), 17 deletions(-)
>
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 37aca4d26a..b17cc21ca6 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -82,6 +82,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
> uint32_t xive2_router_get_config(Xive2Router *xrtr);
>
> void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
>
> /*
> * XIVE2 Presenter (POWER10)
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 89236b9aaf..42cdc91452 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -40,15 +40,18 @@
>
> typedef struct Xive2Eas {
> uint64_t w;
> -#define EAS2_VALID PPC_BIT(0)
> -#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> -#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> -#define EAS2_MASKED PPC_BIT(32) /* Masked */
> -#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> +#define EAS2_VALID PPC_BIT(0)
> +#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
> +#define EAS2_RESUME PPC_BIT(3) /* END Resume (unimp) */
> +#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> +#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> +#define EAS2_MASKED PPC_BIT(32) /* Masked */
> +#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> } Xive2Eas;
>
> #define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
> #define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
> +#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
>
> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 41d689eaab..f812ba9624 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1511,18 +1511,39 @@ do_escalation:
> }
> }
>
> - /*
> - * The END trigger becomes an Escalation trigger
> - */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + if (xive2_end_is_escalate_end(&end)) {
> + /*
> + * Perform END Adaptive escalation processing
> + * The END trigger becomes an Escalation trigger
> + */
> + xive2_router_end_notify(xrtr,
> + xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + } /* end END adaptive escalation */
> +
> + else {
> + uint32_t lisn; /* Logical Interrupt Source Number */
> +
> + /*
> + * Perform ESB escalation processing
> + * E[N] == 1 --> N
> + * Req[Block] <- E[ESB_Block]
> + * Req[Index] <- E[ESB_Index]
> + * Req[Offset] <- 0x000
> + * Execute <ESB Store> Req command
> + */
> + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
> +
> + xive2_notify(xrtr, lisn, true /* pq_checked */);
> + }
> +
> + return;
Don't need returns at the end of void functions.
> }
>
> -void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked)
Can it be static?
Thanks,
Nick
> {
> - Xive2Router *xrtr = XIVE2_ROUTER(xn);
> uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
> uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
> Xive2Eas eas;
> @@ -1565,13 +1586,29 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> return;
> }
>
> + /* TODO: add support for EAS resume if ever needed */
> + if (xive2_eas_is_resume(&eas)) {
> + qemu_log_mask(LOG_UNIMP,
> + "XIVE: EAS resume processing unimplemented - LISN %x\n",
> + lisn);
> + return;
> + }
> +
> /*
> * The event trigger becomes an END trigger
> */
> xive2_router_end_notify(xrtr,
> - xive_get_field64(EAS2_END_BLOCK, eas.w),
> - xive_get_field64(EAS2_END_INDEX, eas.w),
> - xive_get_field64(EAS2_END_DATA, eas.w));
> + xive_get_field64(EAS2_END_BLOCK, eas.w),
> + xive_get_field64(EAS2_END_INDEX, eas.w),
> + xive_get_field64(EAS2_END_DATA, eas.w));
> +}
> +
> +void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xn);
> +
> + xive2_notify(xrtr, lisn, pq_checked);
> + return;
> }
>
> static Property xive2_router_properties[] = {
* Re: [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP
2024-10-15 21:13 ` [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
@ 2024-11-19 5:04 ` Nicholas Piggin
0 siblings, 0 replies; 29+ messages in thread
From: Nicholas Piggin @ 2024-11-19 5:04 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> When booting with PHYP, the blk/index for an NVGC was being
> mistakenly treated as the blk/index for an NVP. Renamed
> nvp_blk/nvp_idx throughout the code to nvx_blk/nvx_idx to prevent
> confusion in the future. We now also delay loading the NVP until
> the point where we know that the block and index actually point to
> an NVP.
>
> Suggested-by: Michael Kowal <kowal@us.ibm.com>
> Fixes: 6d4c4f70262 ("ppc/xive2: Support crowd-matching when looking for target")
Mechanical renaming should be moved to the start of the series,
and the fix should be merged into patch 3. Fixes: tags should only
reference commits that exist in the upstream repo.
Thanks,
Nick
* Re: [PATCH 02/14] ppc/xive2: Add grouping level to notification
2024-11-19 2:08 ` Nicholas Piggin
@ 2024-11-21 22:31 ` Mike Kowal
0 siblings, 0 replies; 29+ messages in thread
From: Mike Kowal @ 2024-11-21 22:31 UTC (permalink / raw)
To: Nicholas Piggin, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On 11/18/2024 8:08 PM, Nicholas Piggin wrote:
> On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
>> From: Frederic Barrat <fbarrat@linux.ibm.com>
>>
>> The NSR has a (so far unused) grouping level field. When an interrupt
>> is presented, that field tells the hypervisor or OS if the interrupt
>> is for an individual VP or for a VP-group/crowd. This patch reworks
>> the presentation API to allow setting/unsetting the level when
>> raising/accepting an interrupt.
>>
>> It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update() as
>> the IPB is only used for VP-specific target, whereas the PIPR always
>> needs to be updated.
>>
>> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
>> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
>> ---
>> include/hw/ppc/xive.h | 19 +++++++-
>> include/hw/ppc/xive_regs.h | 20 +++++++--
>> hw/intc/xive.c | 90 +++++++++++++++++++++++---------------
>> hw/intc/xive2.c | 18 ++++----
>> hw/intc/trace-events | 2 +-
>> 5 files changed, 100 insertions(+), 49 deletions(-)
>>
>> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
>> index 31242f0406..27ef6c1a17 100644
>> --- a/include/hw/ppc/xive.h
>> +++ b/include/hw/ppc/xive.h
>> @@ -510,6 +510,21 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
>> 0 : 1 << (XIVE_PRIORITY_MAX - priority);
>> }
>>
>> +static inline uint8_t xive_priority_to_pipr(uint8_t priority)
>> +{
>> + return priority > XIVE_PRIORITY_MAX ? 0xFF : priority;
>> +}
>> +
>> +/*
>> + * Convert an Interrupt Pending Buffer (IPB) register to a Pending
>> + * Interrupt Priority Register (PIPR), which contains the priority of
>> + * the most favored pending notification.
>> + */
>> +static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
>> +{
>> + return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
>> +}
>> +
>> /*
>> * XIVE Thread Interrupt Management Aera (TIMA)
>> *
>> @@ -532,8 +547,10 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
>> Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
>> void xive_tctx_reset(XiveTCTX *tctx);
>> void xive_tctx_destroy(XiveTCTX *tctx);
>> -void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb);
>> +void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> + uint8_t group_level);
>> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
>> +void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
>>
>> /*
>> * KVM XIVE device helpers
>> diff --git a/include/hw/ppc/xive_regs.h b/include/hw/ppc/xive_regs.h
>> index 326327fc79..b455728c9c 100644
>> --- a/include/hw/ppc/xive_regs.h
>> +++ b/include/hw/ppc/xive_regs.h
>> @@ -146,7 +146,14 @@
>> #define TM_SPC_PULL_PHYS_CTX_OL 0xc38 /* Pull phys ctx to odd cache line */
>> /* XXX more... */
>>
>> -/* NSR fields for the various QW ack types */
>> +/*
>> + * NSR fields for the various QW ack types
>> + *
>> + * P10 has an extra bit in QW3 for the group level instead of the
>> + * reserved 'i' bit. Since it is not used and we don't support group
>> + * interrupts on P9, we use the P10 definition for the group level so
>> + * that we can have common macros for the NSR
>> + */
>> #define TM_QW0_NSR_EB PPC_BIT8(0)
>> #define TM_QW1_NSR_EO PPC_BIT8(0)
>> #define TM_QW3_NSR_HE PPC_BITMASK8(0, 1)
>> @@ -154,8 +161,15 @@
>> #define TM_QW3_NSR_HE_POOL 1
>> #define TM_QW3_NSR_HE_PHYS 2
>> #define TM_QW3_NSR_HE_LSI 3
>> -#define TM_QW3_NSR_I PPC_BIT8(2)
>> -#define TM_QW3_NSR_GRP_LVL PPC_BIT8(3, 7)
>> +#define TM_NSR_GRP_LVL PPC_BITMASK8(2, 7)
>> +/*
>> + * On P10, the format of the 6-bit group level is: 2 bits for the
>> + * crowd size and 4 bits for the group size. Since group/crowd size is
>> + * always a power of 2, we encode the log. For example, group_level=4
>> + * means crowd size = 0 and group size = 16 (2^4)
>> + * Same encoding is used in the NVP and NVGC structures for
>> + * PGoFirst and PGoNext fields
>> + */
>>
>> /*
>> * EAS (Event Assignment Structure)
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index efcb63e8aa..bacf518fa6 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -27,16 +27,6 @@
>> * XIVE Thread Interrupt Management context
>> */
>>
>> -/*
>> - * Convert an Interrupt Pending Buffer (IPB) register to a Pending
>> - * Interrupt Priority Register (PIPR), which contains the priority of
>> - * the most favored pending notification.
>> - */
>> -static uint8_t ipb_to_pipr(uint8_t ibp)
>> -{
>> - return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
>> -}
>> -
>> static uint8_t exception_mask(uint8_t ring)
>> {
>> switch (ring) {
>> @@ -87,10 +77,17 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>>
>> regs[TM_CPPR] = cppr;
>>
>> - /* Reset the pending buffer bit */
>> - alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
>> + /*
>> + * If the interrupt was for a specific VP, reset the pending
>> + * buffer bit, otherwise clear the logical server indicator
>> + */
>> + if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
>> + regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
>> + } else {
>> + alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
>> + }
>>
>> - /* Drop Exception bit */
>> + /* Drop the exception bit */
>> regs[TM_NSR] &= ~mask;
> NSR can just be set to 0 directly instead of clearing masks.
There are other fields in the NSR so maybe that is why he started this
way. But yes, whenever an exception is inactive, all of the other
fields should be cleared too.
>
>>
>> trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
>> @@ -101,7 +98,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>> return ((uint64_t)nsr << 8) | regs[TM_CPPR];
>> }
>>
>> -static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
>> +void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
>> {
>> /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
>> uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
>> @@ -111,13 +108,13 @@ static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
>> if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
>> switch (ring) {
>> case TM_QW1_OS:
>> - regs[TM_NSR] |= TM_QW1_NSR_EO;
>> + regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
>> break;
>> case TM_QW2_HV_POOL:
>> - alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6);
>> + alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
>> break;
>> case TM_QW3_HV_PHYS:
>> - regs[TM_NSR] |= (TM_QW3_NSR_HE_PHYS << 6);
>> + regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
>> break;
>> default:
>> g_assert_not_reached();
>
> The big difference between presenting group and VP directed is that
> VP can just be queued up in IPB, whereas group can not be, and must
> be redistributed before they are precluded by a different interrupt.
> So I wonder if we should assert if there is an existing group interrupt
> in NSR being overwritten at this point.
If we do the check below, asserting that the exception bit(s) are 0,
then we will know the group/crowd level is 0 and has not been set.
>
> Also should we be masking the group level here? Maybe just assert the
> top 2 bits are clear, otherwise something has gone wrong if this is
> chopping off bits here.
Do you mean masking off the group/crowd such that we ensure the
exception bit(s) are 0?
>
>> @@ -159,7 +156,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>> * Recompute the PIPR based on local pending interrupts. The PHYS
>> * ring must take the minimum of both the PHYS and POOL PIPR values.
>> */
>> - pipr_min = ipb_to_pipr(regs[TM_IPB]);
>> + pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
>> ring_min = ring;
>>
>> /* PHYS updates also depend on POOL values */
>> @@ -169,7 +166,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>> /* POOL values only matter if POOL ctx is valid */
>> if (pool_regs[TM_WORD2] & 0x80) {
>>
>> - uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
>> + uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
>>
>> /*
>> * Determine highest priority interrupt and
> Moving this function and changing ipb->pipr (before adding group) could
> be split into its own patch; since the mechanical changes seem to be
> the biggest part, that would make the group change simpler to see.
I can do that.
>
>> @@ -185,17 +182,27 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>> regs[TM_PIPR] = pipr_min;
>>
>> /* CPPR has changed, check if we need to raise a pending exception */
>> - xive_tctx_notify(tctx, ring_min);
>> + xive_tctx_notify(tctx, ring_min, 0);
>> }
>>
>> -void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
>> -{
>> +void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> + uint8_t group_level)
>> + {
>> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
>> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
>> + uint8_t *alt_regs = &tctx->regs[alt_ring];
>> uint8_t *regs = &tctx->regs[ring];
>>
>> - regs[TM_IPB] |= ipb;
>> - regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
>> - xive_tctx_notify(tctx, ring);
>> -}
>> + if (group_level == 0) {
>> + /* VP-specific */
>> + regs[TM_IPB] |= xive_priority_to_ipb(priority);
>> + alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
>> + } else {
>> + /* VP-group */
>> + alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
>> + }
>> + xive_tctx_notify(tctx, ring, group_level);
>> + }
>>
>> /*
>> * XIVE Thread Interrupt Management Area (TIMA)
>> @@ -411,13 +418,13 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
>> }
>>
>> /*
>> - * Adjust the IPB to allow a CPU to process event queues of other
>> + * Adjust the PIPR to allow a CPU to process event queues of other
>> * priorities during one physical interrupt cycle.
>> */
>> static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
>> hwaddr offset, uint64_t value, unsigned size)
>> {
>> - xive_tctx_ipb_update(tctx, TM_QW1_OS, xive_priority_to_ipb(value & 0xff));
>> + xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
>> }
>>
>> static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
>> @@ -495,16 +502,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
>> /* Reset the NVT value */
>> nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
>> xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
>> - }
>> +
>> + uint8_t *regs = &tctx->regs[TM_QW1_OS];
>> + regs[TM_IPB] |= ipb;
>> +}
>> +
> Whitespace damage here?
>
>> /*
>> - * Always call xive_tctx_ipb_update(). Even if there were no
>> + * Always call xive_tctx_pipr_update(). Even if there were no
>> * escalation triggered, there could be a pending interrupt which
>> * was saved when the context was pulled and that we need to take
>> * into account by recalculating the PIPR (which is not
>> * saved/restored).
>> * It will also raise the External interrupt signal if needed.
>> */
>> - xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
>> + xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
> I don't understand what's going on here. Why not ipb_to_pipr(ipb)?
>
>> }
>>
>> /*
>> @@ -841,9 +852,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
>> * CPPR is first set.
>> */
>> tctx->regs[TM_QW1_OS + TM_PIPR] =
>> - ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
>> + xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
>> tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
>> - ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
>> + xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
>> }
>>
>> static void xive_tctx_realize(DeviceState *dev, Error **errp)
>> @@ -1660,6 +1671,12 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
>> return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
>> }
>>
>> +static uint8_t xive_get_group_level(uint32_t nvp_index)
>> +{
>> + /* FIXME add crowd encoding */
>> + return ctz32(~nvp_index) + 1;
>> +}
>> +
>> /*
>> * The thread context register words are in big-endian format.
>> */
>> @@ -1745,6 +1762,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>> {
>> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
>> XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
>> + uint8_t group_level;
>> int count;
>>
>> /*
>> @@ -1758,9 +1776,9 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>>
>> /* handle CPU exception delivery */
>> if (count) {
>> - trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring);
>> - xive_tctx_ipb_update(match.tctx, match.ring,
>> - xive_priority_to_ipb(priority));
>> + group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
>> + trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
>> + xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
>> }
>>
>> return !!count;
>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>> index 4adc3b6950..db372f4b30 100644
>> --- a/hw/intc/xive2.c
>> +++ b/hw/intc/xive2.c
>> @@ -564,8 +564,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
>> uint8_t nvp_blk, uint32_t nvp_idx,
>> bool do_restore)
>> {
>> + uint8_t ipb, backlog_level;
>> + uint8_t backlog_prio;
>> + uint8_t *regs = &tctx->regs[TM_QW1_OS];
>> Xive2Nvp nvp;
>> - uint8_t ipb;
> Put the uint8_ts all on the same line or keep them all on different
> lines?
>
> Thanks,
> Nick
>
>>
>> /*
>> * Grab the associated thread interrupt context registers in the
>> @@ -594,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
>> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
>> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
>> }
>> + regs[TM_IPB] = ipb;
>> + backlog_prio = xive_ipb_to_pipr(ipb);
>> + backlog_level = 0;
>> +
>> /*
>> - * Always call xive_tctx_ipb_update(). Even if there were no
>> - * escalation triggered, there could be a pending interrupt which
>> - * was saved when the context was pulled and that we need to take
>> - * into account by recalculating the PIPR (which is not
>> - * saved/restored).
>> - * It will also raise the External interrupt signal if needed.
>> + * Compute the PIPR based on the restored state.
>> + * It will raise the External interrupt signal if needed.
>> */
>> - xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
>> + xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
>> }
>>
>> /*
>> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
>> index 3dcf147198..7435728c51 100644
>> --- a/hw/intc/trace-events
>> +++ b/hw/intc/trace-events
>> @@ -282,7 +282,7 @@ xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "EN
>> xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
>> xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
>> xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
>> -xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring) "found NVT 0x%x/0x%x ring=0x%x"
>> +xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
>> xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
>>
>> # pnv_xive.c
* Re: [PATCH 03/14] ppc/xive2: Support group-matching when looking for target
2024-11-19 3:22 ` Nicholas Piggin
@ 2024-11-21 22:56 ` Mike Kowal
2024-12-02 22:08 ` Mike Kowal
0 siblings, 1 reply; 29+ messages in thread
From: Mike Kowal @ 2024-11-21 22:56 UTC (permalink / raw)
To: Nicholas Piggin, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On 11/18/2024 9:22 PM, Nicholas Piggin wrote:
> On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
>> From: Frederic Barrat <fbarrat@linux.ibm.com>
>>
>> If an END has the 'i' bit set (ignore), then it targets a group of
>> VPs. The size of the group depends on the VP index of the target
>> (first 0 found when looking at the least significant bits of the
>> index) so a mask is applied on the VP index of a running thread to
>> know if we have a match.
>>
>> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
>> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
>> ---
>> include/hw/ppc/xive.h | 5 +++-
>> include/hw/ppc/xive2.h | 1 +
>> hw/intc/pnv_xive2.c | 33 ++++++++++++++-------
>> hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
>> hw/intc/xive2.c | 65 ++++++++++++++++++++++++++++++------------
>> 5 files changed, 114 insertions(+), 46 deletions(-)
>>
>> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
>> index 27ef6c1a17..a177b75723 100644
>> --- a/include/hw/ppc/xive.h
>> +++ b/include/hw/ppc/xive.h
>> @@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
>> typedef struct XiveTCTXMatch {
>> XiveTCTX *tctx;
>> uint8_t ring;
>> + bool precluded;
>> } XiveTCTXMatch;
>>
>> #define TYPE_XIVE_PRESENTER "xive-presenter"
>> @@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>> uint8_t nvt_blk, uint32_t nvt_idx,
>> bool cam_ignore, uint8_t priority,
>> - uint32_t logic_serv);
>> + uint32_t logic_serv, bool *precluded);
>> +
>> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
>>
>> /*
>> * XIVE Fabric (Interface between Interrupt Controller and Machine)
>> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
>> index 5bccf41159..17c31fcb4b 100644
>> --- a/include/hw/ppc/xive2.h
>> +++ b/include/hw/ppc/xive2.h
>> @@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>> hwaddr offset, unsigned size);
>> void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
>> hwaddr offset, uint64_t value, unsigned size);
>> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
>> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
>> hwaddr offset, uint64_t value, unsigned size);
>> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
>> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
>> index 834d32287b..3fb466bb2c 100644
>> --- a/hw/intc/pnv_xive2.c
>> +++ b/hw/intc/pnv_xive2.c
>> @@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
>> logic_serv);
>> }
>>
>> - /*
>> - * Save the context and follow on to catch duplicates,
>> - * that we don't support yet.
>> - */
>> if (ring != -1) {
>> - if (match->tctx) {
>> + /*
>> + * For VP-specific match, finding more than one is a
>> + * problem. For group notification, it's possible.
>> + */
>> + if (!cam_ignore && match->tctx) {
>> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
>> "thread context NVT %x/%x\n",
>> nvt_blk, nvt_idx);
>> - return false;
>> + /* Should set a FIR if we ever model it */
>> + return -1;
>> + }
>> + /*
>> + * For a group notification, we need to know if the
>> + * match is precluded first by checking the current
>> + * thread priority. If the interrupt can be delivered,
>> + * we always notify the first match (for now).
>> + */
>> + if (cam_ignore &&
>> + xive2_tm_irq_precluded(tctx, ring, priority)) {
>> + match->precluded = true;
>> + } else {
>> + if (!match->tctx) {
>> + match->ring = ring;
>> + match->tctx = tctx;
>> + }
>> + count++;
> Multiple matches logic is a bit shoehorned into the match code here.
>
> "Return any best match" would be okay, but match->precluded could be set
> to true for a non-precluded match if a different match was precluded.
> And for VP directed interrupts, we can get a match from here which
> *is* precluded, but has precluded = false!
>
> It's a bit confusing.
>
> typedef struct XiveTCTXMatch {
> XiveTCTX *tctx;
> uint8_t ring;
> bool precluded;
> } XiveTCTXMatch;
>
> What if this was changed to be more clear it doesn't refer to a single
> tctx? Something like -
>
> XiveNVTMatches {
> XiveTCTX *best_tctx;
> uint8_t best_ring;
> int match_count;
> int precluded_group_match_count;
> }
I'll have to wrap my head around this one... 😳 If I cannot figure it
out, I will get back to you.
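For readers following the mask arithmetic being discussed, here is a standalone sketch of the group-size and CAM-mask logic quoted above. The helper names are illustrative, not from QEMU; the logic mirrors `xive_get_vpgroup_size()` and `xive2_vp_match_mask()` in the diff, with `__builtin_ctz` standing in for QEMU's `ctz32()`.

```c
#include <stdint.h>
#include <stdbool.h>

/* Group size is a power of 2: if the first 0 bit of the NVP index
 * (counting from the least significant bit) is at position n, the
 * group spans 2^(n+1) VPs. Mirrors xive_get_vpgroup_size(). */
static uint32_t vpgroup_size(uint32_t nvp_index)
{
    return 1u << (__builtin_ctz(~nvp_index) + 1);
}

/* The CAM comparison then ignores the index bits covered by the
 * group, as in xive2_vp_match_mask() from the diff. */
static bool vp_group_match(uint32_t cam1, uint32_t cam2, uint32_t nvp_index)
{
    uint32_t vp_mask = ~(vpgroup_size(nvp_index) - 1);
    return (cam1 & vp_mask) == (cam2 & vp_mask);
}
```

So an NVP index ending in binary 0111 addresses a group of 16 VPs, and any running thread whose CAM line agrees on the remaining high bits is a candidate match.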
>> }
>> -
>> - match->ring = ring;
>> - match->tctx = tctx;
>> - count++;
>> }
>> }
>> }
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index bacf518fa6..8ffcac4f65 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
>> return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
>> }
>>
>> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
>> +{
>> + /*
>> + * Group size is a power of 2. The position of the first 0
>> + * (starting with the least significant bits) in the NVP index
>> + * gives the size of the group.
>> + */
>> + return 1 << (ctz32(~nvp_index) + 1);
>> +}
>> +
>> static uint8_t xive_get_group_level(uint32_t nvp_index)
>> {
>> /* FIXME add crowd encoding */
>> @@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>> /*
>> * This is our simple Xive Presenter Engine model. It is merged in the
>> * Router as it does not require an extra object.
>> - *
>> - * It receives notification requests sent by the IVRE to find one
>> - * matching NVT (or more) dispatched on the processor threads. In case
>> - * of a single NVT notification, the process is abbreviated and the
>> - * thread is signaled if a match is found. In case of a logical server
>> - * notification (bits ignored at the end of the NVT identifier), the
>> - * IVPE and IVRE select a winning thread using different filters. This
>> - * involves 2 or 3 exchanges on the PowerBus that the model does not
>> - * support.
>> - *
>> - * The parameters represent what is sent on the PowerBus
>> */
>> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>> uint8_t nvt_blk, uint32_t nvt_idx,
>> bool cam_ignore, uint8_t priority,
>> - uint32_t logic_serv)
>> + uint32_t logic_serv, bool *precluded)
>> {
>> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
>> - XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
>> + XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
>> uint8_t group_level;
>> int count;
>>
>> /*
>> - * Ask the machine to scan the interrupt controllers for a match
>> + * Ask the machine to scan the interrupt controllers for a match.
>> + *
>> + * For VP-specific notification, we expect at most one match and
>> + * one call to the presenters is all we need (abbreviated notify
>> + * sequence documented by the architecture).
>> + *
>> + * For VP-group notification, match_nvt() is the equivalent of the
>> + * "histogram" and "poll" commands sent to the power bus to the
>> + * presenters. 'count' could be more than one, but we always
>> + * select the first match for now. 'precluded' tells if (at least)
>> + * one thread matches but can't take the interrupt now because
>> + * it's running at a more favored priority. We return the
>> + * information to the router so that it can take appropriate
>> + * actions (backlog, escalation, broadcast, etc...)
>> + *
>> + * If we were to implement a better way of dispatching the
>> + * interrupt in case of multiple matches (instead of the first
>> + * match), we would need a heuristic to elect a thread (for
>> + * example, the hardware keeps track of an 'age' in the TIMA) and
>> + * a new command to the presenters (the equivalent of the "assign"
>> + * power bus command in the documented full notify sequence.
>> */
>> count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
>> priority, logic_serv, &match);
>> @@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>> group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
>> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
>> xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
>> + } else {
>> + *precluded = match.precluded;
>> }
>>
>> return !!count;
>> @@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
>> uint8_t nvt_blk;
>> uint32_t nvt_idx;
>> XiveNVT nvt;
>> - bool found;
>> + bool found, precluded;
>>
>> uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
>> uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
>> @@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
>> found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
>> xive_get_field32(END_W7_F0_IGNORE, end.w7),
>> priority,
>> - xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
>> -
>> + xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
>> + &precluded);
>> + /* we don't support VP-group notification on P9, so precluded is not used */
>> /* TODO: Auto EOI. */
>>
>> if (found) {
>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>> index db372f4b30..2cb03c758e 100644
>> --- a/hw/intc/xive2.c
>> +++ b/hw/intc/xive2.c
>> @@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
>> return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
>> }
>>
>> +static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
>> + uint32_t vp_mask)
>> +{
>> + return (cam1 & vp_mask) == (cam2 & vp_mask);
>> +}
>> +
>> /*
>> * The thread context register words are in big-endian format.
>> */
>> @@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>> uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
>> uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
>>
>> - /*
>> - * TODO (PowerNV): ignore mode. The low order bits of the NVT
>> - * identifier are ignored in the "CAM" match.
>> - */
>> + uint32_t vp_mask = 0xFFFFFFFF;
>>
>> if (format == 0) {
>> - if (cam_ignore == true) {
>> - /*
>> - * F=0 & i=1: Logical server notification (bits ignored at
>> - * the end of the NVT identifier)
>> - */
>> - qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
>> - nvt_blk, nvt_idx);
>> - return -1;
>> + /*
>> + * i=0: Specific NVT notification
>> + * i=1: VP-group notification (bits ignored at the end of the
>> + * NVT identifier)
>> + */
>> + if (cam_ignore) {
>> + vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
>> }
>>
>> - /* F=0 & i=0: Specific NVT notification */
>> + /* For VP-group notifications, threads with LGS=0 are excluded */
>>
>> /* PHYS ring */
>> if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
>> - cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
>> + !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
>> + xive2_vp_match_mask(cam,
>> + xive2_tctx_hw_cam_line(xptr, tctx),
>> + vp_mask)) {
>> return TM_QW3_HV_PHYS;
>> }
>>
>> /* HV POOL ring */
>> if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
>> - cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
>> + !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
>> + xive2_vp_match_mask(cam,
>> + xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
>> + vp_mask)) {
>> return TM_QW2_HV_POOL;
>> }
>>
>> /* OS ring */
>> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
>> - cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
>> + !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
>> + xive2_vp_match_mask(cam,
>> + xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
>> + vp_mask)) {
>> return TM_QW1_OS;
>> }
>> } else {
>> /* F=1 : User level Event-Based Branch (EBB) notification */
>>
>> + /* FIXME: what if cam_ignore and LGS = 0 ? */
>> /* USER ring */
>> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
>> (cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
>> @@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>> return -1;
>> }
>>
>> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
>> +{
>> + uint8_t *regs = &tctx->regs[ring];
>> +
>> + /*
>> + * The xive2_presenter_tctx_match() above tells if there's a match
>> + * but for VP-group notification, we still need to look at the
>> + * priority to know if the thread can take the interrupt now or if
>> + * it is precluded.
>> + */
>> + if (priority < regs[TM_CPPR]) {
> Should this also test PIPR?
>
> I'm not sure about CPPR and PIPR relationship exactly. Does hardware
> set PIPR for pending IPB interrupts even if they are not < CPPR? Or
> does it always reflect the presented interrupt
I am not sure. I will dig into the simulation models and the
architecture process flows they followed, and find out.
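For context while that question is open, a minimal sketch of the check as currently posted. It consults only CPPR; whether PIPR should also participate is exactly what is being asked here. The helper name is a stand-in for `xive2_tm_irq_precluded()` in the diff.

```c
#include <stdint.h>
#include <stdbool.h>

/* As posted: a group interrupt is precluded when its priority is not
 * strictly more favored (numerically lower) than the thread's current
 * CPPR. PIPR is not consulted in this version. */
static bool irq_precluded(uint8_t cppr, uint8_t priority)
{
    return priority >= cppr;
}
```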
>
> Thanks,
> Nick
* Re: [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR
2024-11-19 4:34 ` Nicholas Piggin
@ 2024-11-21 23:12 ` Mike Kowal
0 siblings, 0 replies; 29+ messages in thread
From: Mike Kowal @ 2024-11-21 23:12 UTC (permalink / raw)
To: Nicholas Piggin, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On 11/18/2024 10:34 PM, Nicholas Piggin wrote:
> On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
>> From: Frederic Barrat<fbarrat@linux.ibm.com>
>>
>> When the hypervisor or OS pushes a new value to the CPPR, if the LSMFB
>> value is lower than the new CPPR value, there could be a pending group
>> interrupt in the backlog, so it needs to be scanned.
>>
>> Signed-off-by: Frederic Barrat<fbarrat@linux.ibm.com>
>> Signed-off-by: Michael Kowal<kowal@linux.ibm.com>
>> ---
>> include/hw/ppc/xive2.h | 4 +
>> hw/intc/xive.c | 4 +-
>> hw/intc/xive2.c | 173 ++++++++++++++++++++++++++++++++++++++++-
>> 3 files changed, 177 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
>> index d88db05687..e61b978f37 100644
>> --- a/include/hw/ppc/xive2.h
>> +++ b/include/hw/ppc/xive2.h
>> @@ -115,6 +115,10 @@ typedef struct Xive2EndSource {
>> * XIVE2 Thread Interrupt Management Area (POWER10)
>> */
>>
>> +void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
>> + hwaddr offset, uint64_t value, unsigned size);
>> +void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
>> + hwaddr offset, uint64_t value, unsigned size);
>> void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
>> uint64_t value, unsigned size);
>> uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index 8ffcac4f65..2aa6e1fecc 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -603,7 +603,7 @@ static const XiveTmOp xive2_tm_operations[] = {
>> * MMIOs below 2K : raw values and special operations without side
>> * effects
>> */
>> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
>> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
>> NULL },
>> { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
>> NULL },
>> @@ -611,7 +611,7 @@ static const XiveTmOp xive2_tm_operations[] = {
>> NULL },
>> { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
>> NULL },
>> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
>> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
>> NULL },
>> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
>> NULL },
>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>> index 7130892482..0c53f71879 100644
>> --- a/hw/intc/xive2.c
>> +++ b/hw/intc/xive2.c
>> @@ -18,6 +18,7 @@
>> #include "hw/ppc/xive.h"
>> #include "hw/ppc/xive2.h"
>> #include "hw/ppc/xive2_regs.h"
>> +#include "trace.h"
>>
>> uint32_t xive2_router_get_config(Xive2Router *xrtr)
>> {
>> @@ -764,6 +765,172 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>> }
>> }
>>
>> +static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
>> + uint32_t *nvp_blk, uint32_t *nvp_idx)
>> +{
>> + uint32_t w2, cam;
>> +
>> + w2 = xive_tctx_word2(&tctx->regs[ring]);
>> + switch (ring) {
>> + case TM_QW1_OS:
>> + if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
>> + return -1;
>> + }
>> + cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
>> + break;
>> + case TM_QW2_HV_POOL:
>> + if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
>> + return -1;
>> + }
>> + cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
>> + break;
>> + case TM_QW3_HV_PHYS:
>> + if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
>> + return -1;
>> + }
>> + cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
>> + break;
>> + default:
>> + return -1;
>> + }
>> + *nvp_blk = xive2_nvp_blk(cam);
>> + *nvp_idx = xive2_nvp_idx(cam);
>> + return 0;
>> +}
>> +
>> +static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> Some of the xive1 code kind of has placeholder for group code or routes
> group stuff through to xive2 code, so I wonder if this duplication is
> really necessary or it can just be added to xive1?
>
> I kind of hoped we could unify xive1 and 2 more, but maybe it's too late
> without a lot more work, and all new development is going to go into
> xive2...
I think that ship sailed long before I got involved. Our other sim
models are totally independent, gen 1 or gen 2. Nor did we support gen 2
running in gen 1 mode. Trying to move much of this function back into
xive 1 would be difficult and could possibly break existing platforms
and models.
>
> Okay for now I guess, we could look at unification one day maybe.
You can have Caleb add it to the plan if you still think it would be
beneficial. I think it would be a very low priority item. Contact me
directly to discuss.
>
>> +{
>> + uint8_t *regs = &tctx->regs[ring];
>> + Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
>> + uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
>> + uint8_t pipr_min, lsmfb_min, ring_min;
>> + bool group_enabled;
>> + uint32_t nvp_blk, nvp_idx;
>> + Xive2Nvp nvp;
>> + int rc;
>> +
>> + trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
>> + regs[TM_IPB], regs[TM_PIPR],
>> + cppr, regs[TM_NSR]);
>> +
>> + if (cppr > XIVE_PRIORITY_MAX) {
>> + cppr = 0xff;
>> + }
>> +
>> + old_cppr = regs[TM_CPPR];
>> + regs[TM_CPPR] = cppr;
> If CPPR remains the same, can return early.
>
> If CPPR is being increased, this scanning is not required (a
> redistribution of group interrupt if it became precluded is
> required as noted in the TODO, but no scanning should be needed
> so that TODO should be moved up here).
>
> If there is an interrupt already presented and CPPR is being
> lowered, nothing needs to be done either (because the presented
> interrupt should already be the most favoured).
xive2_tctx_set_cppr() has gone through a couple of iterations since this
patch set was done in Oct 2023. Some of your points above have already
been addressed and will be included in group5. For specifics, see the
following commit in ponq-4:
ppc/xive2: PIPR not updated correctly with CPPR updates.
4de83cd1a9fab774b1ab95aba804afa3c0159ebf
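Nick's three cases above reduce to a small decision helper. This is a hedged sketch of the suggested control flow (a hypothetical helper, not code from the patch; the lowered-CPPR condition mirrors the `lsmfb_min < cppr && lsmfb_min < regs[TM_PIPR]` test in the diff):

```c
#include <stdint.h>
#include <stdbool.h>

/* Does a CPPR write require scanning the group backlog?
 *  - unchanged CPPR: nothing to do;
 *  - raised CPPR (less favored): no scan, though a presented group
 *    interrupt may need redistribution (the FIXME in the patch);
 *  - lowered CPPR: scan only if the group backlog hint (LSMFB) is
 *    more favored than both the new CPPR and the pending PIPR. */
static bool cppr_write_needs_scan(uint8_t old_cppr, uint8_t new_cppr,
                                  uint8_t pipr, uint8_t lsmfb)
{
    if (new_cppr >= old_cppr) {
        return false;
    }
    return lsmfb < new_cppr && lsmfb < pipr;
}
```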
>> +
>> + /*
>> + * Recompute the PIPR based on local pending interrupts. It will
>> + * be adjusted below if needed in case of pending group interrupts.
>> + */
>> + pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
>> + group_enabled = !!regs[TM_LGS];
>> + lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
>> + ring_min = ring;
>> +
>> + /* PHYS updates also depend on POOL values */
>> + if (ring == TM_QW3_HV_PHYS) {
>> + uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
>> +
>> + /* POOL values only matter if POOL ctx is valid */
>> + if (pregs[TM_WORD2] & 0x80) {
>> +
>> + uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
>> + uint8_t pool_lsmfb = pregs[TM_LSMFB];
>> +
>> + /*
>> + * Determine highest priority interrupt and
>> + * remember which ring has it.
>> + */
>> + if (pool_pipr < pipr_min) {
>> + pipr_min = pool_pipr;
>> + if (pool_pipr < lsmfb_min) {
>> + ring_min = TM_QW2_HV_POOL;
>> + }
>> + }
>> +
>> + /* Values needed for group priority calculation */
>> + if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
>> + group_enabled = true;
>> + lsmfb_min = pool_lsmfb;
>> + if (lsmfb_min < pipr_min) {
>> + ring_min = TM_QW2_HV_POOL;
>> + }
>> + }
>> + }
>> + }
>> + regs[TM_PIPR] = pipr_min;
>> +
>> + rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
>> + if (rc) {
>> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
>> + return;
>> + }
>> +
>> + if (cppr < old_cppr) {
>> + /*
>> + * FIXME: check if there's a group interrupt being presented
>> + * and if the new cppr prevents it. If so, then the group
>> + * interrupt needs to be re-added to the backlog and
>> + * re-triggered (see re-trigger END info in the NVGC
>> + * structure)
>> + */
>> + }
>> +
>> + if (group_enabled &&
>> + lsmfb_min < cppr &&
>> + lsmfb_min < regs[TM_PIPR]) {
>> + /*
>> + * Thread has seen a group interrupt with a higher priority
>> + * than the new cppr or pending local interrupt. Check the
>> + * backlog
>> + */
>> + if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
>> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
>> + nvp_blk, nvp_idx);
>> + return;
>> + }
>> +
>> + if (!xive2_nvp_is_valid(&nvp)) {
>> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
>> + nvp_blk, nvp_idx);
>> + return;
>> + }
>> +
>> + first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
>> + if (!first_group) {
>> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
>> + nvp_blk, nvp_idx);
>> + return;
>> + }
>> +
>> + backlog_prio = xive2_presenter_backlog_check(tctx->xptr,
>> + nvp_blk, nvp_idx,
>> + first_group, &group_level);
>> + tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
> LSMFB may not be the same as lsmfb_min, so you can't present
> unconditionally.
>
> I think after updating, it should test
>
> if (lsmfb_min != backlog_prio) {
> goto scan_again;
> }
>
> Where scan_again: goes back to recomputing min priorities and scanning.
Ditto from above. I think...
>
> Thanks,
> Nick
>
>> + if (backlog_prio != 0xFF) {
>> + xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
>> + backlog_prio, group_level);
>> + regs[TM_PIPR] = backlog_prio;
>> + }
>> + }
>> + /* CPPR has changed, check if we need to raise a pending exception */
>> + xive_tctx_notify(tctx, ring_min, group_level);
>> +}
>> +
>> +void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
>> + hwaddr offset, uint64_t value, unsigned size)
>> +{
>> + xive2_tctx_set_cppr(tctx, TM_QW3_HV_PHYS, value & 0xff);
>> +}
>> +
>> +void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
>> + hwaddr offset, uint64_t value, unsigned size)
>> +{
>> + xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
>> +}
>> +
>> static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
>> {
>> uint8_t *regs = &tctx->regs[ring];
>> @@ -934,7 +1101,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>>
>> bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
>> {
>> - uint8_t *regs = &tctx->regs[ring];
>> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
>> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
>> + uint8_t *alt_regs = &tctx->regs[alt_ring];
>>
>> /*
>> * The xive2_presenter_tctx_match() above tells if there's a match
>> @@ -942,7 +1111,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
>> * priority to know if the thread can take the interrupt now or if
>> * it is precluded.
>> */
>> - if (priority < regs[TM_CPPR]) {
>> + if (priority < alt_regs[TM_CPPR]) {
>> return false;
>> }
>> return true;
> These last two are logically separate patch for enabling group for POOL?
>
* Re: [PATCH 12/14] pnv/xive: Support ESB Escalation
2024-11-19 5:00 ` Nicholas Piggin
@ 2024-11-21 23:22 ` Mike Kowal
0 siblings, 0 replies; 29+ messages in thread
From: Mike Kowal @ 2024-11-21 23:22 UTC (permalink / raw)
To: Nicholas Piggin, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On 11/18/2024 11:00 PM, Nicholas Piggin wrote:
> On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
>> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>>
>> END notification processing has an escalation path. The escalation is
>> not always an END escalation but can be an ESB escalation.
>>
>> Also added a check for 'resume' processing which log a message stating it
>> needs to be implemented. This is not needed at the time but is part of
>> the END notification processing.
> This patch is orthogonal to group support, right?
Sort of... Yes... When we decided to upstream the commits in 'like'
groups, many of the commits ended up out of order, and some were too
difficult to backport to older code. I think this was one of those.
>> This change was taken from a patch provided by Michael Kowal
>>
>> Suggested-by: Michael Kowal <kowal@us.ibm.com>
>> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
>> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
>> ---
>> include/hw/ppc/xive2.h | 1 +
>> include/hw/ppc/xive2_regs.h | 13 +++++---
>> hw/intc/xive2.c | 61 +++++++++++++++++++++++++++++--------
>> 3 files changed, 58 insertions(+), 17 deletions(-)
>>
>> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
>> index 37aca4d26a..b17cc21ca6 100644
>> --- a/include/hw/ppc/xive2.h
>> +++ b/include/hw/ppc/xive2.h
>> @@ -82,6 +82,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
>> uint32_t xive2_router_get_config(Xive2Router *xrtr);
>>
>> void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
>> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
>>
>> /*
>> * XIVE2 Presenter (POWER10)
>> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
>> index 89236b9aaf..42cdc91452 100644
>> --- a/include/hw/ppc/xive2_regs.h
>> +++ b/include/hw/ppc/xive2_regs.h
>> @@ -40,15 +40,18 @@
>>
>> typedef struct Xive2Eas {
>> uint64_t w;
>> -#define EAS2_VALID PPC_BIT(0)
>> -#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
>> -#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
>> -#define EAS2_MASKED PPC_BIT(32) /* Masked */
>> -#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
>> +#define EAS2_VALID PPC_BIT(0)
>> +#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
>> +#define EAS2_RESUME PPC_BIT(3) /* END Resume(unimp) */
>> +#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
>> +#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
>> +#define EAS2_MASKED PPC_BIT(32) /* Masked */
>> +#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
>> } Xive2Eas;
>>
>> #define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
>> #define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
>> +#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
>>
>> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
>>
>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>> index 41d689eaab..f812ba9624 100644
>> --- a/hw/intc/xive2.c
>> +++ b/hw/intc/xive2.c
>> @@ -1511,18 +1511,39 @@ do_escalation:
>> }
>> }
>>
>> - /*
>> - * The END trigger becomes an Escalation trigger
>> - */
>> - xive2_router_end_notify(xrtr,
>> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
>> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
>> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
>> + if (xive2_end_is_escalate_end(&end)) {
>> + /*
>> + * Perform END Adaptive escalation processing
>> + * The END trigger becomes an Escalation trigger
>> + */
>> + xive2_router_end_notify(xrtr,
>> + xive_get_field32(END2_W4_END_BLOCK, end.w4),
>> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
>> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
>> + } /* end END adaptive escalation */
>> +
>> + else {
>> + uint32_t lisn; /* Logical Interrupt Source Number */
>> +
>> + /*
>> + * Perform ESB escalation processing
>> + * E[N] == 1 --> N
>> + * Req[Block] <- E[ESB_Block]
>> + * Req[Index] <- E[ESB_Index]
>> + * Req[Offset] <- 0x000
>> + * Execute <ESB Store> Req command
>> + */
>> + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
>> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
>> +
>> + xive2_notify(xrtr, lisn, true /* pq_checked */);
>> + }
>> +
>> + return;
> Don't need returns at the end of void functions.
Does it "hurt" anything? I am old-school and was taught to always put
in a return at the end of a function.
>
>> }
>>
>> -void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
>> +void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
> Can it be static?
I think so....
>
> Thanks,
> Nick
>
>> {
>> - Xive2Router *xrtr = XIVE2_ROUTER(xn);
>> uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
>> uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
>> Xive2Eas eas;
>> @@ -1565,13 +1586,29 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
>> return;
>> }
>>
>> + /* TODO: add support for EAS resume if ever needed */
>> + if (xive2_eas_is_resume(&eas)) {
>> + qemu_log_mask(LOG_UNIMP,
>> + "XIVE: EAS resume processing unimplemented - LISN %x\n",
>> + lisn);
>> + return;
>> + }
>> +
>> /*
>> * The event trigger becomes an END trigger
>> */
>> xive2_router_end_notify(xrtr,
>> - xive_get_field64(EAS2_END_BLOCK, eas.w),
>> - xive_get_field64(EAS2_END_INDEX, eas.w),
>> - xive_get_field64(EAS2_END_DATA, eas.w));
>> + xive_get_field64(EAS2_END_BLOCK, eas.w),
>> + xive_get_field64(EAS2_END_INDEX, eas.w),
>> + xive_get_field64(EAS2_END_DATA, eas.w));
>> +}
>> +
>> +void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
>> +{
>> + Xive2Router *xrtr = XIVE2_ROUTER(xn);
>> +
>> + xive2_notify(xrtr, lisn, pq_checked);
>> + return;
>> }
>>
>> static Property xive2_router_properties[] = {
* Re: [PATCH 03/14] ppc/xive2: Support group-matching when looking for target
2024-11-21 22:56 ` Mike Kowal
@ 2024-12-02 22:08 ` Mike Kowal
0 siblings, 0 replies; 29+ messages in thread
From: Mike Kowal @ 2024-12-02 22:08 UTC (permalink / raw)
To: Nicholas Piggin, qemu-devel
Cc: qemu-ppc, clg, fbarrat, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
On 11/21/2024 4:56 PM, Mike Kowal wrote:
>
> On 11/18/2024 9:22 PM, Nicholas Piggin wrote:
>> On Wed Oct 16, 2024 at 7:13 AM AEST, Michael Kowal wrote:
>>> From: Frederic Barrat <fbarrat@linux.ibm.com>
>>>
>>> If an END has the 'i' bit set (ignore), then it targets a group of
>>> VPs. The size of the group depends on the VP index of the target
>>> (first 0 found when looking at the least significant bits of the
>>> index) so a mask is applied on the VP index of a running thread to
>>> know if we have a match.
>>>
>>> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
>>> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
>>> ---
>>> include/hw/ppc/xive.h | 5 +++-
>>> include/hw/ppc/xive2.h | 1 +
>>> hw/intc/pnv_xive2.c | 33 ++++++++++++++-------
>>> hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
>>> hw/intc/xive2.c | 65
>>> ++++++++++++++++++++++++++++++------------
>>> 5 files changed, 114 insertions(+), 46 deletions(-)
>>>
>>> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
>>> index 27ef6c1a17..a177b75723 100644
>>> --- a/include/hw/ppc/xive.h
>>> +++ b/include/hw/ppc/xive.h
>>> @@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr,
>>> XiveEAS *eas);
>>> typedef struct XiveTCTXMatch {
>>> XiveTCTX *tctx;
>>> uint8_t ring;
>>> + bool precluded;
>>> } XiveTCTXMatch;
>>> #define TYPE_XIVE_PRESENTER "xive-presenter"
>>> @@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter
>>> *xptr, XiveTCTX *tctx,
>>> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>>> uint8_t nvt_blk, uint32_t nvt_idx,
>>> bool cam_ignore, uint8_t priority,
>>> - uint32_t logic_serv);
>>> + uint32_t logic_serv, bool *precluded);
>>> +
>>> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
>>> /*
>>> * XIVE Fabric (Interface between Interrupt Controller and Machine)
>>> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
>>> index 5bccf41159..17c31fcb4b 100644
>>> --- a/include/hw/ppc/xive2.h
>>> +++ b/include/hw/ppc/xive2.h
>>> @@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter
>>> *xptr, XiveTCTX *tctx,
>>> hwaddr offset, unsigned size);
>>> void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
>>> hwaddr offset, uint64_t value,
>>> unsigned size);
>>> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t
>>> priority);
>>> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
>>> hwaddr offset, uint64_t value,
>>> unsigned size);
>>> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
>>> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
>>> index 834d32287b..3fb466bb2c 100644
>>> --- a/hw/intc/pnv_xive2.c
>>> +++ b/hw/intc/pnv_xive2.c
>>> @@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter
>>> *xptr, uint8_t format,
>>> logic_serv);
>>> }
>>> - /*
>>> - * Save the context and follow on to catch duplicates,
>>> - * that we don't support yet.
>>> - */
>>> if (ring != -1) {
>>> - if (match->tctx) {
>>> + /*
>>> + * For VP-specific match, finding more than one is a
>>> + * problem. For group notification, it's possible.
>>> + */
>>> + if (!cam_ignore && match->tctx) {
>>> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
>>> "thread context NVT %x/%x\n",
>>> nvt_blk, nvt_idx);
>>> - return false;
>>> + /* Should set a FIR if we ever model it */
>>> + return -1;
>>> + }
>>> + /*
>>> + * For a group notification, we need to know if the
>>> + * match is precluded first by checking the current
>>> + * thread priority. If the interrupt can be delivered,
>>> + * we always notify the first match (for now).
>>> + */
>>> + if (cam_ignore &&
>>> + xive2_tm_irq_precluded(tctx, ring, priority)) {
>>> + match->precluded = true;
>>> + } else {
>>> + if (!match->tctx) {
>>> + match->ring = ring;
>>> + match->tctx = tctx;
>>> + }
>>> + count++;
>> The multiple-matches logic is a bit shoehorned into the match code here.
>>
>> "Return any best match" would be okay, but match->precluded could be set
>> to true for a non-precluded match if a different match was precluded.
>> And for VP directed interrupts, we can get a match from here which
>> *is* precluded, but has precluded = false!
>>
>> It's a bit confusing.
>>
>> typedef struct XiveTCTXMatch {
>> XiveTCTX *tctx;
>> uint8_t ring;
>> bool precluded;
>> } XiveTCTXMatch;
>>
>> What if this was changed to be more clear it doesn't refer to a single
>> tctx? Something like -
>>
>> XiveNVTMatches {
>> XiveTCTX *best_tctx;
>> uint8_t best_ring;
>> int match_count;
>> int precluded_group_match_count;
>> }
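For what it's worth, a minimal sketch of how the match loop could feed such an accumulator (the type and field names are hypothetical, following the suggestion above; `void *` stands in for `XiveTCTX *`):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical accumulator, per the suggestion above; not the
 * actual QEMU structure. */
typedef struct XiveNVTMatches {
    void *best_tctx;                 /* first deliverable match */
    uint8_t best_ring;
    int match_count;                 /* deliverable matches */
    int precluded_group_match_count; /* matches blocked by priority */
} XiveNVTMatches;

/* Record one thread-context match. 'precluded' means the thread
 * matched the CAM line but is running at a more favored priority. */
static void xive_nvt_match_record(XiveNVTMatches *m, void *tctx,
                                  uint8_t ring, bool precluded)
{
    if (precluded) {
        m->precluded_group_match_count++;
        return;
    }
    if (!m->best_tctx) {
        m->best_tctx = tctx;   /* keep the first deliverable match */
        m->best_ring = ring;
    }
    m->match_count++;
}
```

Keeping the precluded count separate from the deliverable count means a VP-specific match can never be reported as precluded just because some other thread was.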
>
>
> I'll have to wrap my head around this one... 😳 If I cannot figure
> it out, I will get back to you.
>
>
>>> }
>>> -
>>> - match->ring = ring;
>>> - match->tctx = tctx;
>>> - count++;
>>> }
>>> }
>>> }
>>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>>> index bacf518fa6..8ffcac4f65 100644
>>> --- a/hw/intc/xive.c
>>> +++ b/hw/intc/xive.c
>>> @@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
>>> return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
>>> }
>>> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
>>> +{
>>> + /*
>>> + * Group size is a power of 2. The position of the first 0
>>> + * (starting with the least significant bits) in the NVP index
>>> + * gives the size of the group.
>>> + */
>>> + return 1 << (ctz32(~nvp_index) + 1);
>>> +}
>>> +
>>> static uint8_t xive_get_group_level(uint32_t nvp_index)
>>> {
>>> /* FIXME add crowd encoding */
>>> @@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>>> /*
>>> * This is our simple Xive Presenter Engine model. It is merged in the
>>> * Router as it does not require an extra object.
>>> - *
>>> - * It receives notification requests sent by the IVRE to find one
>>> - * matching NVT (or more) dispatched on the processor threads. In case
>>> - * of a single NVT notification, the process is abbreviated and the
>>> - * thread is signaled if a match is found. In case of a logical server
>>> - * notification (bits ignored at the end of the NVT identifier), the
>>> - * IVPE and IVRE select a winning thread using different filters. This
>>> - * involves 2 or 3 exchanges on the PowerBus that the model does not
>>> - * support.
>>> - *
>>> - * The parameters represent what is sent on the PowerBus
>>> */
>>> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>>> uint8_t nvt_blk, uint32_t nvt_idx,
>>> bool cam_ignore, uint8_t priority,
>>> - uint32_t logic_serv)
>>> + uint32_t logic_serv, bool *precluded)
>>> {
>>> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
>>> - XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
>>> + XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
>>> uint8_t group_level;
>>> int count;
>>> /*
>>> - * Ask the machine to scan the interrupt controllers for a match
>>> + * Ask the machine to scan the interrupt controllers for a match.
>>> + *
>>> + * For VP-specific notification, we expect at most one match and
>>> + * one call to the presenters is all we need (abbreviated notify
>>> + * sequence documented by the architecture).
>>> + *
>>> + * For VP-group notification, match_nvt() is the equivalent of the
>>> + * "histogram" and "poll" commands sent to the power bus to the
>>> + * presenters. 'count' could be more than one, but we always
>>> + * select the first match for now. 'precluded' tells if (at least)
>>> + * one thread matches but can't take the interrupt now because
>>> + * it's running at a more favored priority. We return the
>>> + * information to the router so that it can take appropriate
>>> + * actions (backlog, escalation, broadcast, etc...)
>>> + *
>>> + * If we were to implement a better way of dispatching the
>>> + * interrupt in case of multiple matches (instead of the first
>>> + * match), we would need a heuristic to elect a thread (for
>>> + * example, the hardware keeps track of an 'age' in the TIMA) and
>>> + * a new command to the presenters (the equivalent of the "assign"
>>> + * power bus command in the documented full notify sequence).
>>> */
>>> count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
>>> priority, logic_serv, &match);
>>> @@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>>> group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
>>> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
>>> xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
>>> + } else {
>>> + *precluded = match.precluded;
>>> }
>>> return !!count;
>>> @@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
>>> uint8_t nvt_blk;
>>> uint32_t nvt_idx;
>>> XiveNVT nvt;
>>> - bool found;
>>> + bool found, precluded;
>>> uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
>>> uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
>>> @@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
>>> found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
>>> xive_get_field32(END_W7_F0_IGNORE, end.w7),
>>> priority,
>>> - xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
>>> -
>>> + xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
>>> + &precluded);
>>> + /* we don't support VP-group notification on P9, so precluded is not used */
>>> /* TODO: Auto EOI. */
>>> if (found) {
>>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>>> index db372f4b30..2cb03c758e 100644
>>> --- a/hw/intc/xive2.c
>>> +++ b/hw/intc/xive2.c
>>> @@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
>>> return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
>>> }
>>> +static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
>>> + uint32_t vp_mask)
>>> +{
>>> + return (cam1 & vp_mask) == (cam2 & vp_mask);
>>> +}
>>> +
>>> /*
>>> * The thread context register words are in big-endian format.
>>> */
>>> @@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>>> uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
>>> uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
>>> - /*
>>> - * TODO (PowerNV): ignore mode. The low order bits of the NVT
>>> - * identifier are ignored in the "CAM" match.
>>> - */
>>> + uint32_t vp_mask = 0xFFFFFFFF;
>>> if (format == 0) {
>>> - if (cam_ignore == true) {
>>> - /*
>>> - * F=0 & i=1: Logical server notification (bits ignored at
>>> - * the end of the NVT identifier)
>>> - */
>>> - qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
>>> - nvt_blk, nvt_idx);
>>> - return -1;
>>> + /*
>>> + * i=0: Specific NVT notification
>>> + * i=1: VP-group notification (bits ignored at the end of the
>>> + * NVT identifier)
>>> + */
>>> + if (cam_ignore) {
>>> + vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
>>> }
>>> - /* F=0 & i=0: Specific NVT notification */
>>> + /* For VP-group notifications, threads with LGS=0 are excluded */
>>> /* PHYS ring */
>>> if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
>>> - cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
>>> + !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
>>> + xive2_vp_match_mask(cam,
>>> + xive2_tctx_hw_cam_line(xptr, tctx),
>>> + vp_mask)) {
>>> return TM_QW3_HV_PHYS;
>>> }
>>> /* HV POOL ring */
>>> if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
>>> - cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
>>> + !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
>>> + xive2_vp_match_mask(cam,
>>> + xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
>>> + vp_mask)) {
>>> return TM_QW2_HV_POOL;
>>> }
>>> /* OS ring */
>>> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
>>> - cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
>>> + !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
>>> + xive2_vp_match_mask(cam,
>>> + xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
>>> + vp_mask)) {
>>> return TM_QW1_OS;
>>> }
>>> } else {
>>> /* F=1 : User level Event-Based Branch (EBB) notification */
>>> + /* FIXME: what if cam_ignore and LGS = 0 ? */
>>> /* USER ring */
>>> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
>>> (cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
>>> @@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>>> return -1;
>>> }
>>> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
>>> +{
>>> + uint8_t *regs = &tctx->regs[ring];
>>> +
>>> + /*
>>> + * The xive2_presenter_tctx_match() above tells if there's a match
>>> + * but for VP-group notification, we still need to look at the
>>> + * priority to know if the thread can take the interrupt now or if
>>> + * it is precluded.
>>> + */
>>> + if (priority < regs[TM_CPPR]) {
>> Should this also test PIPR?
>>
>> I'm not sure about CPPR and PIPR relationship exactly. Does hardware
>> set PIPR for pending IPB interrupts even if they are not < CPPR? Or
>> does it always reflect the presented interrupt?
>
>
> I am not sure. I will dig into the simulation models and the
> architecture process flows they followed, and find out.
According to the TIMA process flows and other XIVE2 models, yes, this
should be PIPR. The fix will be included with group 5:
80 - ppc/xive2: Fix irq preempted by lower priority irq
e76a18f3ab5530f12855bb57d3d4ebecb4532b86
I could include it here, but there are other changes in group 4 and
group 5 that are prerequisites for the change above.
MAK
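For illustration, here is a minimal sketch of the PIPR-based check being described (the register offsets below are made-up stand-ins, not the real TIMA layout from xive_regs.h; lower value means more favored priority):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative ring-register offsets; the real values live in
 * include/hw/ppc/xive_regs.h. */
enum { TM_PIPR_SKETCH = 7, TM_RING_SIZE_SKETCH = 16 };

/* Precluded if the incoming priority is not strictly more favored
 * (i.e. not strictly lower) than the highest-priority interrupt
 * already pending/presented on the thread (PIPR). */
static bool xive2_irq_precluded_sketch(const uint8_t *ring_regs,
                                       uint8_t priority)
{
    return priority >= ring_regs[TM_PIPR_SKETCH];
}
```

Comparing against PIPR rather than CPPR also covers the case where a more favored interrupt is already presented even though CPPR would still allow delivery.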
>
>
>>
>> Thanks,
>> Nick
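As an aside, the group-size encoding from patch 3 (xive_get_vpgroup_size(): the position of the first 0 bit, counting from the LSB, in the NVP index) is easy to check in isolation; ctz32_sketch() below is a stand-in for QEMU's ctz32():

```c
#include <stdint.h>

/* Stand-in for QEMU's ctz32(); valid for nonzero input. */
static inline int ctz32_sketch(uint32_t v)
{
    return __builtin_ctz(v);
}

/* Group size is a power of 2: an index ending in binary 0 names a
 * group of 2, ...01 a group of 4, ...011 a group of 8, and so on. */
static uint32_t vpgroup_size_sketch(uint32_t nvp_index)
{
    return 1u << (ctz32_sketch(~nvp_index) + 1);
}
```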
Thread overview: 29+ messages
2024-10-15 21:13 [PATCH 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
2024-10-15 21:13 ` [PATCH 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
2024-10-15 21:13 ` [PATCH 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
2024-11-19 2:08 ` Nicholas Piggin
2024-11-21 22:31 ` Mike Kowal
2024-10-15 21:13 ` [PATCH 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
2024-11-19 3:22 ` Nicholas Piggin
2024-11-21 22:56 ` Mike Kowal
2024-12-02 22:08 ` Mike Kowal
2024-10-15 21:13 ` [PATCH 04/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
2024-10-15 21:13 ` [PATCH 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
2024-11-19 4:20 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
2024-11-19 4:34 ` Nicholas Piggin
2024-11-21 23:12 ` Mike Kowal
2024-10-15 21:13 ` [PATCH 07/14] qtest/xive: Add group-interrupt test Michael Kowal
2024-10-15 21:13 ` [PATCH 08/14] Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
2024-10-15 21:13 ` [PATCH 09/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
2024-10-15 21:13 ` [PATCH 10/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
2024-10-15 21:13 ` [PATCH 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
2024-11-19 2:31 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 12/14] pnv/xive: Support ESB Escalation Michael Kowal
2024-11-19 5:00 ` Nicholas Piggin
2024-11-21 23:22 ` Mike Kowal
2024-10-15 21:13 ` [PATCH 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
2024-11-19 5:04 ` Nicholas Piggin
2024-10-15 21:13 ` [PATCH 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
2024-10-16 8:33 ` Thomas Huth
2024-10-16 15:41 ` Mike Kowal