* [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations
@ 2024-12-10 0:05 Michael Kowal
2024-12-10 0:05 ` [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
` (24 more replies)
0 siblings, 25 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
XIVE2 has the concepts of a Group of interrupts and a Crowd of interrupts
(where a Crowd is a group of Groups). This patch series covers:
- NVGC tables
- Group/Crowd level notification
- Incrementing backlog counters
- Backlog processing
- NVPG and NVC Bar MMIO operations
- Group/Crowd testing
- ESB Escalation
- Pool interrupt testing
version 2:
- Removed printfs from test models and replaced with g_test_message()
- Updated XIVE copyrights to use:
SPDX-License-Identifier: GPL-2.0-or-later
- Set entire NSR to 0, not just fields
- Moved rename of xive_ipb_to_pipr() into its own patch set 0002
- Rename xive2_presenter_backlog_check() to
xive2_presenter_backlog_scan()
- Squash patch set 11 (crowd size restrictions) into
patch set 9 (support crowd-matching)
- Made xive2_notify() a static routine
Frederic Barrat (10):
ppc/xive2: Update NVP save/restore for group attributes
ppc/xive2: Add grouping level to notification
ppc/xive2: Support group-matching when looking for target
ppc/xive2: Add undelivered group interrupt to backlog
ppc/xive2: Process group backlog when pushing an OS context
ppc/xive2: Process group backlog when updating the CPPR
qtest/xive: Add group-interrupt test
ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR
ppc/xive2: Support crowd-matching when looking for target
ppc/xive2: Check crowd backlog when scanning group backlog
Glenn Miles (3):
pnv/xive: Support ESB Escalation
pnv/xive: Fix problem with treating NVGC as a NVP
qtest/xive: Add test of pool interrupts
Michael Kowal (1):
ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr()
include/hw/ppc/xive.h | 41 +-
include/hw/ppc/xive2.h | 25 +-
include/hw/ppc/xive2_regs.h | 30 +-
include/hw/ppc/xive_regs.h | 25 +-
tests/qtest/pnv-xive2-common.h | 1 +
hw/intc/pnv_xive.c | 10 +-
hw/intc/pnv_xive2.c | 166 +++++--
hw/intc/spapr_xive.c | 8 +-
hw/intc/xive.c | 200 +++++---
hw/intc/xive2.c | 750 +++++++++++++++++++++++++----
hw/ppc/pnv.c | 35 +-
hw/ppc/spapr.c | 7 +-
tests/qtest/pnv-xive2-flush-sync.c | 6 +-
tests/qtest/pnv-xive2-nvpg_bar.c | 153 ++++++
tests/qtest/pnv-xive2-test.c | 249 +++++++++-
hw/intc/trace-events | 6 +-
tests/qtest/meson.build | 3 +-
17 files changed, 1475 insertions(+), 240 deletions(-)
create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
--
2.43.0
* [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 3:22 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
` (23 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If the 'H' attribute is set on the NVP structure, the hardware
automatically saves and restores some TIMA attributes in the NVP
structure.
The group-specific attributes LSMFB, LGS and T each have an extra flag
to individually control whether they are saved/restored.
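For illustration only (not part of the patch), a hypothetical NVP setup
enabling these automatic save/restore paths, assuming the 'H' attribute
maps to the NVP2_W0_HW bit and that the L/G/T flags defined below gate
LSMFB, LGS and T respectively:

    /* Hypothetical: mark the NVP valid and ask the hardware to
     * save/restore all of the group-specific attributes */
    nvp.w0 |= NVP2_W0_VALID | NVP2_W0_HW;
    nvp.w0 |= NVP2_W0_L | NVP2_W0_G | NVP2_W0_T;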
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2_regs.h | 10 +++++++---
hw/intc/xive2.c | 23 ++++++++++++++++++-----
2 files changed, 25 insertions(+), 8 deletions(-)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 1d00c8df64..e88d6eab1e 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -1,10 +1,9 @@
/*
* QEMU PowerPC XIVE2 internal structure definitions (POWER10)
*
- * Copyright (c) 2019-2022, IBM Corporation.
+ * Copyright (c) 2019-2024, IBM Corporation.
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef PPC_XIVE2_REGS_H
@@ -152,6 +151,9 @@ typedef struct Xive2Nvp {
uint32_t w0;
#define NVP2_W0_VALID PPC_BIT32(0)
#define NVP2_W0_HW PPC_BIT32(7)
+#define NVP2_W0_L PPC_BIT32(8)
+#define NVP2_W0_G PPC_BIT32(9)
+#define NVP2_W0_T PPC_BIT32(10)
#define NVP2_W0_ESC_END PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
#define NVP2_W0_PGOFIRST PPC_BITMASK32(26, 31)
uint32_t w1;
@@ -163,6 +165,8 @@ typedef struct Xive2Nvp {
#define NVP2_W2_CPPR PPC_BITMASK32(0, 7)
#define NVP2_W2_IPB PPC_BITMASK32(8, 15)
#define NVP2_W2_LSMFB PPC_BITMASK32(16, 23)
+#define NVP2_W2_T PPC_BIT32(27)
+#define NVP2_W2_LGS PPC_BITMASK32(28, 31)
uint32_t w3;
uint32_t w4;
#define NVP2_W4_ESC_ESB_BLOCK PPC_BITMASK32(0, 3) /* N:0 */
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index d1df35e9b3..24e504fce1 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1,10 +1,9 @@
/*
* QEMU PowerPC XIVE2 interrupt controller model (POWER10)
*
- * Copyright (c) 2019-2022, IBM Corporation..
+ * Copyright (c) 2019-2024, IBM Corporation..
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
@@ -313,7 +312,19 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
- nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
+ if (nvp.w0 & NVP2_W0_L) {
+ /*
+ * Typically not used. If LSMFB is restored with 0, it will
+ * force a backlog rescan
+ */
+ nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
+ }
+ if (nvp.w0 & NVP2_W0_G) {
+ nvp.w2 = xive_set_field32(NVP2_W2_LGS, nvp.w2, regs[TM_LGS]);
+ }
+ if (nvp.w0 & NVP2_W0_T) {
+ nvp.w2 = xive_set_field32(NVP2_W2_T, nvp.w2, regs[TM_T]);
+ }
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
nvp.w1 = xive_set_field32(NVP2_W1_CO, nvp.w1, 0);
@@ -527,7 +538,9 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
- /* we don't model LSMFB */
+ tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
+ tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
+ tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
--
2.43.0
* [PATCH v2 02/14] ppc/xive2: Add grouping level to notification
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
2024-12-10 0:05 ` [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 3:27 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr() Michael Kowal
` (22 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
The NSR has a (so far unused) grouping level field. When an interrupt
is presented, that field tells the hypervisor or OS whether the
interrupt is for an individual VP or for a VP-group/crowd. This patch
reworks the presentation API to allow setting/unsetting the level when
raising/accepting an interrupt.
It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update(), as
the IPB is only used for VP-specific targets, whereas the PIPR always
needs to be updated.
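For illustration only (not part of the patch), a minimal sketch of how a
consumer could decode the level stored in the low 6 bits of the NSR,
assuming the 2 most significant bits of the field hold the crowd log2
and the 4 least significant bits the group-size log2 (see the comment
added to xive_regs.h below):

    static void decode_group_level(uint8_t nsr)
    {
        uint8_t level = nsr & 0x3F;                /* TM_NSR_GRP_LVL */

        if (level == 0) {
            return;                                /* VP-specific interrupt */
        }
        uint32_t group_size = 1u << (level & 0xF); /* e.g. level 4 -> 16 VPs */
        uint8_t  crowd_log2 = (level >> 4) & 0x3;  /* crowd field, also a log */
        (void)group_size; (void)crowd_log2;
    }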
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 19 +++++++-
include/hw/ppc/xive_regs.h | 20 +++++++--
hw/intc/xive.c | 90 +++++++++++++++++++++++---------------
hw/intc/xive2.c | 18 ++++----
hw/intc/trace-events | 2 +-
5 files changed, 100 insertions(+), 49 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index ebee982528..971da029eb 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -510,6 +510,21 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
0 : 1 << (XIVE_PRIORITY_MAX - priority);
}
+static inline uint8_t xive_priority_to_pipr(uint8_t priority)
+{
+ return priority > XIVE_PRIORITY_MAX ? 0xFF : priority;
+}
+
+/*
+ * Convert an Interrupt Pending Buffer (IPB) register to a Pending
+ * Interrupt Priority Register (PIPR), which contains the priority of
+ * the most favored pending notification.
+ */
+static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
+{
+ return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
+}
+
/*
* XIVE Thread Interrupt Management Aera (TIMA)
*
@@ -532,8 +547,10 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
void xive_tctx_reset(XiveTCTX *tctx);
void xive_tctx_destroy(XiveTCTX *tctx);
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb);
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level);
void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
/*
* KVM XIVE device helpers
diff --git a/include/hw/ppc/xive_regs.h b/include/hw/ppc/xive_regs.h
index 326327fc79..b455728c9c 100644
--- a/include/hw/ppc/xive_regs.h
+++ b/include/hw/ppc/xive_regs.h
@@ -146,7 +146,14 @@
#define TM_SPC_PULL_PHYS_CTX_OL 0xc38 /* Pull phys ctx to odd cache line */
/* XXX more... */
-/* NSR fields for the various QW ack types */
+/*
+ * NSR fields for the various QW ack types
+ *
+ * P10 has an extra bit in QW3 for the group level instead of the
+ * reserved 'i' bit. Since it is not used and we don't support group
+ * interrupts on P9, we use the P10 definition for the group level so
+ * that we can have common macros for the NSR
+ */
#define TM_QW0_NSR_EB PPC_BIT8(0)
#define TM_QW1_NSR_EO PPC_BIT8(0)
#define TM_QW3_NSR_HE PPC_BITMASK8(0, 1)
@@ -154,8 +161,15 @@
#define TM_QW3_NSR_HE_POOL 1
#define TM_QW3_NSR_HE_PHYS 2
#define TM_QW3_NSR_HE_LSI 3
-#define TM_QW3_NSR_I PPC_BIT8(2)
-#define TM_QW3_NSR_GRP_LVL PPC_BIT8(3, 7)
+#define TM_NSR_GRP_LVL PPC_BITMASK8(2, 7)
+/*
+ * On P10, the format of the 6-bit group level is: 2 bits for the
+ * crowd size and 4 bits for the group size. Since group/crowd size is
+ * always a power of 2, we encode the log. For example, group_level=4
+ * means crowd size = 0 and group size = 16 (2^4)
+ * Same encoding is used in the NVP and NVGC structures for
+ * PGoFirst and PGoNext fields
+ */
/*
* EAS (Event Assignment Structure)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 245e4d181a..6e73f7b063 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -27,16 +27,6 @@
* XIVE Thread Interrupt Management context
*/
-/*
- * Convert an Interrupt Pending Buffer (IPB) register to a Pending
- * Interrupt Priority Register (PIPR), which contains the priority of
- * the most favored pending notification.
- */
-static uint8_t ipb_to_pipr(uint8_t ibp)
-{
- return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
-}
-
static uint8_t exception_mask(uint8_t ring)
{
switch (ring) {
@@ -87,10 +77,17 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
regs[TM_CPPR] = cppr;
- /* Reset the pending buffer bit */
- alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ /*
+ * If the interrupt was for a specific VP, reset the pending
+ * buffer bit, otherwise clear the logical server indicator
+ */
+ if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
+ regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
+ } else {
+ alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ }
- /* Drop Exception bit */
+ /* Drop the exception bit */
regs[TM_NSR] &= ~mask;
trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
@@ -101,7 +98,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
return ((uint64_t)nsr << 8) | regs[TM_CPPR];
}
-static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
{
/* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
@@ -111,13 +108,13 @@ static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
switch (ring) {
case TM_QW1_OS:
- regs[TM_NSR] |= TM_QW1_NSR_EO;
+ regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
break;
case TM_QW2_HV_POOL:
- alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6);
+ alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
break;
case TM_QW3_HV_PHYS:
- regs[TM_NSR] |= (TM_QW3_NSR_HE_PHYS << 6);
+ regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
break;
default:
g_assert_not_reached();
@@ -159,7 +156,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
* Recompute the PIPR based on local pending interrupts. The PHYS
* ring must take the minimum of both the PHYS and POOL PIPR values.
*/
- pipr_min = ipb_to_pipr(regs[TM_IPB]);
+ pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
ring_min = ring;
/* PHYS updates also depend on POOL values */
@@ -169,7 +166,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
/* POOL values only matter if POOL ctx is valid */
if (pool_regs[TM_WORD2] & 0x80) {
- uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
+ uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
/*
* Determine highest priority interrupt and
@@ -185,17 +182,27 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
regs[TM_PIPR] = pipr_min;
/* CPPR has changed, check if we need to raise a pending exception */
- xive_tctx_notify(tctx, ring_min);
+ xive_tctx_notify(tctx, ring_min, 0);
}
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
-{
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level)
+ {
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *alt_regs = &tctx->regs[alt_ring];
uint8_t *regs = &tctx->regs[ring];
- regs[TM_IPB] |= ipb;
- regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
- xive_tctx_notify(tctx, ring);
-}
+ if (group_level == 0) {
+ /* VP-specific */
+ regs[TM_IPB] |= xive_priority_to_ipb(priority);
+ alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+ } else {
+ /* VP-group */
+ alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
+ }
+ xive_tctx_notify(tctx, ring, group_level);
+ }
/*
* XIVE Thread Interrupt Management Area (TIMA)
@@ -411,13 +418,13 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
}
/*
- * Adjust the IPB to allow a CPU to process event queues of other
+ * Adjust the PIPR to allow a CPU to process event queues of other
* priorities during one physical interrupt cycle.
*/
static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size)
{
- xive_tctx_ipb_update(tctx, TM_QW1_OS, xive_priority_to_ipb(value & 0xff));
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
}
static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
@@ -495,16 +502,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
/* Reset the NVT value */
nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
- }
+
+ uint8_t *regs = &tctx->regs[TM_QW1_OS];
+ regs[TM_IPB] |= ipb;
+}
+
/*
- * Always call xive_tctx_ipb_update(). Even if there were no
+ * Always call xive_tctx_pipr_update(). Even if there were no
* escalation triggered, there could be a pending interrupt which
* was saved when the context was pulled and that we need to take
* into account by recalculating the PIPR (which is not
* saved/restored).
* It will also raise the External interrupt signal if needed.
*/
- xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
}
/*
@@ -841,9 +852,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
* CPPR is first set.
*/
tctx->regs[TM_QW1_OS + TM_PIPR] =
- ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
+ xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
- ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
+ xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
}
static void xive_tctx_realize(DeviceState *dev, Error **errp)
@@ -1660,6 +1671,12 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
}
+static uint8_t xive_get_group_level(uint32_t nvp_index)
+{
+ /* FIXME add crowd encoding */
+ return ctz32(~nvp_index) + 1;
+}
+
/*
* The thread context register words are in big-endian format.
*/
@@ -1745,6 +1762,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+ uint8_t group_level;
int count;
/*
@@ -1758,9 +1776,9 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
/* handle CPU exception delivery */
if (count) {
- trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring);
- xive_tctx_ipb_update(match.tctx, match.ring,
- xive_priority_to_ipb(priority));
+ group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+ trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
+ xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
}
return !!count;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 4adc3b6950..db372f4b30 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -564,8 +564,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
+ uint8_t ipb, backlog_level;
+ uint8_t backlog_prio;
+ uint8_t *regs = &tctx->regs[TM_QW1_OS];
Xive2Nvp nvp;
- uint8_t ipb;
/*
* Grab the associated thread interrupt context registers in the
@@ -594,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
}
+ regs[TM_IPB] = ipb;
+ backlog_prio = xive_ipb_to_pipr(ipb);
+ backlog_level = 0;
+
/*
- * Always call xive_tctx_ipb_update(). Even if there were no
- * escalation triggered, there could be a pending interrupt which
- * was saved when the context was pulled and that we need to take
- * into account by recalculating the PIPR (which is not
- * saved/restored).
- * It will also raise the External interrupt signal if needed.
+ * Compute the PIPR based on the restored state.
+ * It will raise the External interrupt signal if needed.
*/
- xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
}
/*
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 3dcf147198..7435728c51 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -282,7 +282,7 @@ xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "EN
xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
-xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring) "found NVT 0x%x/0x%x ring=0x%x"
+xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
# pnv_xive.c
--
2.43.0
* [PATCH v2 02/14] ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr()
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
2024-12-10 0:05 ` [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 3:45 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Add grouping level to notification Michael Kowal
` (21 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
Renamed the function to follow the naming convention of the other functions.
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 16 ++++++++++++----
hw/intc/xive.c | 22 ++++++----------------
2 files changed, 18 insertions(+), 20 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index ebee982528..41a4263a9d 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -130,11 +130,9 @@
* TCTX Thread interrupt Context
*
*
- * Copyright (c) 2017-2018, IBM Corporation.
- *
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * Copyright (c) 2017-2024, IBM Corporation.
*
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef PPC_XIVE_H
@@ -510,6 +508,16 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
0 : 1 << (XIVE_PRIORITY_MAX - priority);
}
+/*
+ * Convert an Interrupt Pending Buffer (IPB) register to a Pending
+ * Interrupt Priority Register (PIPR), which contains the priority of
+ * the most favored pending notification.
+ */
+static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
+{
+ return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
+}
+
/*
* XIVE Thread Interrupt Management Aera (TIMA)
*
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 245e4d181a..7b06a48139 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -3,8 +3,7 @@
*
* Copyright (c) 2017-2018, IBM Corporation.
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
@@ -27,15 +26,6 @@
* XIVE Thread Interrupt Management context
*/
-/*
- * Convert an Interrupt Pending Buffer (IPB) register to a Pending
- * Interrupt Priority Register (PIPR), which contains the priority of
- * the most favored pending notification.
- */
-static uint8_t ipb_to_pipr(uint8_t ibp)
-{
- return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
-}
static uint8_t exception_mask(uint8_t ring)
{
@@ -159,7 +149,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
* Recompute the PIPR based on local pending interrupts. The PHYS
* ring must take the minimum of both the PHYS and POOL PIPR values.
*/
- pipr_min = ipb_to_pipr(regs[TM_IPB]);
+ pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
ring_min = ring;
/* PHYS updates also depend on POOL values */
@@ -169,7 +159,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
/* POOL values only matter if POOL ctx is valid */
if (pool_regs[TM_WORD2] & 0x80) {
- uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
+ uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
/*
* Determine highest priority interrupt and
@@ -193,7 +183,7 @@ void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
uint8_t *regs = &tctx->regs[ring];
regs[TM_IPB] |= ipb;
- regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
+ regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
xive_tctx_notify(tctx, ring);
}
@@ -841,9 +831,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
* CPPR is first set.
*/
tctx->regs[TM_QW1_OS + TM_PIPR] =
- ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
+ xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
- ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
+ xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
}
static void xive_tctx_realize(DeviceState *dev, Error **errp)
--
2.43.0
* [PATCH v2 03/14] ppc/xive2: Add grouping level to notification
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (2 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr() Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 3:43 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
` (20 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
The NSR has a (so far unused) grouping level field. When an interrupt
is presented, that field tells the hypervisor or OS whether the
interrupt is for an individual VP or for a VP-group/crowd. This patch
reworks the presentation API to allow setting/unsetting the level when
raising/accepting an interrupt.
It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update(), as
the IPB is only used for VP-specific targets, whereas the PIPR always
needs to be updated.
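For reference, the core of the reworked xive_tctx_pipr_update() (copied
from the hunk below), showing that a VP-specific notification still goes
through the IPB while a VP-group notification sets the PIPR directly
from the presented priority:

    if (group_level == 0) {
        /* VP-specific: accumulate in the IPB, then derive the PIPR */
        regs[TM_IPB] |= xive_priority_to_ipb(priority);
        alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
    } else {
        /* VP-group: the PIPR is set directly from the priority */
        alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
    }
    xive_tctx_notify(tctx, ring, group_level);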
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 9 +++-
include/hw/ppc/xive_regs.h | 25 ++++++++---
hw/intc/xive.c | 88 ++++++++++++++++++++++----------------
hw/intc/xive2.c | 19 ++++----
hw/intc/trace-events | 2 +-
5 files changed, 90 insertions(+), 53 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 41a4263a9d..4d1ce376f1 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -508,6 +508,11 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
0 : 1 << (XIVE_PRIORITY_MAX - priority);
}
+static inline uint8_t xive_priority_to_pipr(uint8_t priority)
+{
+ return priority > XIVE_PRIORITY_MAX ? 0xFF : priority;
+}
+
/*
* Convert an Interrupt Pending Buffer (IPB) register to a Pending
* Interrupt Priority Register (PIPR), which contains the priority of
@@ -540,8 +545,10 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
void xive_tctx_reset(XiveTCTX *tctx);
void xive_tctx_destroy(XiveTCTX *tctx);
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb);
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level);
void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
/*
* KVM XIVE device helpers
diff --git a/include/hw/ppc/xive_regs.h b/include/hw/ppc/xive_regs.h
index 326327fc79..54bc6c53b4 100644
--- a/include/hw/ppc/xive_regs.h
+++ b/include/hw/ppc/xive_regs.h
@@ -7,10 +7,9 @@
* access to the different fields.
*
*
- * Copyright (c) 2016-2018, IBM Corporation.
+ * Copyright (c) 2016-2024, IBM Corporation.
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef PPC_XIVE_REGS_H
@@ -146,7 +145,14 @@
#define TM_SPC_PULL_PHYS_CTX_OL 0xc38 /* Pull phys ctx to odd cache line */
/* XXX more... */
-/* NSR fields for the various QW ack types */
+/*
+ * NSR fields for the various QW ack types
+ *
+ * P10 has an extra bit in QW3 for the group level instead of the
+ * reserved 'i' bit. Since it is not used and we don't support group
+ * interrupts on P9, we use the P10 definition for the group level so
+ * that we can have common macros for the NSR
+ */
#define TM_QW0_NSR_EB PPC_BIT8(0)
#define TM_QW1_NSR_EO PPC_BIT8(0)
#define TM_QW3_NSR_HE PPC_BITMASK8(0, 1)
@@ -154,8 +160,15 @@
#define TM_QW3_NSR_HE_POOL 1
#define TM_QW3_NSR_HE_PHYS 2
#define TM_QW3_NSR_HE_LSI 3
-#define TM_QW3_NSR_I PPC_BIT8(2)
-#define TM_QW3_NSR_GRP_LVL PPC_BIT8(3, 7)
+#define TM_NSR_GRP_LVL PPC_BITMASK8(2, 7)
+/*
+ * On P10, the format of the 6-bit group level is: 2 bits for the
+ * crowd size and 4 bits for the group size. Since group/crowd size is
+ * always a power of 2, we encode the log. For example, group_level=4
+ * means crowd size = 0 and group size = 16 (2^4)
+ * Same encoding is used in the NVP and NVGC structures for
+ * PGoFirst and PGoNext fields
+ */
/*
* EAS (Event Assignment Structure)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 7b06a48139..d2690a7d10 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -26,19 +26,6 @@
* XIVE Thread Interrupt Management context
*/
-
-static uint8_t exception_mask(uint8_t ring)
-{
- switch (ring) {
- case TM_QW1_OS:
- return TM_QW1_NSR_EO;
- case TM_QW3_HV_PHYS:
- return TM_QW3_NSR_HE;
- default:
- g_assert_not_reached();
- }
-}
-
static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
{
switch (ring) {
@@ -58,11 +45,10 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
{
uint8_t *regs = &tctx->regs[ring];
uint8_t nsr = regs[TM_NSR];
- uint8_t mask = exception_mask(ring);
qemu_irq_lower(xive_tctx_output(tctx, ring));
- if (regs[TM_NSR] & mask) {
+ if (regs[TM_NSR] != 0) {
uint8_t cppr = regs[TM_PIPR];
uint8_t alt_ring;
uint8_t *alt_regs;
@@ -77,11 +63,18 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
regs[TM_CPPR] = cppr;
- /* Reset the pending buffer bit */
- alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ /*
+ * If the interrupt was for a specific VP, reset the pending
+ * buffer bit, otherwise clear the logical server indicator
+ */
+ if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
+ regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
+ } else {
+ alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ }
- /* Drop Exception bit */
- regs[TM_NSR] &= ~mask;
+ /* Drop the exception bit and any group/crowd */
+ regs[TM_NSR] = 0;
trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
alt_regs[TM_IPB], regs[TM_PIPR],
@@ -91,7 +84,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
return ((uint64_t)nsr << 8) | regs[TM_CPPR];
}
-static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
+void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
{
/* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
@@ -101,13 +94,13 @@ static void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring)
if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
switch (ring) {
case TM_QW1_OS:
- regs[TM_NSR] |= TM_QW1_NSR_EO;
+ regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
break;
case TM_QW2_HV_POOL:
- alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6);
+ alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
break;
case TM_QW3_HV_PHYS:
- regs[TM_NSR] |= (TM_QW3_NSR_HE_PHYS << 6);
+ regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
break;
default:
g_assert_not_reached();
@@ -175,17 +168,27 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
regs[TM_PIPR] = pipr_min;
/* CPPR has changed, check if we need to raise a pending exception */
- xive_tctx_notify(tctx, ring_min);
+ xive_tctx_notify(tctx, ring_min, 0);
}
-void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
-{
+void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level)
+ {
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *alt_regs = &tctx->regs[alt_ring];
uint8_t *regs = &tctx->regs[ring];
- regs[TM_IPB] |= ipb;
- regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
- xive_tctx_notify(tctx, ring);
-}
+ if (group_level == 0) {
+ /* VP-specific */
+ regs[TM_IPB] |= xive_priority_to_ipb(priority);
+ alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+ } else {
+ /* VP-group */
+ alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
+ }
+ xive_tctx_notify(tctx, ring, group_level);
+ }
/*
* XIVE Thread Interrupt Management Area (TIMA)
@@ -401,13 +404,13 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
}
/*
- * Adjust the IPB to allow a CPU to process event queues of other
+ * Adjust the PIPR to allow a CPU to process event queues of other
* priorities during one physical interrupt cycle.
*/
static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size)
{
- xive_tctx_ipb_update(tctx, TM_QW1_OS, xive_priority_to_ipb(value & 0xff));
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
}
static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
@@ -485,16 +488,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
/* Reset the NVT value */
nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
+
+ uint8_t *regs = &tctx->regs[TM_QW1_OS];
+ regs[TM_IPB] |= ipb;
}
+
/*
- * Always call xive_tctx_ipb_update(). Even if there were no
+ * Always call xive_tctx_pipr_update(). Even if there were no
* escalation triggered, there could be a pending interrupt which
* was saved when the context was pulled and that we need to take
* into account by recalculating the PIPR (which is not
* saved/restored).
* It will also raise the External interrupt signal if needed.
*/
- xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
}
/*
@@ -1650,6 +1657,12 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
}
+static uint8_t xive_get_group_level(uint32_t nvp_index)
+{
+ /* FIXME add crowd encoding */
+ return ctz32(~nvp_index) + 1;
+}
+
/*
* The thread context register words are in big-endian format.
*/
@@ -1735,6 +1748,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+ uint8_t group_level;
int count;
/*
@@ -1748,9 +1762,9 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
/* handle CPU exception delivery */
if (count) {
- trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring);
- xive_tctx_ipb_update(match.tctx, match.ring,
- xive_priority_to_ipb(priority));
+ group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+ trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
+ xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
}
return !!count;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 24e504fce1..54e4f784fc 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -563,8 +563,11 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
- Xive2Nvp nvp;
uint8_t ipb;
+ uint8_t backlog_level;
+ uint8_t backlog_prio;
+ uint8_t *regs = &tctx->regs[TM_QW1_OS];
+ Xive2Nvp nvp;
/*
* Grab the associated thread interrupt context registers in the
@@ -593,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
}
+ regs[TM_IPB] = ipb;
+ backlog_prio = xive_ipb_to_pipr(ipb);
+ backlog_level = 0;
+
/*
- * Always call xive_tctx_ipb_update(). Even if there were no
- * escalation triggered, there could be a pending interrupt which
- * was saved when the context was pulled and that we need to take
- * into account by recalculating the PIPR (which is not
- * saved/restored).
- * It will also raise the External interrupt signal if needed.
+ * Compute the PIPR based on the restored state.
+ * It will raise the External interrupt signal if needed.
*/
- xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
+ xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
}
/*
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 3dcf147198..7435728c51 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -282,7 +282,7 @@ xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "EN
xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
-xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring) "found NVT 0x%x/0x%x ring=0x%x"
+xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
# pnv_xive.c
--
2.43.0
* [PATCH v2 03/14] ppc/xive2: Support group-matching when looking for target
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (3 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Add grouping level to notification Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 3:52 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 04/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
` (19 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If an END has the 'i' bit set (ignore), then it targets a group of
VPs. The size of the group depends on the VP index of the target (the
position of the first 0 bit when scanning the index from the least
significant bit), so a mask is applied to the VP index of a running
thread to determine whether we have a match.
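For illustration only, a minimal sketch of the mask computation
described above, assuming ctz32() from qemu/host-utils.h (it mirrors
the xive_get_vpgroup_size() helper added by the patch):

    /*
     * The group size is 2^(n+1), where n is the position of the first
     * 0 bit of the NVP index, counting from the least significant bit.
     * Only the bits above that position take part in the CAM match.
     */
    static uint32_t vp_group_mask(uint32_t nvp_index)
    {
        uint32_t group_size = 1u << (ctz32(~nvp_index) + 1);

        return ~(group_size - 1);
    }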
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 5 +++-
include/hw/ppc/xive2.h | 1 +
hw/intc/pnv_xive2.c | 33 ++++++++++++++-------
hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
hw/intc/xive2.c | 65 ++++++++++++++++++++++++++++++------------
5 files changed, 114 insertions(+), 46 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 971da029eb..21ce5a9df3 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
typedef struct XiveTCTXMatch {
XiveTCTX *tctx;
uint8_t ring;
+ bool precluded;
} XiveTCTXMatch;
#define TYPE_XIVE_PRESENTER "xive-presenter"
@@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
- uint32_t logic_serv);
+ uint32_t logic_serv, bool *precluded);
+
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
/*
* XIVE Fabric (Interface between Interrupt Controller and Machine)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 5bccf41159..17c31fcb4b 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size);
void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 834d32287b..3fb466bb2c 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
logic_serv);
}
- /*
- * Save the context and follow on to catch duplicates,
- * that we don't support yet.
- */
if (ring != -1) {
- if (match->tctx) {
+ /*
+ * For VP-specific match, finding more than one is a
+ * problem. For group notification, it's possible.
+ */
+ if (!cam_ignore && match->tctx) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
"thread context NVT %x/%x\n",
nvt_blk, nvt_idx);
- return false;
+ /* Should set a FIR if we ever model it */
+ return -1;
+ }
+ /*
+ * For a group notification, we need to know if the
+ * match is precluded first by checking the current
+ * thread priority. If the interrupt can be delivered,
+ * we always notify the first match (for now).
+ */
+ if (cam_ignore &&
+ xive2_tm_irq_precluded(tctx, ring, priority)) {
+ match->precluded = true;
+ } else {
+ if (!match->tctx) {
+ match->ring = ring;
+ match->tctx = tctx;
+ }
+ count++;
}
-
- match->ring = ring;
- match->tctx = tctx;
- count++;
}
}
}
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 6e73f7b063..9345cddead 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
}
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
+{
+ /*
+ * Group size is a power of 2. The position of the first 0
+ * (starting with the least significant bits) in the NVP index
+ * gives the size of the group.
+ */
+ return 1 << (ctz32(~nvp_index) + 1);
+}
+
static uint8_t xive_get_group_level(uint32_t nvp_index)
{
/* FIXME add crowd encoding */
@@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
/*
* This is our simple Xive Presenter Engine model. It is merged in the
* Router as it does not require an extra object.
- *
- * It receives notification requests sent by the IVRE to find one
- * matching NVT (or more) dispatched on the processor threads. In case
- * of a single NVT notification, the process is abbreviated and the
- * thread is signaled if a match is found. In case of a logical server
- * notification (bits ignored at the end of the NVT identifier), the
- * IVPE and IVRE select a winning thread using different filters. This
- * involves 2 or 3 exchanges on the PowerBus that the model does not
- * support.
- *
- * The parameters represent what is sent on the PowerBus
*/
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
- uint32_t logic_serv)
+ uint32_t logic_serv, bool *precluded)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
- XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+ XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
uint8_t group_level;
int count;
/*
- * Ask the machine to scan the interrupt controllers for a match
+ * Ask the machine to scan the interrupt controllers for a match.
+ *
+ * For VP-specific notification, we expect at most one match and
+ * one call to the presenters is all we need (abbreviated notify
+ * sequence documented by the architecture).
+ *
+ * For VP-group notification, match_nvt() is the equivalent of the
+ * "histogram" and "poll" commands sent to the power bus to the
+ * presenters. 'count' could be more than one, but we always
+ * select the first match for now. 'precluded' tells if (at least)
+ * one thread matches but can't take the interrupt now because
+ * it's running at a more favored priority. We return the
+ * information to the router so that it can take appropriate
+ * actions (backlog, escalation, broadcast, etc...)
+ *
+ * If we were to implement a better way of dispatching the
+ * interrupt in case of multiple matches (instead of the first
+ * match), we would need a heuristic to elect a thread (for
+ * example, the hardware keeps track of an 'age' in the TIMA) and
+ * a new command to the presenters (the equivalent of the "assign"
+ * power bus command in the documented full notify sequence.
*/
count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
priority, logic_serv, &match);
@@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
+ } else {
+ *precluded = match.precluded;
}
return !!count;
@@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
uint8_t nvt_blk;
uint32_t nvt_idx;
XiveNVT nvt;
- bool found;
+ bool found, precluded;
uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
@@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
xive_get_field32(END_W7_F0_IGNORE, end.w7),
priority,
- xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
-
+ xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
+ &precluded);
+ /* we don't support VP-group notification on P9, so precluded is not used */
/* TODO: Auto EOI. */
if (found) {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index db372f4b30..2cb03c758e 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
}
+static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
+ uint32_t vp_mask)
+{
+ return (cam1 & vp_mask) == (cam2 & vp_mask);
+}
+
/*
* The thread context register words are in big-endian format.
*/
@@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
- /*
- * TODO (PowerNV): ignore mode. The low order bits of the NVT
- * identifier are ignored in the "CAM" match.
- */
+ uint32_t vp_mask = 0xFFFFFFFF;
if (format == 0) {
- if (cam_ignore == true) {
- /*
- * F=0 & i=1: Logical server notification (bits ignored at
- * the end of the NVT identifier)
- */
- qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
- nvt_blk, nvt_idx);
- return -1;
+ /*
+ * i=0: Specific NVT notification
+ * i=1: VP-group notification (bits ignored at the end of the
+ * NVT identifier)
+ */
+ if (cam_ignore) {
+ vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
}
- /* F=0 & i=0: Specific NVT notification */
+ /* For VP-group notifications, threads with LGS=0 are excluded */
/* PHYS ring */
if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
- cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
+ !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive2_tctx_hw_cam_line(xptr, tctx),
+ vp_mask)) {
return TM_QW3_HV_PHYS;
}
/* HV POOL ring */
if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
- cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
+ !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
+ vp_mask)) {
return TM_QW2_HV_POOL;
}
/* OS ring */
if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
- cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
+ !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
+ vp_mask)) {
return TM_QW1_OS;
}
} else {
/* F=1 : User level Event-Based Branch (EBB) notification */
+ /* FIXME: what if cam_ignore and LGS = 0 ? */
/* USER ring */
if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
(cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
@@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
return -1;
}
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+ uint8_t *regs = &tctx->regs[ring];
+
+ /*
+ * The xive2_presenter_tctx_match() above tells if there's a match
+ * but for VP-group notification, we still need to look at the
+ * priority to know if the thread can take the interrupt now or if
+ * it is precluded.
+ */
+ if (priority < regs[TM_CPPR]) {
+ return false;
+ }
+ return true;
+}
+
static void xive2_router_realize(DeviceState *dev, Error **errp)
{
Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -841,7 +869,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
Xive2End end;
uint8_t priority;
uint8_t format;
- bool found;
+ bool found, precluded;
Xive2Nvp nvp;
uint8_t nvp_blk;
uint32_t nvp_idx;
@@ -922,7 +950,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
xive2_end_is_ignore(&end),
priority,
- xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7));
+ xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
+ &precluded);
/* TODO: Auto EOI. */
--
2.43.0
* [PATCH v2 04/14] ppc/xive2: Add undelivered group interrupt to backlog
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (4 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 04/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
` (18 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When a group interrupt cannot be delivered, we need to:
- increment the backlog counter for the group in the NVG table
(if the END is configured to keep a backlog).
- start a broadcast operation to set the LSMFB field on matching CPUs
which can't take the interrupt now because they're running at too
high a priority.
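For illustration only, a minimal sketch of how one of the 24-bit,
big-endian per-priority backlog counters stored in the NVG structure is
bumped (it mirrors the xive2_nvgc_set_backlog() helper added below):

    /* Each priority owns a 3-byte counter starting at w2, big endian */
    uint8_t *ptr = (uint8_t *)&nvgc->w2 + priority * 3;
    uint32_t val = (ptr[0] << 16) | (ptr[1] << 8) | ptr[2];

    if (val < 0xFFFFFF) {           /* saturate at the 24-bit maximum */
        val++;
    }
    ptr[0] = (val >> 16) & 0xFF;
    ptr[1] = (val >> 8) & 0xFF;
    ptr[2] = val & 0xFF;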
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 5 ++
include/hw/ppc/xive2.h | 1 +
hw/intc/pnv_xive2.c | 42 +++++++++++++++++
hw/intc/xive2.c | 105 +++++++++++++++++++++++++++++++++++------
hw/ppc/pnv.c | 18 +++++++
5 files changed, 156 insertions(+), 15 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 21ce5a9df3..c15cd4358d 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -444,6 +444,9 @@ struct XivePresenterClass {
uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
+ int (*broadcast)(XivePresenter *xptr,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority);
};
int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -474,6 +477,8 @@ struct XiveFabricClass {
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
+ int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority);
};
/*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 17c31fcb4b..d88db05687 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -122,6 +122,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 3fb466bb2c..0482193fd7 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -706,6 +706,47 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
return cfg;
}
+static int pnv_xive2_broadcast(XivePresenter *xptr,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority)
+{
+ PnvXive2 *xive = PNV_XIVE2(xptr);
+ PnvChip *chip = xive->chip;
+ int i, j;
+ bool gen1_tima_os =
+ xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
+
+ for (i = 0; i < chip->nr_cores; i++) {
+ PnvCore *pc = chip->cores[i];
+ CPUCore *cc = CPU_CORE(pc);
+
+ for (j = 0; j < cc->nr_threads; j++) {
+ PowerPCCPU *cpu = pc->threads[j];
+ XiveTCTX *tctx;
+ int ring;
+
+ if (!pnv_xive2_is_cpu_enabled(xive, cpu)) {
+ continue;
+ }
+
+ tctx = XIVE_TCTX(pnv_cpu_state(cpu)->intc);
+
+ if (gen1_tima_os) {
+ ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+ nvt_idx, true, 0);
+ } else {
+ ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+ nvt_idx, true, 0);
+ }
+
+ if (ring != -1) {
+ xive2_tm_set_lsmfb(tctx, ring, priority);
+ }
+ }
+ }
+ return 0;
+}
+
static uint8_t pnv_xive2_get_block_id(Xive2Router *xrtr)
{
return pnv_xive2_block_id(PNV_XIVE2(xrtr));
@@ -2446,6 +2487,7 @@ static void pnv_xive2_class_init(ObjectClass *klass, void *data)
xpc->match_nvt = pnv_xive2_match_nvt;
xpc->get_config = pnv_xive2_presenter_get_config;
+ xpc->broadcast = pnv_xive2_broadcast;
};
static const TypeInfo pnv_xive2_info = {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 2cb03c758e..a6dc6d553f 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -63,6 +63,30 @@ static uint32_t xive2_nvgc_get_backlog(Xive2Nvgc *nvgc, uint8_t priority)
return val;
}
+static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
+ uint32_t val)
+{
+ uint8_t *ptr, i;
+ uint32_t shift;
+
+ if (priority > 7) {
+ return;
+ }
+
+ if (val > 0xFFFFFF) {
+ val = 0xFFFFFF;
+ }
+ /*
+ * The per-priority backlog counters are 24-bit and the structure
+ * is stored in big endian
+ */
+ ptr = (uint8_t *)&nvgc->w2 + priority * 3;
+ for (i = 0; i < 3; i++, ptr++) {
+ shift = 8 * (2 - i);
+ *ptr = (val >> shift) & 0xFF;
+ }
+}
+
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
{
if (!xive2_eas_is_valid(eas)) {
@@ -830,6 +854,19 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
return true;
}
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+ uint8_t *regs = &tctx->regs[ring];
+
+ /*
+ * Called by the router during a VP-group notification when the
+ * thread matches but can't take the interrupt because it's
+ * already running at a more favored priority. It then stores the
+ * new interrupt priority in the LSMFB field.
+ */
+ regs[TM_LSMFB] = priority;
+}
+
static void xive2_router_realize(DeviceState *dev, Error **errp)
{
Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -962,10 +999,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
/*
* If no matching NVP is dispatched on a HW thread :
* - specific VP: update the NVP structure if backlog is activated
- * - logical server : forward request to IVPE (not supported)
+ * - VP-group: update the backlog counter for that priority in the NVG
*/
if (xive2_end_is_backlog(&end)) {
- uint8_t ipb;
if (format == 1) {
qemu_log_mask(LOG_GUEST_ERROR,
@@ -974,19 +1010,58 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
- /*
- * Record the IPB in the associated NVP structure for later
- * use. The presenter will resend the interrupt when the vCPU
- * is dispatched again on a HW thread.
- */
- ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
- xive_priority_to_ipb(priority);
- nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
- xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
-
- /*
- * On HW, follows a "Broadcast Backlog" to IVPEs
- */
+ if (!xive2_end_is_ignore(&end)) {
+ uint8_t ipb;
+ /*
+ * Record the IPB in the associated NVP structure for later
+ * use. The presenter will resend the interrupt when the vCPU
+ * is dispatched again on a HW thread.
+ */
+ ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
+ xive_priority_to_ipb(priority);
+ nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+ xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
+ } else {
+ Xive2Nvgc nvg;
+ uint32_t backlog;
+
+ /* For groups, the per-priority backlog counters are in the NVG */
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvgc_is_valid(&nvg)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ /*
+ * Increment the backlog counter for that priority.
+ * For the precluded case, we only call broadcast the
+ * first time the counter is incremented. broadcast will
+ * set the LSMFB field of the TIMA of relevant threads so
+ * that they know an interrupt is pending.
+ */
+ backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
+ xive2_nvgc_set_backlog(&nvg, priority, backlog);
+ xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+
+ if (precluded && backlog == 1) {
+ XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
+ xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+
+ if (!xive2_end_is_precluded_escalation(&end)) {
+ /*
+ * The interrupt will be picked up when the
+ * matching thread lowers its priority level
+ */
+ return;
+ }
+ }
+ }
}
do_escalation:
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index f0f0d7567d..6c76f65936 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2639,6 +2639,23 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
return total_count;
}
+static int pnv10_xive_broadcast(XiveFabric *xfb,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority)
+{
+ PnvMachineState *pnv = PNV_MACHINE(xfb);
+ int i;
+
+ for (i = 0; i < pnv->num_chips; i++) {
+ Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
+ XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
+ XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
+
+ xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+ }
+ return 0;
+}
+
static bool pnv_machine_get_big_core(Object *obj, Error **errp)
{
PnvMachineState *pnv = PNV_MACHINE(obj);
@@ -2772,6 +2789,7 @@ static void pnv_machine_p10_common_class_init(ObjectClass *oc, void *data)
pmc->dt_power_mgt = pnv_dt_power_mgt;
xfc->match_nvt = pnv10_xive_match_nvt;
+ xfc->broadcast = pnv10_xive_broadcast;
machine_class_allow_dynamic_sysbus_dev(mc, TYPE_PNV_PHB);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 04/14] ppc/xive2: Support group-matching when looking for target
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (5 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 04/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
` (17 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If an END has the 'i' bit set (ignore), then it targets a group of
VPs. The size of the group depends on the VP index of the target (the
position of the first 0 when scanning the index from the least
significant bit), so a mask is applied to the VP index of a running
thread to know if we have a match.
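A minimal standalone sketch of that size/mask computation, mirroring
the xive_get_vpgroup_size() helper added below but using a compiler
builtin in place of QEMU's ctz32():

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: group size from the position of the first 0 bit. */
static uint32_t vpgroup_size(uint32_t nvp_index)
{
    /* first 0 bit, starting from the least significant bit */
    return 1u << (__builtin_ctz(~nvp_index) + 1);
}

int main(void)
{
    uint32_t idx = 0x81;   /* 0b1000_0001: first 0 at bit 1 */
    uint32_t size = vpgroup_size(idx);

    printf("group size = %u\n", size);          /* 4 */
    printf("vp mask    = 0x%x\n", ~(size - 1)); /* 0xfffffffc */
    return 0;
}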
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 5 +++-
include/hw/ppc/xive2.h | 7 ++---
hw/intc/pnv_xive2.c | 38 +++++++++++++++---------
hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
hw/intc/xive2.c | 65 ++++++++++++++++++++++++++++++------------
5 files changed, 118 insertions(+), 53 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 4d1ce376f1..ce4eb9726b 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -422,6 +422,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
typedef struct XiveTCTXMatch {
XiveTCTX *tctx;
uint8_t ring;
+ bool precluded;
} XiveTCTXMatch;
#define TYPE_XIVE_PRESENTER "xive-presenter"
@@ -450,7 +451,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
- uint32_t logic_serv);
+ uint32_t logic_serv, bool *precluded);
+
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
/*
* XIVE Fabric (Interface between Interrupt Controller and Machine)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 5bccf41159..65154f78d8 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -1,11 +1,9 @@
/*
* QEMU PowerPC XIVE2 interrupt controller model (POWER10)
*
- * Copyright (c) 2019-2022, IBM Corporation.
- *
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * Copyright (c) 2019-2024, IBM Corporation.
*
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef PPC_XIVE2_H
@@ -121,6 +119,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size);
void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 834d32287b..5cdd4fdcc9 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1,10 +1,9 @@
/*
* QEMU PowerPC XIVE2 interrupt controller model (POWER10)
*
- * Copyright (c) 2019-2022, IBM Corporation.
+ * Copyright (c) 2019-2024, IBM Corporation.
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
@@ -660,21 +659,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
logic_serv);
}
- /*
- * Save the context and follow on to catch duplicates,
- * that we don't support yet.
- */
if (ring != -1) {
- if (match->tctx) {
+ /*
+ * For VP-specific match, finding more than one is a
+ * problem. For group notification, it's possible.
+ */
+ if (!cam_ignore && match->tctx) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
"thread context NVT %x/%x\n",
nvt_blk, nvt_idx);
- return false;
+ /* Should set a FIR if we ever model it */
+ return -1;
+ }
+ /*
+ * For a group notification, we need to know if the
+ * match is precluded first by checking the current
+ * thread priority. If the interrupt can be delivered,
+ * we always notify the first match (for now).
+ */
+ if (cam_ignore &&
+ xive2_tm_irq_precluded(tctx, ring, priority)) {
+ match->precluded = true;
+ } else {
+ if (!match->tctx) {
+ match->ring = ring;
+ match->tctx = tctx;
+ }
+ count++;
}
-
- match->ring = ring;
- match->tctx = tctx;
- count++;
}
}
}
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index d2690a7d10..412bb94b91 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1657,6 +1657,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
}
+uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
+{
+ /*
+ * Group size is a power of 2. The position of the first 0
+ * (starting with the least significant bits) in the NVP index
+ * gives the size of the group.
+ */
+ return 1 << (ctz32(~nvp_index) + 1);
+}
+
static uint8_t xive_get_group_level(uint32_t nvp_index)
{
/* FIXME add crowd encoding */
@@ -1729,30 +1739,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
/*
* This is our simple Xive Presenter Engine model. It is merged in the
* Router as it does not require an extra object.
- *
- * It receives notification requests sent by the IVRE to find one
- * matching NVT (or more) dispatched on the processor threads. In case
- * of a single NVT notification, the process is abbreviated and the
- * thread is signaled if a match is found. In case of a logical server
- * notification (bits ignored at the end of the NVT identifier), the
- * IVPE and IVRE select a winning thread using different filters. This
- * involves 2 or 3 exchanges on the PowerBus that the model does not
- * support.
- *
- * The parameters represent what is sent on the PowerBus
*/
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
- uint32_t logic_serv)
+ uint32_t logic_serv, bool *precluded)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
- XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
+ XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
uint8_t group_level;
int count;
/*
- * Ask the machine to scan the interrupt controllers for a match
+ * Ask the machine to scan the interrupt controllers for a match.
+ *
+ * For VP-specific notification, we expect at most one match and
+ * one call to the presenters is all we need (abbreviated notify
+ * sequence documented by the architecture).
+ *
+ * For VP-group notification, match_nvt() is the equivalent of the
+ * "histogram" and "poll" commands sent to the power bus to the
+ * presenters. 'count' could be more than one, but we always
+ * select the first match for now. 'precluded' tells if (at least)
+ * one thread matches but can't take the interrupt now because
+ * it's running at a more favored priority. We return the
+ * information to the router so that it can take appropriate
+ * actions (backlog, escalation, broadcast, etc...)
+ *
+ * If we were to implement a better way of dispatching the
+ * interrupt in case of multiple matches (instead of the first
+ * match), we would need a heuristic to elect a thread (for
+ * example, the hardware keeps track of an 'age' in the TIMA) and
+ * a new command to the presenters (the equivalent of the "assign"
+ * power bus command in the documented full notify sequence.
*/
count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
priority, logic_serv, &match);
@@ -1765,6 +1784,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
+ } else {
+ *precluded = match.precluded;
}
return !!count;
@@ -1804,7 +1825,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
uint8_t nvt_blk;
uint32_t nvt_idx;
XiveNVT nvt;
- bool found;
+ bool found, precluded;
uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
@@ -1887,8 +1908,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
xive_get_field32(END_W7_F0_IGNORE, end.w7),
priority,
- xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
-
+ xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
+ &precluded);
+ /* we don't support VP-group notification on P9, so precluded is not used */
/* TODO: Auto EOI. */
if (found) {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 54e4f784fc..cffcf3ff05 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
}
+static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
+ uint32_t vp_mask)
+{
+ return (cam1 & vp_mask) == (cam2 & vp_mask);
+}
+
/*
* The thread context register words are in big-endian format.
*/
@@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
- /*
- * TODO (PowerNV): ignore mode. The low order bits of the NVT
- * identifier are ignored in the "CAM" match.
- */
+ uint32_t vp_mask = 0xFFFFFFFF;
if (format == 0) {
- if (cam_ignore == true) {
- /*
- * F=0 & i=1: Logical server notification (bits ignored at
- * the end of the NVT identifier)
- */
- qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
- nvt_blk, nvt_idx);
- return -1;
+ /*
+ * i=0: Specific NVT notification
+ * i=1: VP-group notification (bits ignored at the end of the
+ * NVT identifier)
+ */
+ if (cam_ignore) {
+ vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
}
- /* F=0 & i=0: Specific NVT notification */
+ /* For VP-group notifications, threads with LGS=0 are excluded */
/* PHYS ring */
if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
- cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
+ !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive2_tctx_hw_cam_line(xptr, tctx),
+ vp_mask)) {
return TM_QW3_HV_PHYS;
}
/* HV POOL ring */
if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
- cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
+ !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
+ vp_mask)) {
return TM_QW2_HV_POOL;
}
/* OS ring */
if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
- cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
+ !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
+ xive2_vp_match_mask(cam,
+ xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
+ vp_mask)) {
return TM_QW1_OS;
}
} else {
/* F=1 : User level Event-Based Branch (EBB) notification */
+ /* FIXME: what if cam_ignore and LGS = 0 ? */
/* USER ring */
if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
(cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
@@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
return -1;
}
+bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+ uint8_t *regs = &tctx->regs[ring];
+
+ /*
+ * The xive2_presenter_tctx_match() above tells if there's a match
+ * but for VP-group notification, we still need to look at the
+ * priority to know if the thread can take the interrupt now or if
+ * it is precluded.
+ */
+ if (priority < regs[TM_CPPR]) {
+ return false;
+ }
+ return true;
+}
+
static void xive2_router_realize(DeviceState *dev, Error **errp)
{
Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -841,7 +869,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
Xive2End end;
uint8_t priority;
uint8_t format;
- bool found;
+ bool found, precluded;
Xive2Nvp nvp;
uint8_t nvp_blk;
uint32_t nvp_idx;
@@ -922,7 +950,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
xive2_end_is_ignore(&end),
priority,
- xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7));
+ xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
+ &precluded);
/* TODO: Auto EOI. */
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 05/14] ppc/xive2: Add undelivered group interrupt to backlog
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (6 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 04/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 4:07 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
` (16 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When a group interrupt cannot be delivered, we need to:
- increment the backlog counter for the group in the NVG table
(if the END is configured to keep a backlog).
- start a broadcast operation to set the LSMFB field on matching CPUs
which can't take the interrupt now because they're running at too
high a priority.
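A minimal sketch of the counter layout that the new
xive2_nvgc_get/set_backlog() helpers operate on: eight per-priority
counters, 24 bits each, stored big-endian starting at word 2 of the
NVG entry:

#include <stdint.h>

/* Sketch: pack a 24-bit backlog counter into an NVGC-like byte array. */
static void nvgc_set_backlog(uint8_t *w2, uint8_t priority, uint32_t val)
{
    uint8_t *ptr;

    if (priority > 7) {
        return;
    }
    if (val > 0xFFFFFF) {
        val = 0xFFFFFF;            /* counters saturate at 24 bits */
    }
    ptr = w2 + priority * 3;       /* 3 bytes per priority */
    ptr[0] = (val >> 16) & 0xFF;   /* big-endian: most significant byte first */
    ptr[1] = (val >> 8) & 0xFF;
    ptr[2] = val & 0xFF;
}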
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 5 ++
include/hw/ppc/xive2.h | 1 +
hw/intc/pnv_xive2.c | 42 +++++++++++++++++
hw/intc/xive2.c | 105 +++++++++++++++++++++++++++++++++++------
hw/ppc/pnv.c | 22 ++++++++-
5 files changed, 159 insertions(+), 16 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index ce4eb9726b..f443a39cf1 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -442,6 +442,9 @@ struct XivePresenterClass {
uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
+ int (*broadcast)(XivePresenter *xptr,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority);
};
int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -472,6 +475,8 @@ struct XiveFabricClass {
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
+ int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority);
};
/*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 65154f78d8..ebf301bb5b 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -120,6 +120,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 5cdd4fdcc9..41b727d1fb 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -705,6 +705,47 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
return cfg;
}
+static int pnv_xive2_broadcast(XivePresenter *xptr,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority)
+{
+ PnvXive2 *xive = PNV_XIVE2(xptr);
+ PnvChip *chip = xive->chip;
+ int i, j;
+ bool gen1_tima_os =
+ xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
+
+ for (i = 0; i < chip->nr_cores; i++) {
+ PnvCore *pc = chip->cores[i];
+ CPUCore *cc = CPU_CORE(pc);
+
+ for (j = 0; j < cc->nr_threads; j++) {
+ PowerPCCPU *cpu = pc->threads[j];
+ XiveTCTX *tctx;
+ int ring;
+
+ if (!pnv_xive2_is_cpu_enabled(xive, cpu)) {
+ continue;
+ }
+
+ tctx = XIVE_TCTX(pnv_cpu_state(cpu)->intc);
+
+ if (gen1_tima_os) {
+ ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+ nvt_idx, true, 0);
+ } else {
+ ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
+ nvt_idx, true, 0);
+ }
+
+ if (ring != -1) {
+ xive2_tm_set_lsmfb(tctx, ring, priority);
+ }
+ }
+ }
+ return 0;
+}
+
static uint8_t pnv_xive2_get_block_id(Xive2Router *xrtr)
{
return pnv_xive2_block_id(PNV_XIVE2(xrtr));
@@ -2445,6 +2486,7 @@ static void pnv_xive2_class_init(ObjectClass *klass, void *data)
xpc->match_nvt = pnv_xive2_match_nvt;
xpc->get_config = pnv_xive2_presenter_get_config;
+ xpc->broadcast = pnv_xive2_broadcast;
};
static const TypeInfo pnv_xive2_info = {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index cffcf3ff05..05cb17518d 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -62,6 +62,30 @@ static uint32_t xive2_nvgc_get_backlog(Xive2Nvgc *nvgc, uint8_t priority)
return val;
}
+static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
+ uint32_t val)
+{
+ uint8_t *ptr, i;
+ uint32_t shift;
+
+ if (priority > 7) {
+ return;
+ }
+
+ if (val > 0xFFFFFF) {
+ val = 0xFFFFFF;
+ }
+ /*
+ * The per-priority backlog counters are 24-bit and the structure
+ * is stored in big endian
+ */
+ ptr = (uint8_t *)&nvgc->w2 + priority * 3;
+ for (i = 0; i < 3; i++, ptr++) {
+ shift = 8 * (2 - i);
+ *ptr = (val >> shift) & 0xFF;
+ }
+}
+
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
{
if (!xive2_eas_is_valid(eas)) {
@@ -830,6 +854,19 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
return true;
}
+void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority)
+{
+ uint8_t *regs = &tctx->regs[ring];
+
+ /*
+ * Called by the router during a VP-group notification when the
+ * thread matches but can't take the interrupt because it's
+ * already running at a more favored priority. It then stores the
+ * new interrupt priority in the LSMFB field.
+ */
+ regs[TM_LSMFB] = priority;
+}
+
static void xive2_router_realize(DeviceState *dev, Error **errp)
{
Xive2Router *xrtr = XIVE2_ROUTER(dev);
@@ -962,10 +999,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
/*
* If no matching NVP is dispatched on a HW thread :
* - specific VP: update the NVP structure if backlog is activated
- * - logical server : forward request to IVPE (not supported)
+ * - VP-group: update the backlog counter for that priority in the NVG
*/
if (xive2_end_is_backlog(&end)) {
- uint8_t ipb;
if (format == 1) {
qemu_log_mask(LOG_GUEST_ERROR,
@@ -974,19 +1010,58 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
- /*
- * Record the IPB in the associated NVP structure for later
- * use. The presenter will resend the interrupt when the vCPU
- * is dispatched again on a HW thread.
- */
- ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
- xive_priority_to_ipb(priority);
- nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
- xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
-
- /*
- * On HW, follows a "Broadcast Backlog" to IVPEs
- */
+ if (!xive2_end_is_ignore(&end)) {
+ uint8_t ipb;
+ /*
+ * Record the IPB in the associated NVP structure for later
+ * use. The presenter will resend the interrupt when the vCPU
+ * is dispatched again on a HW thread.
+ */
+ ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
+ xive_priority_to_ipb(priority);
+ nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+ xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
+ } else {
+ Xive2Nvgc nvg;
+ uint32_t backlog;
+
+ /* For groups, the per-priority backlog counters are in the NVG */
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvgc_is_valid(&nvg)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ /*
+ * Increment the backlog counter for that priority.
+ * For the precluded case, we only call broadcast the
+ * first time the counter is incremented. broadcast will
+ * set the LSMFB field of the TIMA of relevant threads so
+ * that they know an interrupt is pending.
+ */
+ backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
+ xive2_nvgc_set_backlog(&nvg, priority, backlog);
+ xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+
+ if (precluded && backlog == 1) {
+ XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
+ xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+
+ if (!xive2_end_is_precluded_escalation(&end)) {
+ /*
+ * The interrupt will be picked up when the
+ * matching thread lowers its priority level
+ */
+ return;
+ }
+ }
+ }
}
do_escalation:
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index f0f0d7567d..7c11143749 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -1,7 +1,9 @@
/*
* QEMU PowerPC PowerNV machine model
*
- * Copyright (c) 2016, IBM Corporation.
+ * Copyright (c) 2016-2024, IBM Corporation.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -2639,6 +2641,23 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
return total_count;
}
+static int pnv10_xive_broadcast(XiveFabric *xfb,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ uint8_t priority)
+{
+ PnvMachineState *pnv = PNV_MACHINE(xfb);
+ int i;
+
+ for (i = 0; i < pnv->num_chips; i++) {
+ Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
+ XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
+ XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
+
+ xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+ }
+ return 0;
+}
+
static bool pnv_machine_get_big_core(Object *obj, Error **errp)
{
PnvMachineState *pnv = PNV_MACHINE(obj);
@@ -2772,6 +2791,7 @@ static void pnv_machine_p10_common_class_init(ObjectClass *oc, void *data)
pmc->dt_power_mgt = pnv_dt_power_mgt;
xfc->match_nvt = pnv10_xive_match_nvt;
+ xfc->broadcast = pnv10_xive_broadcast;
machine_class_allow_dynamic_sysbus_dev(mc, TYPE_PNV_PHB);
}
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 05/14] ppc/xive2: Process group backlog when pushing an OS context
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (7 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 06/14] " Michael Kowal
` (15 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When pushing an OS context, we were already checking if there was a
pending interrupt in the IPB and sending a notification if needed. We
also need to check if there is a pending group interrupt stored in the
NVG table. To avoid useless backlog scans, we only scan if the NVP
belongs to a group.
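The scan works on NVG entries whose index is derived from the NVP
index and the group level; a minimal sketch of that derivation, as
done by the backlog helpers in this patch:

#include <stdint.h>

/* Sketch: NVG index for a given NVP index and group level. */
static uint32_t nvgc_index(uint32_t nvp_idx, uint8_t group_level)
{
    uint32_t mask = (1 << (group_level & 0xF)) - 1;

    /* clear the low 'level' bits, then set all but the topmost of them */
    return (nvp_idx & ~mask) | (mask >> 1);
}
/* e.g. NVP index 0x83 at group level 2: mask = 0x3, NVG index = 0x81 */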
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive2.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 97 insertions(+), 3 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index a6dc6d553f..7130892482 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -279,6 +279,85 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
+/*
+ * Scan the group chain and return the highest priority and group
+ * level of pending group interrupts.
+ */
+static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t first_group,
+ uint8_t *out_level)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvgc_idx, mask;
+ uint32_t current_level, count;
+ uint8_t prio;
+ Xive2Nvgc nvgc;
+
+ for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
+ current_level = first_group & 0xF;
+
+ while (current_level) {
+ mask = (1 << current_level) - 1;
+ nvgc_idx = nvp_idx & ~mask;
+ nvgc_idx |= mask >> 1;
+ qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
+ __func__, prio, nvgc_idx);
+
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return 0xFF;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return 0xFF;
+ }
+
+ count = xive2_nvgc_get_backlog(&nvgc, prio);
+ if (count) {
+ *out_level = current_level;
+ return prio;
+ }
+ current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
+ }
+ }
+ return 0xFF;
+}
+
+static void xive2_presenter_backlog_decr(XivePresenter *xptr,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t group_prio,
+ uint8_t group_level)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvgc_idx, mask, count;
+ Xive2Nvgc nvgc;
+
+ group_level &= 0xF;
+ mask = (1 << group_level) - 1;
+ nvgc_idx = nvp_idx & ~mask;
+ nvgc_idx |= mask >> 1;
+
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return;
+ }
+ count = xive2_nvgc_get_backlog(&nvgc, group_prio);
+ if (!count) {
+ return;
+ }
+ xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
+ xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
+}
+
/*
* XIVE Thread Interrupt Management Area (TIMA) - Gen2 mode
*
@@ -588,8 +667,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
- uint8_t ipb, backlog_level;
- uint8_t backlog_prio;
+ XivePresenter *xptr = XIVE_PRESENTER(xrtr);
+ uint8_t ipb, backlog_level, group_level, first_group;
+ uint8_t backlog_prio, group_prio;
uint8_t *regs = &tctx->regs[TM_QW1_OS];
Xive2Nvp nvp;
@@ -624,8 +704,22 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
backlog_prio = xive_ipb_to_pipr(ipb);
backlog_level = 0;
+ first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
+ if (first_group && regs[TM_LSMFB] < backlog_prio) {
+ group_prio = xive2_presenter_backlog_check(xptr, nvp_blk, nvp_idx,
+ first_group, &group_level);
+ regs[TM_LSMFB] = group_prio;
+ if (regs[TM_LGS] && group_prio < backlog_prio) {
+ /* VP can take a group interrupt */
+ xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
+ group_prio, group_level);
+ backlog_prio = group_prio;
+ backlog_level = group_level;
+ }
+ }
+
/*
- * Compute the PIPR based on the restored state.
+ * Compute the PIPR based on the restored state.
* It will raise the External interrupt signal if needed.
*/
xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 06/14] ppc/xive2: Process group backlog when pushing an OS context
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (8 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
` (14 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When pushing an OS context, we were already checking if there was a
pending interrupt in the IPB and sending a notification if needed. We
also need to check if there is a pending group interrupt stored in the
NVG table. To avoid useless backlog scans, we only scan if the NVP
belongs to a group.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive2.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 97 insertions(+)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 05cb17518d..bb18a56e8f 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -278,6 +278,85 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
+/*
+ * Scan the group chain and return the highest priority and group
+ * level of pending group interrupts.
+ */
+static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t first_group,
+ uint8_t *out_level)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvgc_idx, mask;
+ uint32_t current_level, count;
+ uint8_t prio;
+ Xive2Nvgc nvgc;
+
+ for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
+ current_level = first_group & 0xF;
+
+ while (current_level) {
+ mask = (1 << current_level) - 1;
+ nvgc_idx = nvp_idx & ~mask;
+ nvgc_idx |= mask >> 1;
+ qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
+ __func__, prio, nvgc_idx);
+
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return 0xFF;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return 0xFF;
+ }
+
+ count = xive2_nvgc_get_backlog(&nvgc, prio);
+ if (count) {
+ *out_level = current_level;
+ return prio;
+ }
+ current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
+ }
+ }
+ return 0xFF;
+}
+
+static void xive2_presenter_backlog_decr(XivePresenter *xptr,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t group_prio,
+ uint8_t group_level)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvgc_idx, mask, count;
+ Xive2Nvgc nvgc;
+
+ group_level &= 0xF;
+ mask = (1 << group_level) - 1;
+ nvgc_idx = nvp_idx & ~mask;
+ nvgc_idx |= mask >> 1;
+
+ if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
+ nvp_blk, nvgc_idx);
+ return;
+ }
+ count = xive2_nvgc_get_backlog(&nvgc, group_prio);
+ if (!count) {
+ return;
+ }
+ xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
+ xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
+}
+
/*
* XIVE Thread Interrupt Management Area (TIMA) - Gen2 mode
*
@@ -587,9 +666,13 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
+ XivePresenter *xptr = XIVE_PRESENTER(xrtr);
uint8_t ipb;
uint8_t backlog_level;
+ uint8_t group_level;
+ uint8_t first_group;
uint8_t backlog_prio;
+ uint8_t group_prio;
uint8_t *regs = &tctx->regs[TM_QW1_OS];
Xive2Nvp nvp;
@@ -624,6 +707,20 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
backlog_prio = xive_ipb_to_pipr(ipb);
backlog_level = 0;
+ first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
+ if (first_group && regs[TM_LSMFB] < backlog_prio) {
+ group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
+ first_group, &group_level);
+ regs[TM_LSMFB] = group_prio;
+ if (regs[TM_LGS] && group_prio < backlog_prio) {
+ /* VP can take a group interrupt */
+ xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
+ group_prio, group_level);
+ backlog_prio = group_prio;
+ backlog_level = group_level;
+ }
+ }
+
/*
* Compute the PIPR based on the restored state.
* It will raise the External interrupt signal if needed.
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 06/14] ppc/xive2: Process group backlog when updating the CPPR
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (9 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 06/14] " Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 07/14] " Michael Kowal
` (13 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When the hypervisor or OS pushes a new value to the CPPR, if the LSMFB
value is lower than the new CPPR value, there could be a pending group
interrupt in the backlog, so it needs to be scanned.
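A minimal sketch of the decision made on a CPPR update, as implemented
by xive2_tctx_set_cppr() below:

#include <stdbool.h>
#include <stdint.h>

/* Sketch: should a CPPR update scan the group backlog? */
static bool must_scan_backlog(bool group_enabled, uint8_t lsmfb_min,
                              uint8_t new_cppr, uint8_t pipr)
{
    /*
     * LSMFB advertises a pending group priority more favored than
     * both the new CPPR and any locally pending interrupt.
     */
    return group_enabled && lsmfb_min < new_cppr && lsmfb_min < pipr;
}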
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 4 +
hw/intc/xive.c | 4 +-
hw/intc/xive2.c | 173 ++++++++++++++++++++++++++++++++++++++++-
3 files changed, 177 insertions(+), 4 deletions(-)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index d88db05687..e61b978f37 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -115,6 +115,10 @@ typedef struct Xive2EndSource {
* XIVE2 Thread Interrupt Management Area (POWER10)
*/
+void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
uint64_t value, unsigned size);
uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 9345cddead..74a78da88b 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -603,7 +603,7 @@ static const XiveTmOp xive2_tm_operations[] = {
* MMIOs below 2K : raw values and special operations without side
* effects
*/
- { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
+ { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
NULL },
{ XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
NULL },
@@ -611,7 +611,7 @@ static const XiveTmOp xive2_tm_operations[] = {
NULL },
{ XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
NULL },
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 7130892482..0c53f71879 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -18,6 +18,7 @@
#include "hw/ppc/xive.h"
#include "hw/ppc/xive2.h"
#include "hw/ppc/xive2_regs.h"
+#include "trace.h"
uint32_t xive2_router_get_config(Xive2Router *xrtr)
{
@@ -764,6 +765,172 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
}
+static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
+ uint32_t *nvp_blk, uint32_t *nvp_idx)
+{
+ uint32_t w2, cam;
+
+ w2 = xive_tctx_word2(&tctx->regs[ring]);
+ switch (ring) {
+ case TM_QW1_OS:
+ if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
+ return -1;
+ }
+ cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
+ break;
+ case TM_QW2_HV_POOL:
+ if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
+ return -1;
+ }
+ cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
+ break;
+ case TM_QW3_HV_PHYS:
+ if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
+ return -1;
+ }
+ cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
+ break;
+ default:
+ return -1;
+ }
+ *nvp_blk = xive2_nvp_blk(cam);
+ *nvp_idx = xive2_nvp_idx(cam);
+ return 0;
+}
+
+static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
+{
+ uint8_t *regs = &tctx->regs[ring];
+ Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
+ uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
+ uint8_t pipr_min, lsmfb_min, ring_min;
+ bool group_enabled;
+ uint32_t nvp_blk, nvp_idx;
+ Xive2Nvp nvp;
+ int rc;
+
+ trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
+ regs[TM_IPB], regs[TM_PIPR],
+ cppr, regs[TM_NSR]);
+
+ if (cppr > XIVE_PRIORITY_MAX) {
+ cppr = 0xff;
+ }
+
+ old_cppr = regs[TM_CPPR];
+ regs[TM_CPPR] = cppr;
+
+ /*
+ * Recompute the PIPR based on local pending interrupts. It will
+ * be adjusted below if needed in case of pending group interrupts.
+ */
+ pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
+ group_enabled = !!regs[TM_LGS];
+ lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
+ ring_min = ring;
+
+ /* PHYS updates also depend on POOL values */
+ if (ring == TM_QW3_HV_PHYS) {
+ uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
+
+ /* POOL values only matter if POOL ctx is valid */
+ if (pregs[TM_WORD2] & 0x80) {
+
+ uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
+ uint8_t pool_lsmfb = pregs[TM_LSMFB];
+
+ /*
+ * Determine highest priority interrupt and
+ * remember which ring has it.
+ */
+ if (pool_pipr < pipr_min) {
+ pipr_min = pool_pipr;
+ if (pool_pipr < lsmfb_min) {
+ ring_min = TM_QW2_HV_POOL;
+ }
+ }
+
+ /* Values needed for group priority calculation */
+ if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
+ group_enabled = true;
+ lsmfb_min = pool_lsmfb;
+ if (lsmfb_min < pipr_min) {
+ ring_min = TM_QW2_HV_POOL;
+ }
+ }
+ }
+ }
+ regs[TM_PIPR] = pipr_min;
+
+ rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
+ if (rc) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
+ return;
+ }
+
+ if (cppr < old_cppr) {
+ /*
+ * FIXME: check if there's a group interrupt being presented
+ * and if the new cppr prevents it. If so, then the group
+ * interrupt needs to be re-added to the backlog and
+ * re-triggered (see re-trigger END info in the NVGC
+ * structure)
+ */
+ }
+
+ if (group_enabled &&
+ lsmfb_min < cppr &&
+ lsmfb_min < regs[TM_PIPR]) {
+ /*
+ * Thread has seen a group interrupt with a higher priority
+ * than the new cppr or pending local interrupt. Check the
+ * backlog
+ */
+ if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
+ if (!first_group) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ backlog_prio = xive2_presenter_backlog_check(tctx->xptr,
+ nvp_blk, nvp_idx,
+ first_group, &group_level);
+ tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
+ if (backlog_prio != 0xFF) {
+ xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
+ backlog_prio, group_level);
+ regs[TM_PIPR] = backlog_prio;
+ }
+ }
+ /* CPPR has changed, check if we need to raise a pending exception */
+ xive_tctx_notify(tctx, ring_min, group_level);
+}
+
+void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_set_cppr(tctx, TM_QW3_HV_PHYS, value & 0xff);
+}
+
+void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
+}
+
static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
{
uint8_t *regs = &tctx->regs[ring];
@@ -934,7 +1101,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
{
- uint8_t *regs = &tctx->regs[ring];
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *alt_regs = &tctx->regs[alt_ring];
/*
* The xive2_presenter_tctx_match() above tells if there's a match
@@ -942,7 +1111,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
* priority to know if the thread can take the interrupt now or if
* it is precluded.
*/
- if (priority < regs[TM_CPPR]) {
+ if (priority < alt_regs[TM_CPPR]) {
return false;
}
return true;
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 07/14] ppc/xive2: Process group backlog when updating the CPPR
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (10 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 4:35 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 07/14] qtest/xive: Add group-interrupt test Michael Kowal
` (12 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When the hypervisor or OS pushes a new value to the CPPR, if the LSMFB
value is lower than the new CPPR value, there could be a pending group
interrupt in the backlog, so it needs to be scanned.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 4 +
hw/intc/xive.c | 4 +-
hw/intc/xive2.c | 173 ++++++++++++++++++++++++++++++++++++++++-
3 files changed, 177 insertions(+), 4 deletions(-)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index ebf301bb5b..fc7422fea7 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -113,6 +113,10 @@ typedef struct Xive2EndSource {
* XIVE2 Thread Interrupt Management Area (POWER10)
*/
+void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
uint64_t value, unsigned size);
uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 412bb94b91..308de5aefc 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -589,7 +589,7 @@ static const XiveTmOp xive2_tm_operations[] = {
* MMIOs below 2K : raw values and special operations without side
* effects
*/
- { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
+ { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
NULL },
{ XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
NULL },
@@ -597,7 +597,7 @@ static const XiveTmOp xive2_tm_operations[] = {
NULL },
{ XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
NULL },
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index bb18a56e8f..47f7a099de 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -17,6 +17,7 @@
#include "hw/ppc/xive.h"
#include "hw/ppc/xive2.h"
#include "hw/ppc/xive2_regs.h"
+#include "trace.h"
uint32_t xive2_router_get_config(Xive2Router *xrtr)
{
@@ -767,6 +768,172 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
}
+static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
+ uint32_t *nvp_blk, uint32_t *nvp_idx)
+{
+ uint32_t w2, cam;
+
+ w2 = xive_tctx_word2(&tctx->regs[ring]);
+ switch (ring) {
+ case TM_QW1_OS:
+ if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
+ return -1;
+ }
+ cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
+ break;
+ case TM_QW2_HV_POOL:
+ if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
+ return -1;
+ }
+ cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
+ break;
+ case TM_QW3_HV_PHYS:
+ if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
+ return -1;
+ }
+ cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
+ break;
+ default:
+ return -1;
+ }
+ *nvp_blk = xive2_nvp_blk(cam);
+ *nvp_idx = xive2_nvp_idx(cam);
+ return 0;
+}
+
+static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
+{
+ uint8_t *regs = &tctx->regs[ring];
+ Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
+ uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
+ uint8_t pipr_min, lsmfb_min, ring_min;
+ bool group_enabled;
+ uint32_t nvp_blk, nvp_idx;
+ Xive2Nvp nvp;
+ int rc;
+
+ trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
+ regs[TM_IPB], regs[TM_PIPR],
+ cppr, regs[TM_NSR]);
+
+ if (cppr > XIVE_PRIORITY_MAX) {
+ cppr = 0xff;
+ }
+
+ old_cppr = regs[TM_CPPR];
+ regs[TM_CPPR] = cppr;
+
+ /*
+ * Recompute the PIPR based on local pending interrupts. It will
+ * be adjusted below if needed in case of pending group interrupts.
+ */
+ pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
+ group_enabled = !!regs[TM_LGS];
+ lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
+ ring_min = ring;
+
+ /* PHYS updates also depend on POOL values */
+ if (ring == TM_QW3_HV_PHYS) {
+ uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
+
+ /* POOL values only matter if POOL ctx is valid */
+ if (pregs[TM_WORD2] & 0x80) {
+
+ uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
+ uint8_t pool_lsmfb = pregs[TM_LSMFB];
+
+ /*
+ * Determine highest priority interrupt and
+ * remember which ring has it.
+ */
+ if (pool_pipr < pipr_min) {
+ pipr_min = pool_pipr;
+ if (pool_pipr < lsmfb_min) {
+ ring_min = TM_QW2_HV_POOL;
+ }
+ }
+
+ /* Values needed for group priority calculation */
+ if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
+ group_enabled = true;
+ lsmfb_min = pool_lsmfb;
+ if (lsmfb_min < pipr_min) {
+ ring_min = TM_QW2_HV_POOL;
+ }
+ }
+ }
+ }
+ regs[TM_PIPR] = pipr_min;
+
+ rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
+ if (rc) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
+ return;
+ }
+
+ if (cppr < old_cppr) {
+ /*
+ * FIXME: check if there's a group interrupt being presented
+ * and if the new cppr prevents it. If so, then the group
+ * interrupt needs to be re-added to the backlog and
+ * re-triggered (see re-trigger END info in the NVGC
+ * structure)
+ */
+ }
+
+ if (group_enabled &&
+ lsmfb_min < cppr &&
+ lsmfb_min < regs[TM_PIPR]) {
+ /*
+ * Thread has seen a group interrupt with a higher priority
+ * than the new cppr or pending local interrupt. Check the
+ * backlog
+ */
+ if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
+ if (!first_group) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ backlog_prio = xive2_presenter_backlog_scan(tctx->xptr,
+ nvp_blk, nvp_idx,
+ first_group, &group_level);
+ tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
+ if (backlog_prio != 0xFF) {
+ xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
+ backlog_prio, group_level);
+ regs[TM_PIPR] = backlog_prio;
+ }
+ }
+ /* CPPR has changed, check if we need to raise a pending exception */
+ xive_tctx_notify(tctx, ring_min, group_level);
+}
+
+void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_set_cppr(tctx, TM_QW3_HV_PHYS, value & 0xff);
+}
+
+void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
+}
+
static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
{
uint8_t *regs = &tctx->regs[ring];
@@ -937,7 +1104,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
{
- uint8_t *regs = &tctx->regs[ring];
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *alt_regs = &tctx->regs[alt_ring];
/*
* The xive2_presenter_tctx_match() above tells if there's a match
@@ -945,7 +1114,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
* priority to know if the thread can take the interrupt now or if
* it is precluded.
*/
- if (priority < regs[TM_CPPR]) {
+ if (priority < alt_regs[TM_CPPR]) {
return false;
}
return true;
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 07/14] qtest/xive: Add group-interrupt test
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (11 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 07/14] " Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 08/14] Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
` (11 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
Add XIVE2 tests for group interrupts, including group interrupts that
go through the backlog.
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
tests/qtest/pnv-xive2-test.c | 160 +++++++++++++++++++++++++++++++++++
1 file changed, 160 insertions(+)
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index dd19e88861..a4d06550ee 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -2,6 +2,8 @@
* QTest testcase for PowerNV 10 interrupt controller (xive2)
* - Test irq to hardware thread
* - Test 'Pull Thread Context to Odd Thread Reporting Line'
+ * - Test irq to hardware group
+ * - Test irq to hardware group going through backlog
*
* Copyright (c) 2024, IBM Corporation.
*
@@ -315,6 +317,158 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
word2 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD2);
g_assert_cmphex(xive_get_field32(TM_QW3W2_VT, word2), ==, 0);
}
+
+static void test_hw_group_irq(QTestState *qts)
+{
+ uint32_t irq = 100;
+ uint32_t irq_data = 0xdeadbeef;
+ uint32_t end_index = 23;
+ uint32_t chosen_one;
+ uint32_t target_nvp = 0x81; /* group size = 4 */
+ uint8_t priority = 6;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to hardware group of size 4\n", irq);
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, true /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* find the targeted vCPU */
+ for (chosen_one = 0; chosen_one < SMT; chosen_one++) {
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ if (nsr == 0x82) {
+ break;
+ }
+ }
+ g_assert_cmphex(chosen_one, <, SMT);
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, 0xFF);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+}
+
+static void test_hw_group_irq_backlog(QTestState *qts)
+{
+ uint32_t irq = 31;
+ uint32_t irq_data = 0x01234567;
+ uint32_t end_index = 129;
+ uint32_t target_nvp = 0x81; /* group size = 4 */
+ uint32_t chosen_one = 3;
+ uint8_t blocking_priority, priority = 3;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr, lsmfb, i;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to hardware group of size 4 going through " \
+ "backlog\n",
+ irq);
+
+ /*
+ * set current priority of all threads in the group to something
+ * higher than what we're about to trigger
+ */
+ blocking_priority = priority - 1;
+ for (i = 0; i < SMT; i++) {
+ set_tima8(qts, i, TM_QW3_HV_PHYS + TM_CPPR, blocking_priority);
+ }
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, true /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* check no interrupt is pending on the 2 possible targets */
+ for (i = 0; i < SMT; i++) {
+ reg32 = get_tima32(qts, i, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ lsmfb = reg32 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x0);
+ g_assert_cmphex(cppr, ==, blocking_priority);
+ g_assert_cmphex(lsmfb, ==, priority);
+ }
+
+ /* lower priority of one thread */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, priority + 1);
+
+ /* check backlogged interrupt is presented */
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority + 1);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ lsmfb = reg32 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+ g_assert_cmphex(lsmfb, ==, 0xFF);
+}
+
static void test_xive(void)
{
QTestState *qts;
@@ -330,6 +484,12 @@ static void test_xive(void)
/* omit reset_state here and use settings from test_hw_irq */
test_pull_thread_ctx_to_odd_thread_cl(qts);
+ reset_state(qts);
+ test_hw_group_irq(qts);
+
+ reset_state(qts);
+ test_hw_group_irq_backlog(qts);
+
reset_state(qts);
test_flush_sync_inject(qts);
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 08/14] Add support for MMIO operations on the NVPG/NVC BAR
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (12 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 07/14] qtest/xive: Add group-interrupt test Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 08/14] qtest/xive: Add group-interrupt test Michael Kowal
` (10 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
Add support for the NVPG and NVC BARs. Access to the BAR pages will
cause backlog counter operations to either increment or decrement
the counter.
Also added qtests for the same.
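For illustration only, here is a minimal sketch of how such a backlog operation
could be encoded in the MMIO offset, assuming the same OP/PRIO bit positions as
the qtest helper added by this patch (operation in bits 10-11, priority in bits
4-6); this is not code from the patch itself:
#include <stdint.h>
/*
 * Sketch (assumption): offset layout matching NVPG_BACKLOG_OP_SHIFT (10)
 * and NVPG_BACKLOG_PRIO_SHIFT (4) used by the qtest.
 * op: 0b00 => increment, 0b01 => decrement, 0b1x => read
 */
static uint64_t nvx_backlog_offset(uint8_t op, uint8_t priority)
{
    return ((uint64_t)(op & 0x3) << 10) | ((uint64_t)(priority & 0x7) << 4);
}
/*
 * Usage: a 2-byte load at page_base + nvx_backlog_offset(0b10, 5) reads the
 * priority-5 backlog counter; a 1-byte store at an increment/decrement offset
 * adjusts it by the stored value.
 */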
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 9 ++
include/hw/ppc/xive2_regs.h | 3 +
tests/qtest/pnv-xive2-common.h | 1 +
hw/intc/pnv_xive2.c | 80 +++++++++++++---
hw/intc/xive2.c | 87 +++++++++++++++++
tests/qtest/pnv-xive2-nvpg_bar.c | 154 +++++++++++++++++++++++++++++++
tests/qtest/pnv-xive2-test.c | 3 +
hw/intc/trace-events | 4 +
tests/qtest/meson.build | 3 +-
9 files changed, 329 insertions(+), 15 deletions(-)
create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index e61b978f37..049028d2c2 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -92,6 +92,15 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint32_t logic_serv);
+uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset);
+
+uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
+ bool crowd,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset, uint16_t val);
+
/*
* XIVE2 END ESBs (POWER10)
*/
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 30868e8e09..66a419441c 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -234,4 +234,7 @@ typedef struct Xive2Nvgc {
void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
GString *buf);
+#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
+#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
+
#endif /* PPC_XIVE2_REGS_H */
diff --git a/tests/qtest/pnv-xive2-common.h b/tests/qtest/pnv-xive2-common.h
index 9ae34771aa..2077c05ebc 100644
--- a/tests/qtest/pnv-xive2-common.h
+++ b/tests/qtest/pnv-xive2-common.h
@@ -107,5 +107,6 @@ extern void set_end(QTestState *qts, uint32_t index, uint32_t nvp_index,
void test_flush_sync_inject(QTestState *qts);
+void test_nvpg_bar(QTestState *qts);
#endif /* TEST_PNV_XIVE2_COMMON_H */
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 0482193fd7..9736b623ba 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -2203,21 +2203,40 @@ static const MemoryRegionOps pnv_xive2_tm_ops = {
},
};
-static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr offset,
+static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr addr,
unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+    uint32_t page = addr >> xive->nvc_shift;
+ uint16_t op = addr & 0xFFF;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVC: invalid read @%"HWADDR_PRIx, offset);
- return -1;
+ if (size != 2) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc load size %d\n",
+ size);
+ return -1;
+ }
+
+ return xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, 1);
}
-static void pnv_xive2_nvc_write(void *opaque, hwaddr offset,
+static void pnv_xive2_nvc_write(void *opaque, hwaddr addr,
uint64_t val, unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvc_shift;
+ uint16_t op = addr & 0xFFF;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVC: invalid write @%"HWADDR_PRIx, offset);
+ if (size != 1) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc write size %d\n",
+ size);
+ return;
+ }
+
+ (void)xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, val);
}
static const MemoryRegionOps pnv_xive2_nvc_ops = {
@@ -2225,30 +2244,63 @@ static const MemoryRegionOps pnv_xive2_nvc_ops = {
.write = pnv_xive2_nvc_write,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
.impl = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
};
-static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr offset,
+static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr addr,
unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint32_t index = page >> 1;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVPG: invalid read @%"HWADDR_PRIx, offset);
- return -1;
+ if (size != 2) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg load size %d\n",
+ size);
+ return -1;
+ }
+
+ if (page % 2) {
+ /* odd page - NVG */
+ return xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, 1);
+ } else {
+ /* even page - NVP */
+ return xive2_presenter_nvp_backlog_op(xptr, blk, index, op);
+ }
}
-static void pnv_xive2_nvpg_write(void *opaque, hwaddr offset,
+static void pnv_xive2_nvpg_write(void *opaque, hwaddr addr,
uint64_t val, unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint32_t index = page >> 1;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVPG: invalid write @%"HWADDR_PRIx, offset);
+ if (size != 1) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg write size %d\n",
+ size);
+ return;
+ }
+
+ if (page % 2) {
+ /* odd page - NVG */
+ (void)xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, val);
+ } else {
+ /* even page - NVP */
+ (void)xive2_presenter_nvp_backlog_op(xptr, blk, index, op);
+ }
}
static const MemoryRegionOps pnv_xive2_nvpg_ops = {
@@ -2256,11 +2308,11 @@ static const MemoryRegionOps pnv_xive2_nvpg_ops = {
.write = pnv_xive2_nvpg_write,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
.impl = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
};
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 0c53f71879..b6f279e6a3 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -88,6 +88,93 @@ static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
}
}
+uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
+ bool crowd,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset, uint16_t val)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
+ uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
+ Xive2Nvgc nvgc;
+ uint32_t count, old_count;
+
+ if (xive2_router_get_nvgc(xrtr, crowd, blk, idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No %s %x/%x\n",
+ crowd ? "NVC" : "NVG", blk, idx);
+ return -1;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", blk, idx);
+ return -1;
+ }
+
+ old_count = xive2_nvgc_get_backlog(&nvgc, priority);
+ count = old_count;
+ /*
+ * op:
+ * 0b00 => increment
+ * 0b01 => decrement
+ * 0b1- => read
+ */
+ if (op == 0b00 || op == 0b01) {
+ if (op == 0b00) {
+ count += val;
+ } else {
+ if (count > val) {
+ count -= val;
+ } else {
+ count = 0;
+ }
+ }
+ xive2_nvgc_set_backlog(&nvgc, priority, count);
+ xive2_router_write_nvgc(xrtr, crowd, blk, idx, &nvgc);
+ }
+ trace_xive_nvgc_backlog_op(crowd, blk, idx, op, priority, old_count);
+ return old_count;
+}
+
+uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
+ uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
+ Xive2Nvp nvp;
+ uint8_t ipb, old_ipb, rc;
+
+ if (xive2_router_get_nvp(xrtr, blk, idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n", blk, idx);
+ return -1;
+ }
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVP %x/%x\n", blk, idx);
+ return -1;
+ }
+
+ old_ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
+ ipb = old_ipb;
+ /*
+ * op:
+ * 0b00 => set priority bit
+ * 0b01 => reset priority bit
+ * 0b1- => read
+ */
+ if (op == 0b00 || op == 0b01) {
+ if (op == 0b00) {
+ ipb |= xive_priority_to_ipb(priority);
+ } else {
+ ipb &= ~xive_priority_to_ipb(priority);
+ }
+ nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+ xive2_router_write_nvp(xrtr, blk, idx, &nvp, 2);
+ }
+ rc = !!(old_ipb & xive_priority_to_ipb(priority));
+ trace_xive_nvp_backlog_op(blk, idx, op, priority, rc);
+ return rc;
+}
+
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
{
if (!xive2_eas_is_valid(eas)) {
diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c
new file mode 100644
index 0000000000..10d4962d1e
--- /dev/null
+++ b/tests/qtest/pnv-xive2-nvpg_bar.c
@@ -0,0 +1,154 @@
+/*
+ * QTest testcase for PowerNV 10 interrupt controller (xive2)
+ * - Test NVPG BAR MMIO operations
+ *
+ * Copyright (c) 2024, IBM Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "libqtest.h"
+
+#include "pnv-xive2-common.h"
+
+#define NVPG_BACKLOG_OP_SHIFT 10
+#define NVPG_BACKLOG_PRIO_SHIFT 4
+
+#define XIVE_PRIORITY_MAX 7
+
+enum NVx {
+ NVP,
+ NVG,
+ NVC
+};
+
+typedef enum {
+ INCR_STORE = 0b100,
+ INCR_LOAD = 0b000,
+ DECR_STORE = 0b101,
+ DECR_LOAD = 0b001,
+ READ_x = 0b010,
+ READ_y = 0b011,
+} backlog_op;
+
+static uint32_t nvpg_backlog_op(QTestState *qts, backlog_op op,
+ enum NVx type, uint64_t index,
+ uint8_t priority, uint8_t delta)
+{
+ uint64_t addr, offset;
+ uint32_t count = 0;
+
+ switch (type) {
+ case NVP:
+ addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1));
+ break;
+ case NVG:
+ addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1)) +
+ (1 << XIVE_PAGE_SHIFT);
+ break;
+ case NVC:
+ addr = XIVE_NVC_ADDR + (index << XIVE_PAGE_SHIFT);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ offset = (op & 0b11) << NVPG_BACKLOG_OP_SHIFT;
+ offset |= priority << NVPG_BACKLOG_PRIO_SHIFT;
+ if (op >> 2) {
+ qtest_writeb(qts, addr + offset, delta);
+ } else {
+ count = qtest_readw(qts, addr + offset);
+ }
+ return count;
+}
+
+void test_nvpg_bar(QTestState *qts)
+{
+ uint32_t nvp_target = 0x11;
+ uint32_t group_target = 0x17; /* size 16 */
+ uint32_t vp_irq = 33, group_irq = 47;
+ uint32_t vp_end = 3, group_end = 97;
+ uint32_t vp_irq_data = 0x33333333;
+ uint32_t group_irq_data = 0x66666666;
+ uint8_t vp_priority = 0, group_priority = 5;
+ uint32_t vp_count[XIVE_PRIORITY_MAX + 1] = { 0 };
+ uint32_t group_count[XIVE_PRIORITY_MAX + 1] = { 0 };
+ uint32_t count, delta;
+ uint8_t i;
+
+ printf("# ============================================================\n");
+ printf("# Testing NVPG BAR operations\n");
+
+ set_nvg(qts, group_target, 0);
+ set_nvp(qts, nvp_target, 0x04);
+ set_nvp(qts, group_target, 0x04);
+
+ /*
+ * Setup: trigger a VP-specific interrupt and a group interrupt
+     * so that the backlog counters are initialized to something other
+ * than 0 for at least one priority level
+ */
+ set_eas(qts, vp_irq, vp_end, vp_irq_data);
+ set_end(qts, vp_end, nvp_target, vp_priority, false /* group */);
+
+ set_eas(qts, group_irq, group_end, group_irq_data);
+ set_end(qts, group_end, group_target, group_priority, true /* group */);
+
+ get_esb(qts, vp_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, vp_irq, XIVE_TRIGGER_PAGE, 0, 0);
+ vp_count[vp_priority]++;
+
+ get_esb(qts, group_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, group_irq, XIVE_TRIGGER_PAGE, 0, 0);
+ group_count[group_priority]++;
+
+ /* check the initial counters */
+ for (i = 0; i <= XIVE_PRIORITY_MAX; i++) {
+ count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, i, 0);
+ g_assert_cmpuint(count, ==, vp_count[i]);
+
+ count = nvpg_backlog_op(qts, READ_y, NVG, group_target, i, 0);
+ g_assert_cmpuint(count, ==, group_count[i]);
+ }
+
+    /* do a few ops on the VP. Counter can only be 0 or 1 */
+ vp_priority = 2;
+ delta = 7;
+ nvpg_backlog_op(qts, INCR_STORE, NVP, nvp_target, vp_priority, delta);
+ vp_count[vp_priority] = 1;
+ count = nvpg_backlog_op(qts, INCR_LOAD, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+ count = nvpg_backlog_op(qts, READ_y, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+
+ count = nvpg_backlog_op(qts, DECR_LOAD, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+ vp_count[vp_priority] = 0;
+ nvpg_backlog_op(qts, DECR_STORE, NVP, nvp_target, vp_priority, delta);
+ count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+
+ /* do a few ops on the group */
+ group_priority = 2;
+ delta = 9;
+ /* can't go negative */
+ nvpg_backlog_op(qts, DECR_STORE, NVG, group_target, group_priority, delta);
+ count = nvpg_backlog_op(qts, READ_y, NVG, group_target, group_priority, 0);
+ g_assert_cmpuint(count, ==, 0);
+ nvpg_backlog_op(qts, INCR_STORE, NVG, group_target, group_priority, delta);
+ group_count[group_priority] += delta;
+ count = nvpg_backlog_op(qts, INCR_LOAD, NVG, group_target,
+ group_priority, delta);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+ group_count[group_priority]++;
+
+ count = nvpg_backlog_op(qts, DECR_LOAD, NVG, group_target,
+ group_priority, delta);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+ group_count[group_priority]--;
+ count = nvpg_backlog_op(qts, READ_x, NVG, group_target, group_priority, 0);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+}
+
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index a4d06550ee..a0e9f19313 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -493,6 +493,9 @@ static void test_xive(void)
reset_state(qts);
test_flush_sync_inject(qts);
+ reset_state(qts);
+ test_nvpg_bar(qts);
+
qtest_quit(qts);
}
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 7435728c51..7f362c38b0 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -285,6 +285,10 @@ xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t v
xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
+# xive2.c
+xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
+xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
+
# pnv_xive.c
pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 0dbdb59a55..352760545e 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -346,7 +346,8 @@ qtests = {
'ivshmem-test': [rt, '../../contrib/ivshmem-server/ivshmem-server.c'],
'migration-test': migration_files,
'pxe-test': files('boot-sector.c'),
- 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c'),
+ 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c',
+ 'pnv-xive2-nvpg_bar.c'),
'qos-test': [chardev, io, qos_test_ss.apply({}).sources()],
'tpm-crb-swtpm-test': [io, tpmemu_files],
'tpm-crb-test': [io, tpmemu_files],
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 08/14] qtest/xive: Add group-interrupt test
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (13 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 08/14] Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 4:46 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
` (9 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
Add XIVE2 tests for group interrupts, covering both direct delivery and
delivery through the backlog.
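As a note on the values used in the tests, the group size is encoded by the
number of trailing 1s in the NVP index (target_nvp = 0x81 gives a group of 4).
A minimal standalone sketch, mirroring xive_get_vpgroup_size() in
hw/intc/xive.c, for illustration only:
#include <stdint.h>
#include <stdio.h>
/* Group size is 1 << (trailing 1s of the NVP index + 1). */
static uint32_t vpgroup_size(uint32_t nvp_index)
{
    return 1u << (__builtin_ctz(~nvp_index) + 1);
}
int main(void)
{
    printf("size(0x81) = %u\n", vpgroup_size(0x81)); /* prints 4 */
    return 0;
}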
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
tests/qtest/pnv-xive2-test.c | 160 +++++++++++++++++++++++++++++++++++
1 file changed, 160 insertions(+)
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index dd19e88861..a4d06550ee 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -2,6 +2,8 @@
* QTest testcase for PowerNV 10 interrupt controller (xive2)
* - Test irq to hardware thread
* - Test 'Pull Thread Context to Odd Thread Reporting Line'
+ * - Test irq to hardware group
+ * - Test irq to hardware group going through backlog
*
* Copyright (c) 2024, IBM Corporation.
*
@@ -315,6 +317,158 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
word2 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD2);
g_assert_cmphex(xive_get_field32(TM_QW3W2_VT, word2), ==, 0);
}
+
+static void test_hw_group_irq(QTestState *qts)
+{
+ uint32_t irq = 100;
+ uint32_t irq_data = 0xdeadbeef;
+ uint32_t end_index = 23;
+ uint32_t chosen_one;
+ uint32_t target_nvp = 0x81; /* group size = 4 */
+ uint8_t priority = 6;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to hardware group of size 4\n", irq);
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, true /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* find the targeted vCPU */
+ for (chosen_one = 0; chosen_one < SMT; chosen_one++) {
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ if (nsr == 0x82) {
+ break;
+ }
+ }
+ g_assert_cmphex(chosen_one, <, SMT);
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, 0xFF);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+}
+
+static void test_hw_group_irq_backlog(QTestState *qts)
+{
+ uint32_t irq = 31;
+ uint32_t irq_data = 0x01234567;
+ uint32_t end_index = 129;
+ uint32_t target_nvp = 0x81; /* group size = 4 */
+ uint32_t chosen_one = 3;
+ uint8_t blocking_priority, priority = 3;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr, lsmfb, i;
+
+ printf("# ============================================================\n");
+ printf("# Testing irq %d to hardware group of size 4 going through " \
+ "backlog\n",
+ irq);
+
+ /*
+ * set current priority of all threads in the group to something
+ * higher than what we're about to trigger
+ */
+ blocking_priority = priority - 1;
+ for (i = 0; i < SMT; i++) {
+ set_tima8(qts, i, TM_QW3_HV_PHYS + TM_CPPR, blocking_priority);
+ }
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, true /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* check no interrupt is pending on the 2 possible targets */
+ for (i = 0; i < SMT; i++) {
+ reg32 = get_tima32(qts, i, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ lsmfb = reg32 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x0);
+ g_assert_cmphex(cppr, ==, blocking_priority);
+ g_assert_cmphex(lsmfb, ==, priority);
+ }
+
+ /* lower priority of one thread */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, priority + 1);
+
+ /* check backlogged interrupt is presented */
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority + 1);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x82);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ lsmfb = reg32 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+ g_assert_cmphex(lsmfb, ==, 0xFF);
+}
+
static void test_xive(void)
{
QTestState *qts;
@@ -330,6 +484,12 @@ static void test_xive(void)
/* omit reset_state here and use settings from test_hw_irq */
test_pull_thread_ctx_to_odd_thread_cl(qts);
+ reset_state(qts);
+ test_hw_group_irq(qts);
+
+ reset_state(qts);
+ test_hw_group_irq_backlog(qts);
+
reset_state(qts);
test_flush_sync_inject(qts);
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 09/14] ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (14 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 08/14] qtest/xive: Add group-interrupt test Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 5:10 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
` (8 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
Add support for the NVPG and NVC BARs. Access to the BAR pages will
cause backlog counter operations to either increment or decrement
the counter.
Also added qtests for the same.
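For the NVP side, each priority maps to one bit of the IPB and the set/reset
operations toggle that bit, as xive2_presenter_nvp_backlog_op() does below.
A minimal standalone sketch, assuming the usual priority-to-IPB mapping where
priority 0 is the most significant bit; illustrative only:
#include <stdint.h>
#define PRIO_MAX 7
/* Assumption: priority p maps to IPB bit (PRIO_MAX - p); priority 0 is MSB. */
static uint8_t prio_to_ipb(uint8_t priority)
{
    return priority > PRIO_MAX ? 0 : (uint8_t)(1u << (PRIO_MAX - priority));
}
/* op 0b00 sets the priority bit, 0b01 clears it, 0b1x only reads. */
static uint8_t nvp_backlog_apply(uint8_t ipb, uint8_t op, uint8_t priority)
{
    if (op == 0b00) {
        ipb |= prio_to_ipb(priority);
    } else if (op == 0b01) {
        ipb &= ~prio_to_ipb(priority);
    }
    return ipb;
}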
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 9 ++
include/hw/ppc/xive2_regs.h | 3 +
tests/qtest/pnv-xive2-common.h | 1 +
hw/intc/pnv_xive2.c | 80 +++++++++++++---
hw/intc/xive2.c | 87 +++++++++++++++++
tests/qtest/pnv-xive2-nvpg_bar.c | 154 +++++++++++++++++++++++++++++++
tests/qtest/pnv-xive2-test.c | 3 +
hw/intc/trace-events | 4 +
tests/qtest/meson.build | 3 +-
9 files changed, 329 insertions(+), 15 deletions(-)
create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index fc7422fea7..c07e23e1d3 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -90,6 +90,15 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint32_t logic_serv);
+uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset);
+
+uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
+ bool crowd,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset, uint16_t val);
+
/*
* XIVE2 END ESBs (POWER10)
*/
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index e88d6eab1e..9bcf7a8a6f 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -233,4 +233,7 @@ typedef struct Xive2Nvgc {
void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
GString *buf);
+#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
+#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
+
#endif /* PPC_XIVE2_REGS_H */
diff --git a/tests/qtest/pnv-xive2-common.h b/tests/qtest/pnv-xive2-common.h
index 9ae34771aa..2077c05ebc 100644
--- a/tests/qtest/pnv-xive2-common.h
+++ b/tests/qtest/pnv-xive2-common.h
@@ -107,5 +107,6 @@ extern void set_end(QTestState *qts, uint32_t index, uint32_t nvp_index,
void test_flush_sync_inject(QTestState *qts);
+void test_nvpg_bar(QTestState *qts);
#endif /* TEST_PNV_XIVE2_COMMON_H */
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 41b727d1fb..54abfe3947 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -2202,21 +2202,40 @@ static const MemoryRegionOps pnv_xive2_tm_ops = {
},
};
-static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr offset,
+static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr addr,
unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+    uint32_t page = addr >> xive->nvc_shift;
+ uint16_t op = addr & 0xFFF;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVC: invalid read @%"HWADDR_PRIx, offset);
- return -1;
+ if (size != 2) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc load size %d\n",
+ size);
+ return -1;
+ }
+
+ return xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, 1);
}
-static void pnv_xive2_nvc_write(void *opaque, hwaddr offset,
+static void pnv_xive2_nvc_write(void *opaque, hwaddr addr,
uint64_t val, unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvc_shift;
+ uint16_t op = addr & 0xFFF;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVC: invalid write @%"HWADDR_PRIx, offset);
+ if (size != 1) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc write size %d\n",
+ size);
+ return;
+ }
+
+ (void)xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, val);
}
static const MemoryRegionOps pnv_xive2_nvc_ops = {
@@ -2224,30 +2243,63 @@ static const MemoryRegionOps pnv_xive2_nvc_ops = {
.write = pnv_xive2_nvc_write,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
.impl = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
};
-static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr offset,
+static uint64_t pnv_xive2_nvpg_read(void *opaque, hwaddr addr,
unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint32_t index = page >> 1;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVPG: invalid read @%"HWADDR_PRIx, offset);
- return -1;
+ if (size != 2) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg load size %d\n",
+ size);
+ return -1;
+ }
+
+ if (page % 2) {
+ /* odd page - NVG */
+ return xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, 1);
+ } else {
+ /* even page - NVP */
+ return xive2_presenter_nvp_backlog_op(xptr, blk, index, op);
+ }
}
-static void pnv_xive2_nvpg_write(void *opaque, hwaddr offset,
+static void pnv_xive2_nvpg_write(void *opaque, hwaddr addr,
uint64_t val, unsigned size)
{
PnvXive2 *xive = PNV_XIVE2(opaque);
+ XivePresenter *xptr = XIVE_PRESENTER(xive);
+ uint32_t page = addr >> xive->nvpg_shift;
+ uint16_t op = addr & 0xFFF;
+ uint32_t index = page >> 1;
+ uint8_t blk = pnv_xive2_block_id(xive);
- xive2_error(xive, "NVPG: invalid write @%"HWADDR_PRIx, offset);
+ if (size != 1) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvpg write size %d\n",
+ size);
+ return;
+ }
+
+ if (page % 2) {
+ /* odd page - NVG */
+ (void)xive2_presenter_nvgc_backlog_op(xptr, false, blk, index, op, val);
+ } else {
+ /* even page - NVP */
+ (void)xive2_presenter_nvp_backlog_op(xptr, blk, index, op);
+ }
}
static const MemoryRegionOps pnv_xive2_nvpg_ops = {
@@ -2255,11 +2307,11 @@ static const MemoryRegionOps pnv_xive2_nvpg_ops = {
.write = pnv_xive2_nvpg_write,
.endianness = DEVICE_BIG_ENDIAN,
.valid = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
.impl = {
- .min_access_size = 8,
+ .min_access_size = 1,
.max_access_size = 8,
},
};
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 47f7a099de..f4621bdd02 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -87,6 +87,93 @@ static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
}
}
+uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
+ bool crowd,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset, uint16_t val)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
+ uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
+ Xive2Nvgc nvgc;
+ uint32_t count, old_count;
+
+ if (xive2_router_get_nvgc(xrtr, crowd, blk, idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No %s %x/%x\n",
+ crowd ? "NVC" : "NVG", blk, idx);
+ return -1;
+ }
+ if (!xive2_nvgc_is_valid(&nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n", blk, idx);
+ return -1;
+ }
+
+ old_count = xive2_nvgc_get_backlog(&nvgc, priority);
+ count = old_count;
+ /*
+ * op:
+ * 0b00 => increment
+ * 0b01 => decrement
+ * 0b1- => read
+ */
+ if (op == 0b00 || op == 0b01) {
+ if (op == 0b00) {
+ count += val;
+ } else {
+ if (count > val) {
+ count -= val;
+ } else {
+ count = 0;
+ }
+ }
+ xive2_nvgc_set_backlog(&nvgc, priority, count);
+ xive2_router_write_nvgc(xrtr, crowd, blk, idx, &nvgc);
+ }
+ trace_xive_nvgc_backlog_op(crowd, blk, idx, op, priority, old_count);
+ return old_count;
+}
+
+uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
+ uint8_t blk, uint32_t idx,
+ uint16_t offset)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
+ uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
+ Xive2Nvp nvp;
+ uint8_t ipb, old_ipb, rc;
+
+ if (xive2_router_get_nvp(xrtr, blk, idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n", blk, idx);
+ return -1;
+ }
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVP %x/%x\n", blk, idx);
+ return -1;
+ }
+
+ old_ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
+ ipb = old_ipb;
+ /*
+ * op:
+ * 0b00 => set priority bit
+ * 0b01 => reset priority bit
+ * 0b1- => read
+ */
+ if (op == 0b00 || op == 0b01) {
+ if (op == 0b00) {
+ ipb |= xive_priority_to_ipb(priority);
+ } else {
+ ipb &= ~xive_priority_to_ipb(priority);
+ }
+ nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
+ xive2_router_write_nvp(xrtr, blk, idx, &nvp, 2);
+ }
+ rc = !!(old_ipb & xive_priority_to_ipb(priority));
+ trace_xive_nvp_backlog_op(blk, idx, op, priority, rc);
+ return rc;
+}
+
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
{
if (!xive2_eas_is_valid(eas)) {
diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c
new file mode 100644
index 0000000000..10d4962d1e
--- /dev/null
+++ b/tests/qtest/pnv-xive2-nvpg_bar.c
@@ -0,0 +1,154 @@
+/*
+ * QTest testcase for PowerNV 10 interrupt controller (xive2)
+ * - Test NVPG BAR MMIO operations
+ *
+ * Copyright (c) 2024, IBM Corporation.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * later. See the COPYING file in the top-level directory.
+ */
+#include "qemu/osdep.h"
+#include "libqtest.h"
+
+#include "pnv-xive2-common.h"
+
+#define NVPG_BACKLOG_OP_SHIFT 10
+#define NVPG_BACKLOG_PRIO_SHIFT 4
+
+#define XIVE_PRIORITY_MAX 7
+
+enum NVx {
+ NVP,
+ NVG,
+ NVC
+};
+
+typedef enum {
+ INCR_STORE = 0b100,
+ INCR_LOAD = 0b000,
+ DECR_STORE = 0b101,
+ DECR_LOAD = 0b001,
+ READ_x = 0b010,
+ READ_y = 0b011,
+} backlog_op;
+
+static uint32_t nvpg_backlog_op(QTestState *qts, backlog_op op,
+ enum NVx type, uint64_t index,
+ uint8_t priority, uint8_t delta)
+{
+ uint64_t addr, offset;
+ uint32_t count = 0;
+
+ switch (type) {
+ case NVP:
+ addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1));
+ break;
+ case NVG:
+ addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1)) +
+ (1 << XIVE_PAGE_SHIFT);
+ break;
+ case NVC:
+ addr = XIVE_NVC_ADDR + (index << XIVE_PAGE_SHIFT);
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ offset = (op & 0b11) << NVPG_BACKLOG_OP_SHIFT;
+ offset |= priority << NVPG_BACKLOG_PRIO_SHIFT;
+ if (op >> 2) {
+ qtest_writeb(qts, addr + offset, delta);
+ } else {
+ count = qtest_readw(qts, addr + offset);
+ }
+ return count;
+}
+
+void test_nvpg_bar(QTestState *qts)
+{
+ uint32_t nvp_target = 0x11;
+ uint32_t group_target = 0x17; /* size 16 */
+ uint32_t vp_irq = 33, group_irq = 47;
+ uint32_t vp_end = 3, group_end = 97;
+ uint32_t vp_irq_data = 0x33333333;
+ uint32_t group_irq_data = 0x66666666;
+ uint8_t vp_priority = 0, group_priority = 5;
+ uint32_t vp_count[XIVE_PRIORITY_MAX + 1] = { 0 };
+ uint32_t group_count[XIVE_PRIORITY_MAX + 1] = { 0 };
+ uint32_t count, delta;
+ uint8_t i;
+
+ printf("# ============================================================\n");
+ printf("# Testing NVPG BAR operations\n");
+
+ set_nvg(qts, group_target, 0);
+ set_nvp(qts, nvp_target, 0x04);
+ set_nvp(qts, group_target, 0x04);
+
+ /*
+ * Setup: trigger a VP-specific interrupt and a group interrupt
+     * so that the backlog counters are initialized to something other
+ * than 0 for at least one priority level
+ */
+ set_eas(qts, vp_irq, vp_end, vp_irq_data);
+ set_end(qts, vp_end, nvp_target, vp_priority, false /* group */);
+
+ set_eas(qts, group_irq, group_end, group_irq_data);
+ set_end(qts, group_end, group_target, group_priority, true /* group */);
+
+ get_esb(qts, vp_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, vp_irq, XIVE_TRIGGER_PAGE, 0, 0);
+ vp_count[vp_priority]++;
+
+ get_esb(qts, group_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, group_irq, XIVE_TRIGGER_PAGE, 0, 0);
+ group_count[group_priority]++;
+
+ /* check the initial counters */
+ for (i = 0; i <= XIVE_PRIORITY_MAX; i++) {
+ count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, i, 0);
+ g_assert_cmpuint(count, ==, vp_count[i]);
+
+ count = nvpg_backlog_op(qts, READ_y, NVG, group_target, i, 0);
+ g_assert_cmpuint(count, ==, group_count[i]);
+ }
+
+    /* do a few ops on the VP. Counter can only be 0 or 1 */
+ vp_priority = 2;
+ delta = 7;
+ nvpg_backlog_op(qts, INCR_STORE, NVP, nvp_target, vp_priority, delta);
+ vp_count[vp_priority] = 1;
+ count = nvpg_backlog_op(qts, INCR_LOAD, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+ count = nvpg_backlog_op(qts, READ_y, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+
+ count = nvpg_backlog_op(qts, DECR_LOAD, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+ vp_count[vp_priority] = 0;
+ nvpg_backlog_op(qts, DECR_STORE, NVP, nvp_target, vp_priority, delta);
+ count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, vp_priority, 0);
+ g_assert_cmpuint(count, ==, vp_count[vp_priority]);
+
+ /* do a few ops on the group */
+ group_priority = 2;
+ delta = 9;
+ /* can't go negative */
+ nvpg_backlog_op(qts, DECR_STORE, NVG, group_target, group_priority, delta);
+ count = nvpg_backlog_op(qts, READ_y, NVG, group_target, group_priority, 0);
+ g_assert_cmpuint(count, ==, 0);
+ nvpg_backlog_op(qts, INCR_STORE, NVG, group_target, group_priority, delta);
+ group_count[group_priority] += delta;
+ count = nvpg_backlog_op(qts, INCR_LOAD, NVG, group_target,
+ group_priority, delta);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+ group_count[group_priority]++;
+
+ count = nvpg_backlog_op(qts, DECR_LOAD, NVG, group_target,
+ group_priority, delta);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+ group_count[group_priority]--;
+ count = nvpg_backlog_op(qts, READ_x, NVG, group_target, group_priority, 0);
+ g_assert_cmpuint(count, ==, group_count[group_priority]);
+}
+
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index a4d06550ee..a0e9f19313 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -493,6 +493,9 @@ static void test_xive(void)
reset_state(qts);
test_flush_sync_inject(qts);
+ reset_state(qts);
+ test_nvpg_bar(qts);
+
qtest_quit(qts);
}
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 7435728c51..7f362c38b0 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -285,6 +285,10 @@ xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t v
xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
+# xive2.c
+xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
+xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
+
# pnv_xive.c
pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index bd41c9da5f..f7da3df24b 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -348,7 +348,8 @@ qtests = {
'ivshmem-test': [rt, '../../contrib/ivshmem-server/ivshmem-server.c'],
'migration-test': migration_files,
'pxe-test': files('boot-sector.c'),
- 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c'),
+ 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c',
+ 'pnv-xive2-nvpg_bar.c'),
'qos-test': [chardev, io, qos_test_ss.apply({}).sources()],
'tpm-crb-swtpm-test': [io, tpmemu_files],
'tpm-crb-test': [io, tpmemu_files],
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 09/14] ppc/xive2: Support crowd-matching when looking for target
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (15 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
` (7 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
If an END is defined with the 'crowd' bit set, then a target can be
running on different blocks. It means that some bits of the VP's block
ID are masked when looking for a match. It is similar to groups, but
applied to the block instead of the VP index.
Most of the changes are due to passing the extra argument 'crowd' all
the way to the function checking for matches.
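For illustration, a standalone sketch of the block-mask computation introduced
by xive2_get_vp_block_mask() below (crowd sizes 2, 4 and 16; a size of 8 is
rejected and falls back to an exact block match); not code from the patch:
#include <stdint.h>
/* Crowd size is encoded by the trailing 1s of the NVT block number. */
static uint8_t crowd_block_mask(uint8_t nvt_blk)
{
    uint8_t size = 1u << (__builtin_ctz(~(uint32_t)nvt_blk) + 1);
    if (size == 8) {
        return 0b1111;  /* unsupported size: match the full 4-bit block */
    }
    return (uint8_t)~(size - 1) & 0b1111;
}
/*
 * Example: block 0b0001 (crowd of 4) gives mask 0b1100, so the two low
 * block bits are ignored when matching a target.
 */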
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 10 +++---
include/hw/ppc/xive2.h | 3 +-
hw/intc/pnv_xive.c | 5 +--
hw/intc/pnv_xive2.c | 12 +++----
hw/intc/spapr_xive.c | 3 +-
hw/intc/xive.c | 21 ++++++++----
hw/intc/xive2.c | 78 +++++++++++++++++++++++++++++++++---------
hw/ppc/pnv.c | 15 ++++----
hw/ppc/spapr.c | 4 +--
9 files changed, 105 insertions(+), 46 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index c15cd4358d..187a03d55c 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -440,13 +440,13 @@ struct XivePresenterClass {
InterfaceClass parent;
int (*match_nvt)(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
int (*broadcast)(XivePresenter *xptr,
uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority);
+ bool crowd, bool cam_ignore, uint8_t priority);
};
int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -455,7 +455,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool cam_ignore, uint32_t logic_serv);
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, bool *precluded);
uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
@@ -475,10 +475,10 @@ struct XiveFabricClass {
InterfaceClass parent;
int (*match_nvt)(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority);
+ bool crowd, bool cam_ignore, uint8_t priority);
};
/*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 049028d2c2..37aca4d26a 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -90,7 +90,8 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint32_t logic_serv);
+ bool crowd, bool cam_ignore,
+ uint32_t logic_serv);
uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
uint8_t blk, uint32_t idx,
diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
index 5bacbce6a4..346549f32e 100644
--- a/hw/intc/pnv_xive.c
+++ b/hw/intc/pnv_xive.c
@@ -473,7 +473,7 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive *xive = PNV_XIVE(xptr);
@@ -500,7 +500,8 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
* Check the thread context CAM lines and record matches.
*/
ring = xive_presenter_tctx_match(xptr, tctx, format, nvt_blk,
- nvt_idx, cam_ignore, logic_serv);
+ nvt_idx, cam_ignore,
+ logic_serv);
/*
* Save the context and follow on to catch duplicates, that we
* don't support yet.
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 9736b623ba..236f9d7eb7 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -625,7 +625,7 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
@@ -656,8 +656,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
logic_serv);
} else {
ring = xive2_presenter_tctx_match(xptr, tctx, format, nvt_blk,
- nvt_idx, cam_ignore,
- logic_serv);
+ nvt_idx, crowd, cam_ignore,
+ logic_serv);
}
if (ring != -1) {
@@ -708,7 +708,7 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
static int pnv_xive2_broadcast(XivePresenter *xptr,
uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority)
+ bool crowd, bool ignore, uint8_t priority)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
PnvChip *chip = xive->chip;
@@ -733,10 +733,10 @@ static int pnv_xive2_broadcast(XivePresenter *xptr,
if (gen1_tima_os) {
ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
- nvt_idx, true, 0);
+ nvt_idx, ignore, 0);
} else {
ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
- nvt_idx, true, 0);
+ nvt_idx, crowd, ignore, 0);
}
if (ring != -1) {
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
index 283a6b8fd2..41cfcab3b9 100644
--- a/hw/intc/spapr_xive.c
+++ b/hw/intc/spapr_xive.c
@@ -431,7 +431,8 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore,
+ uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
CPUState *cs;
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 74a78da88b..2a7ce72606 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1681,10 +1681,18 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
return 1 << (ctz32(~nvp_index) + 1);
}
-static uint8_t xive_get_group_level(uint32_t nvp_index)
+static uint8_t xive_get_group_level(bool crowd, bool ignore,
+ uint32_t nvp_blk, uint32_t nvp_index)
{
- /* FIXME add crowd encoding */
- return ctz32(~nvp_index) + 1;
+ uint8_t level = 0;
+
+ if (crowd) {
+ level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4;
+ }
+ if (ignore) {
+ level |= (ctz32(~nvp_index) + 1) & 0b1111;
+ }
+ return level;
}
/*
@@ -1756,7 +1764,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
*/
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, bool *precluded)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
@@ -1787,7 +1795,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
* a new command to the presenters (the equivalent of the "assign"
* power bus command in the documented full notify sequence.
*/
- count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
+ count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
priority, logic_serv, &match);
if (count < 0) {
return false;
@@ -1795,7 +1803,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
/* handle CPU exception delivery */
if (count) {
- group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+ group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
} else {
@@ -1920,6 +1928,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
}
found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
+ false /* crowd */,
xive_get_field32(END_W7_F0_IGNORE, end.w7),
priority,
xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index b6f279e6a3..1f2837104c 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1117,13 +1117,42 @@ static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
return (cam1 & vp_mask) == (cam2 & vp_mask);
}
+static uint8_t xive2_get_vp_block_mask(uint32_t nvt_blk, bool crowd)
+{
+ uint8_t size, block_mask = 0b1111;
+
+ /* 3 supported crowd sizes: 2, 4, 16 */
+ if (crowd) {
+ size = xive_get_vpgroup_size(nvt_blk);
+ if (size == 8) {
+            qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid crowd size of 8\n");
+ return block_mask;
+ }
+ block_mask = ~(size - 1);
+ block_mask &= 0b1111;
+ }
+ return block_mask;
+}
+
+static uint32_t xive2_get_vp_index_mask(uint32_t nvt_index, bool cam_ignore)
+{
+ uint32_t index_mask = 0xFFFFFF; /* 24 bits */
+
+ if (cam_ignore) {
+ index_mask = ~(xive_get_vpgroup_size(nvt_index) - 1);
+ index_mask &= 0xFFFFFF;
+ }
+ return index_mask;
+}
+
/*
* The thread context register words are in big-endian format.
*/
int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint32_t logic_serv)
+ bool crowd, bool cam_ignore,
+ uint32_t logic_serv)
{
uint32_t cam = xive2_nvp_cam_line(nvt_blk, nvt_idx);
uint32_t qw3w2 = xive_tctx_word2(&tctx->regs[TM_QW3_HV_PHYS]);
@@ -1131,7 +1160,8 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
- uint32_t vp_mask = 0xFFFFFFFF;
+ uint32_t index_mask, vp_mask;
+ uint8_t block_mask;
if (format == 0) {
/*
@@ -1139,9 +1169,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
* i=1: VP-group notification (bits ignored at the end of the
* NVT identifier)
*/
- if (cam_ignore) {
- vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
- }
+ block_mask = xive2_get_vp_block_mask(nvt_blk, crowd);
+ index_mask = xive2_get_vp_index_mask(nvt_idx, cam_ignore);
+ vp_mask = xive2_nvp_cam_line(block_mask, index_mask);
/* For VP-group notifications, threads with LGS=0 are excluded */
@@ -1274,6 +1304,12 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
+    if (xive2_end_is_crowd(&end) && !xive2_end_is_ignore(&end)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "XIVE: invalid END, 'crowd' bit requires 'ignore' bit\n");
+ return;
+ }
+
if (xive2_end_is_enqueue(&end)) {
xive2_end_enqueue(&end, end_data);
/* Enqueuing event data modifies the EQ toggle and index */
@@ -1335,7 +1371,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
}
found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
- xive2_end_is_ignore(&end),
+ xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
priority,
xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
&precluded);
@@ -1372,17 +1408,24 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
} else {
- Xive2Nvgc nvg;
+ Xive2Nvgc nvgc;
uint32_t backlog;
+ bool crowd;
- /* For groups, the per-priority backlog counters are in the NVG */
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
- nvp_blk, nvp_idx);
+ crowd = xive2_end_is_crowd(&end);
+
+ /*
+ * For groups and crowds, the per-priority backlog
+ * counters are stored in the NVG/NVC structures
+ */
+ if (xive2_router_get_nvgc(xrtr, crowd,
+ nvp_blk, nvp_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
+ crowd ? "NVC" : "NVG", nvp_blk, nvp_idx);
return;
}
- if (!xive2_nvgc_is_valid(&nvg)) {
+ if (!xive2_nvgc_is_valid(&nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
nvp_blk, nvp_idx);
return;
@@ -1395,13 +1438,16 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
* set the LSMFB field of the TIMA of relevant threads so
* that they know an interrupt is pending.
*/
- backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
- xive2_nvgc_set_backlog(&nvg, priority, backlog);
- xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+ backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1;
+ xive2_nvgc_set_backlog(&nvgc, priority, backlog);
+ xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc);
if (precluded && backlog == 1) {
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
- xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+ xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx,
+ xive2_end_is_crowd(&end),
+ xive2_end_is_ignore(&end),
+ priority);
if (!xive2_end_is_precluded_escalation(&end)) {
/*
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 6c76f65936..419f65607a 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2583,7 +2583,7 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv,
XiveTCTXMatch *match)
{
@@ -2597,8 +2597,8 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
- priority, logic_serv, match);
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
if (count < 0) {
return count;
@@ -2612,7 +2612,7 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv,
XiveTCTXMatch *match)
{
@@ -2626,8 +2626,8 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
- priority, logic_serv, match);
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
if (count < 0) {
return count;
@@ -2641,6 +2641,7 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
static int pnv10_xive_broadcast(XiveFabric *xfb,
uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore,
uint8_t priority)
{
PnvMachineState *pnv = PNV_MACHINE(xfb);
@@ -2651,7 +2652,7 @@ static int pnv10_xive_broadcast(XiveFabric *xfb,
XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
- xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+ xpc->broadcast(xptr, nvt_blk, nvt_idx, crowd, cam_ignore, priority);
}
return 0;
}
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 5c02037c56..5fdd9ad915 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4437,7 +4437,7 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
*/
static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
@@ -4445,7 +4445,7 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
priority, logic_serv, match);
if (count < 0) {
return count;
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 10/14] ppc/xive2: Check crowd backlog when scanning group backlog
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (16 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
` (6 subsequent siblings)
24 siblings, 0 replies; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When processing a backlog scan for group interrupts, also take
into account crowd interrupts.
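For reference, a minimal sketch of how the combined 6-bit level is split,
mirroring the NVx_CROWD_LVL/NVx_GROUP_LVL macros introduced below (the
value is made up purely for illustration):

    uint8_t level = 0x23;                    /* example value only */
    uint8_t crowd_lvl = (level >> 4) & 0b11; /* 0b10 -> crowd of 4 blocks */
    uint8_t group_lvl = level & 0b1111;      /* 0b0011 -> group of 8 VPs  */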
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2_regs.h | 4 ++
hw/intc/xive2.c | 82 +++++++++++++++++++++++++------------
2 files changed, 60 insertions(+), 26 deletions(-)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 66a419441c..89236b9aaf 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -237,4 +237,8 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
+/* split the 6-bit crowd/group level */
+#define NVx_CROWD_LVL(level) ((level >> 4) & 0b11)
+#define NVx_GROUP_LVL(level) (level & 0b1111)
+
#endif /* PPC_XIVE2_REGS_H */
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 1f2837104c..41d689eaab 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -367,6 +367,35 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
+static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx,
+ uint8_t next_level)
+{
+ uint32_t mask, next_idx;
+ uint8_t next_blk;
+
+ /*
+ * Adjust the block and index of a VP for the next group/crowd
+ * size (PGofFirst/PGofNext field in the NVP and NVGC structures).
+ *
+ * The 6-bit group level is split into a 2-bit crowd and 4-bit
+ * group levels. Encoding is similar. However, we don't support
+ * crowd size of 8. So a crowd level of 0b11 is bumped to a crowd
+ * size of 16.
+ */
+ next_blk = NVx_CROWD_LVL(next_level);
+ if (next_blk == 3) {
+ next_blk = 4;
+ }
+ mask = (1 << next_blk) - 1;
+ *nvgc_blk &= ~mask;
+ *nvgc_blk |= mask >> 1;
+
+ next_idx = NVx_GROUP_LVL(next_level);
+ mask = (1 << next_idx) - 1;
+ *nvgc_idx &= ~mask;
+ *nvgc_idx |= mask >> 1;
+}
+
/*
* Scan the group chain and return the highest priority and group
* level of pending group interrupts.
@@ -377,29 +406,28 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
uint8_t *out_level)
{
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvgc_idx, mask;
+ uint32_t nvgc_idx;
uint32_t current_level, count;
- uint8_t prio;
+ uint8_t nvgc_blk, prio;
Xive2Nvgc nvgc;
for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
- current_level = first_group & 0xF;
+ current_level = first_group & 0x3F;
+ nvgc_blk = nvp_blk;
+ nvgc_idx = nvp_idx;
while (current_level) {
- mask = (1 << current_level) - 1;
- nvgc_idx = nvp_idx & ~mask;
- nvgc_idx |= mask >> 1;
- qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
- __func__, prio, nvgc_idx);
-
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level);
+
+ if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(current_level),
+ nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return 0xFF;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return 0xFF;
}
@@ -408,7 +436,7 @@ static uint8_t xive2_presenter_backlog_check(XivePresenter *xptr,
*out_level = current_level;
return prio;
}
- current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
+ current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0x3F;
}
}
return 0xFF;
@@ -420,22 +448,23 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
uint8_t group_level)
{
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvgc_idx, mask, count;
+ uint32_t nvgc_idx, count;
+ uint8_t nvgc_blk;
Xive2Nvgc nvgc;
- group_level &= 0xF;
- mask = (1 << group_level) - 1;
- nvgc_idx = nvp_idx & ~mask;
- nvgc_idx |= mask >> 1;
+ nvgc_blk = nvp_blk;
+ nvgc_idx = nvp_idx;
+ xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level);
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level),
+ nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return;
}
count = xive2_nvgc_get_backlog(&nvgc, group_prio);
@@ -443,7 +472,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
return;
}
xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
- xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
+ xive2_router_write_nvgc(xrtr, NVx_CROWD_LVL(group_level),
+ nvgc_blk, nvgc_idx, &nvgc);
}
/*
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 10/14] ppc/xive2: Support crowd-matching when looking for target
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (17 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 7:31 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
` (5 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
XIVE crowd sizes are encoded into a 2-bit field as follows:
0: 0b00
2: 0b01
4: 0b10
16: 0b11
A crowd size of 8 is not supported.
If an END is defined with the 'crowd' bit set, then the target can be
running on different blocks. It means that some bits of the VP block
are masked when looking for a match. It is similar to groups, but
applied to the block instead of the VP index.
Most of the changes are due to passing the extra argument 'crowd' all
the way to the function checking for matches.
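As an illustration of the block masking described above (a sketch based on
the crowd encoding in this patch, not the literal code): with a crowd of 4,
the two low bits of the VP block are ignored when comparing CAM lines.

    /* Illustration only: an END whose VP block is 0b0101 (the low "01"
     * pattern encodes a crowd of 4 blocks) */
    uint8_t nvt_blk = 0b0101;
    uint8_t size = 4;                          /* xive_get_vpgroup_size(nvt_blk) */
    uint8_t block_mask = ~(size - 1) & 0b1111; /* 0b1100 */
    /* threads in blocks 0b0100..0b0111 match:
     * (thread_blk & block_mask) == (nvt_blk & block_mask) */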
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive.h | 10 +++---
include/hw/ppc/xive2.h | 3 +-
hw/intc/pnv_xive.c | 10 +++---
hw/intc/pnv_xive2.c | 12 +++----
hw/intc/spapr_xive.c | 8 ++---
hw/intc/xive.c | 40 ++++++++++++++++++----
hw/intc/xive2.c | 78 +++++++++++++++++++++++++++++++++---------
hw/ppc/pnv.c | 15 ++++----
hw/ppc/spapr.c | 7 ++--
9 files changed, 131 insertions(+), 52 deletions(-)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index f443a39cf1..8317fde0db 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -438,13 +438,13 @@ struct XivePresenterClass {
InterfaceClass parent;
int (*match_nvt)(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
int (*broadcast)(XivePresenter *xptr,
uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority);
+ bool crowd, bool cam_ignore, uint8_t priority);
};
int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
@@ -453,7 +453,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool cam_ignore, uint32_t logic_serv);
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, bool *precluded);
uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
@@ -473,10 +473,10 @@ struct XiveFabricClass {
InterfaceClass parent;
int (*match_nvt)(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match);
int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority);
+ bool crowd, bool cam_ignore, uint8_t priority);
};
/*
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index c07e23e1d3..8cdf819174 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -88,7 +88,8 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint32_t logic_serv);
+ bool crowd, bool cam_ignore,
+ uint32_t logic_serv);
uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
uint8_t blk, uint32_t idx,
diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
index 5bacbce6a4..d4796ab5a6 100644
--- a/hw/intc/pnv_xive.c
+++ b/hw/intc/pnv_xive.c
@@ -1,10 +1,9 @@
/*
* QEMU PowerPC XIVE interrupt controller model
*
- * Copyright (c) 2017-2019, IBM Corporation.
+ * Copyright (c) 2017-2024, IBM Corporation.
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
@@ -473,7 +472,7 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive *xive = PNV_XIVE(xptr);
@@ -500,7 +499,8 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
* Check the thread context CAM lines and record matches.
*/
ring = xive_presenter_tctx_match(xptr, tctx, format, nvt_blk,
- nvt_idx, cam_ignore, logic_serv);
+ nvt_idx, cam_ignore,
+ logic_serv);
/*
* Save the context and follow on to catch duplicates, that we
* don't support yet.
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 54abfe3947..91f3514f93 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -624,7 +624,7 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
@@ -655,8 +655,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
logic_serv);
} else {
ring = xive2_presenter_tctx_match(xptr, tctx, format, nvt_blk,
- nvt_idx, cam_ignore,
- logic_serv);
+ nvt_idx, crowd, cam_ignore,
+ logic_serv);
}
if (ring != -1) {
@@ -707,7 +707,7 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
static int pnv_xive2_broadcast(XivePresenter *xptr,
uint8_t nvt_blk, uint32_t nvt_idx,
- uint8_t priority)
+ bool crowd, bool ignore, uint8_t priority)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
PnvChip *chip = xive->chip;
@@ -732,10 +732,10 @@ static int pnv_xive2_broadcast(XivePresenter *xptr,
if (gen1_tima_os) {
ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
- nvt_idx, true, 0);
+ nvt_idx, ignore, 0);
} else {
ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
- nvt_idx, true, 0);
+ nvt_idx, crowd, ignore, 0);
}
if (ring != -1) {
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
index 283a6b8fd2..0477fdd594 100644
--- a/hw/intc/spapr_xive.c
+++ b/hw/intc/spapr_xive.c
@@ -1,10 +1,9 @@
/*
* QEMU PowerPC sPAPR XIVE interrupt controller model
*
- * Copyright (c) 2017-2018, IBM Corporation.
+ * Copyright (c) 2017-2024, IBM Corporation.
*
- * This code is licensed under the GPL version 2 or later. See the
- * COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
@@ -431,7 +430,8 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore,
+ uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
CPUState *cs;
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 308de5aefc..97d1c42bb2 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1667,10 +1667,37 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
return 1 << (ctz32(~nvp_index) + 1);
}
-static uint8_t xive_get_group_level(uint32_t nvp_index)
+static uint8_t xive_get_group_level(bool crowd, bool ignore,
+ uint32_t nvp_blk, uint32_t nvp_index)
{
- /* FIXME add crowd encoding */
- return ctz32(~nvp_index) + 1;
+ uint8_t level = 0;
+
+ if (crowd) {
+ /* crowd level is bit position of first 0 from the right in nvp_blk */
+ level = ctz32(~nvp_blk) + 1;
+
+ /*
+ * Supported crowd sizes are 2^1, 2^2, and 2^4. 2^3 is not supported.
+ * HW will encode level 4 as the value 3. See xive2_pgofnext().
+ */
+ switch (level) {
+ case 1:
+ case 2:
+ break;
+ case 4:
+ level = 3;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ /* Crowd level bits reside in upper 2 bits of the 6 bit group level */
+ level <<= 4;
+ }
+ if (ignore) {
+ level |= (ctz32(~nvp_index) + 1) & 0b1111;
+ }
+ return level;
}
/*
@@ -1742,7 +1769,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
*/
bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, bool *precluded)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
@@ -1773,7 +1800,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
* a new command to the presenters (the equivalent of the "assign"
* power bus command in the documented full notify sequence.
*/
- count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
+ count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
priority, logic_serv, &match);
if (count < 0) {
return false;
@@ -1781,7 +1808,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
/* handle CPU exception delivery */
if (count) {
- group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
+ group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
} else {
@@ -1906,6 +1933,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
}
found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
+ false /* crowd */,
xive_get_field32(END_W7_F0_IGNORE, end.w7),
priority,
xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index f4621bdd02..20d63e8f6e 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1120,13 +1120,42 @@ static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
return (cam1 & vp_mask) == (cam2 & vp_mask);
}
+static uint8_t xive2_get_vp_block_mask(uint32_t nvt_blk, bool crowd)
+{
+ uint8_t size, block_mask = 0b1111;
+
+ /* 3 supported crowd sizes: 2, 4, 16 */
+ if (crowd) {
+ size = xive_get_vpgroup_size(nvt_blk);
+ if (size == 8) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid crowd size of 8n");
+ return block_mask;
+ }
+ block_mask = ~(size - 1);
+ block_mask &= 0b1111;
+ }
+ return block_mask;
+}
+
+static uint32_t xive2_get_vp_index_mask(uint32_t nvt_index, bool cam_ignore)
+{
+ uint32_t index_mask = 0xFFFFFF; /* 24 bits */
+
+ if (cam_ignore) {
+ index_mask = ~(xive_get_vpgroup_size(nvt_index) - 1);
+ index_mask &= 0xFFFFFF;
+ }
+ return index_mask;
+}
+
/*
* The thread context register words are in big-endian format.
*/
int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint32_t logic_serv)
+ bool crowd, bool cam_ignore,
+ uint32_t logic_serv)
{
uint32_t cam = xive2_nvp_cam_line(nvt_blk, nvt_idx);
uint32_t qw3w2 = xive_tctx_word2(&tctx->regs[TM_QW3_HV_PHYS]);
@@ -1134,7 +1163,8 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
- uint32_t vp_mask = 0xFFFFFFFF;
+ uint32_t index_mask, vp_mask;
+ uint8_t block_mask;
if (format == 0) {
/*
@@ -1142,9 +1172,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
* i=1: VP-group notification (bits ignored at the end of the
* NVT identifier)
*/
- if (cam_ignore) {
- vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
- }
+ block_mask = xive2_get_vp_block_mask(nvt_blk, crowd);
+ index_mask = xive2_get_vp_index_mask(nvt_idx, cam_ignore);
+ vp_mask = xive2_nvp_cam_line(block_mask, index_mask);
/* For VP-group notifications, threads with LGS=0 are excluded */
@@ -1277,6 +1307,12 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
+ if (xive2_end_is_crowd(&end) & !xive2_end_is_ignore(&end)) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "XIVE: invalid END, 'crowd' bit requires 'ignore' bit\n");
+ return;
+ }
+
if (xive2_end_is_enqueue(&end)) {
xive2_end_enqueue(&end, end_data);
/* Enqueuing event data modifies the EQ toggle and index */
@@ -1338,7 +1374,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
}
found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
- xive2_end_is_ignore(&end),
+ xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
priority,
xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
&precluded);
@@ -1375,17 +1411,24 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
} else {
- Xive2Nvgc nvg;
+ Xive2Nvgc nvgc;
uint32_t backlog;
+ bool crowd;
- /* For groups, the per-priority backlog counters are in the NVG */
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
- nvp_blk, nvp_idx);
+ crowd = xive2_end_is_crowd(&end);
+
+ /*
+ * For groups and crowds, the per-priority backlog
+ * counters are stored in the NVG/NVC structures
+ */
+ if (xive2_router_get_nvgc(xrtr, crowd,
+ nvp_blk, nvp_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
+ crowd ? "NVC" : "NVG", nvp_blk, nvp_idx);
return;
}
- if (!xive2_nvgc_is_valid(&nvg)) {
+ if (!xive2_nvgc_is_valid(&nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
nvp_blk, nvp_idx);
return;
@@ -1398,13 +1441,16 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
* set the LSMFB field of the TIMA of relevant threads so
* that they know an interrupt is pending.
*/
- backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
- xive2_nvgc_set_backlog(&nvg, priority, backlog);
- xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
+ backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1;
+ xive2_nvgc_set_backlog(&nvgc, priority, backlog);
+ xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc);
if (precluded && backlog == 1) {
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
- xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
+ xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx,
+ xive2_end_is_crowd(&end),
+ xive2_end_is_ignore(&end),
+ priority);
if (!xive2_end_is_precluded_escalation(&end)) {
/*
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 7c11143749..6681648ed6 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2585,7 +2585,7 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv,
XiveTCTXMatch *match)
{
@@ -2599,8 +2599,8 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
- priority, logic_serv, match);
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
if (count < 0) {
return count;
@@ -2614,7 +2614,7 @@ static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv,
XiveTCTXMatch *match)
{
@@ -2628,8 +2628,8 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
- priority, logic_serv, match);
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
if (count < 0) {
return count;
@@ -2643,6 +2643,7 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
static int pnv10_xive_broadcast(XiveFabric *xfb,
uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore,
uint8_t priority)
{
PnvMachineState *pnv = PNV_MACHINE(xfb);
@@ -2653,7 +2654,7 @@ static int pnv10_xive_broadcast(XiveFabric *xfb,
XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
- xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
+ xpc->broadcast(xptr, nvt_blk, nvt_idx, crowd, cam_ignore, priority);
}
return 0;
}
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 0d4efaa0c0..7a922ef309 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4,6 +4,9 @@
* Copyright (c) 2004-2007 Fabrice Bellard
* Copyright (c) 2007 Jocelyn Mayer
* Copyright (c) 2010 David Gibson, IBM Corporation.
+ * Copyright (c) 2010-2024, IBM Corporation.
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
@@ -4437,7 +4440,7 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
*/
static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
- bool cam_ignore, uint8_t priority,
+ bool crowd, bool cam_ignore, uint8_t priority,
uint32_t logic_serv, XiveTCTXMatch *match)
{
SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
@@ -4445,7 +4448,7 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
+ count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
priority, logic_serv, match);
if (count < 0) {
return count;
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (18 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 5:15 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 11/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
` (4 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.vnet.ibm.com>
XIVE crowd sizes are encoded into a 2-bit field as follows:
0: 0b00
2: 0b01
4: 0b10
16: 0b11
A crowd size of 8 is not supported.
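A minimal sketch of the mapping above, in the spirit of the
xive_get_group_level() change below (crowd_size_to_field is an
illustrative helper, not part of the patch):

    /* Illustration only: encode a supported crowd size into the 2-bit field */
    static uint8_t crowd_size_to_field(unsigned int size)
    {
        switch (size) {
        case 0:  return 0b00;
        case 2:  return 0b01;
        case 4:  return 0b10;
        case 16: return 0b11;             /* crowd level 4 is encoded as 3 */
        default: g_assert_not_reached();  /* 8 (and anything else) unsupported */
        }
    }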
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 2a7ce72606..df77098dd7 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1687,7 +1687,26 @@ static uint8_t xive_get_group_level(bool crowd, bool ignore,
uint8_t level = 0;
if (crowd) {
- level = ((ctz32(~nvp_blk) + 1) & 0b11) << 4;
+ /* crowd level is bit position of first 0 from the right in nvp_blk */
+ level = ctz32(~nvp_blk) + 1;
+
+ /*
+ * Supported crowd sizes are 2^1, 2^2, and 2^4. 2^3 is not supported.
+ * HW will encode level 4 as the value 3. See xive2_pgofnext().
+ */
+ switch (level) {
+ case 1:
+ case 2:
+ break;
+ case 4:
+ level = 3;
+ break;
+ default:
+ g_assert_not_reached();
+ }
+
+ /* Crowd level bits reside in upper 2 bits of the 6 bit group level */
+ level <<= 4;
}
if (ignore) {
level |= (ctz32(~nvp_index) + 1) & 0b1111;
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 11/14] ppc/xive2: Check crowd backlog when scanning group backlog
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (19 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 7:32 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 12/14] pnv/xive: Support ESB Escalation Michael Kowal
` (3 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Frederic Barrat <fbarrat@linux.ibm.com>
When processing a backlog scan for group interrupts, also take
into account crowd interrupts.
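As a worked example of the PGofNext walk implemented below (block and
index values are made up): a level of 0x23 means a crowd of 4 blocks and
a group of 8 VPs, so the scan jumps to the NVGC entry whose low block
bits are 0b01 and whose low index bits are 0b011.

    uint8_t  nvgc_blk = 0b0110;     /* example VP block */
    uint32_t nvgc_idx = 0x31;       /* example VP index, 0b110001 */

    xive2_pgofnext(&nvgc_blk, &nvgc_idx, 0x23);
    /* -> nvgc_blk = 0b0101 (5), nvgc_idx = 0b110011 (0x33) */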
Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2_regs.h | 4 ++
hw/intc/xive2.c | 82 +++++++++++++++++++++++++------------
2 files changed, 60 insertions(+), 26 deletions(-)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 9bcf7a8a6f..b11395c563 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -236,4 +236,8 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
+/* split the 6-bit crowd/group level */
+#define NVx_CROWD_LVL(level) ((level >> 4) & 0b11)
+#define NVx_GROUP_LVL(level) (level & 0b1111)
+
#endif /* PPC_XIVE2_REGS_H */
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 20d63e8f6e..c29d8e4831 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -366,6 +366,35 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
+static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx,
+ uint8_t next_level)
+{
+ uint32_t mask, next_idx;
+ uint8_t next_blk;
+
+ /*
+ * Adjust the block and index of a VP for the next group/crowd
+ * size (PGofFirst/PGofNext field in the NVP and NVGC structures).
+ *
+ * The 6-bit group level is split into a 2-bit crowd and 4-bit
+ * group levels. Encoding is similar. However, we don't support
+ * crowd size of 8. So a crowd level of 0b11 is bumped to a crowd
+ * size of 16.
+ */
+ next_blk = NVx_CROWD_LVL(next_level);
+ if (next_blk == 3) {
+ next_blk = 4;
+ }
+ mask = (1 << next_blk) - 1;
+ *nvgc_blk &= ~mask;
+ *nvgc_blk |= mask >> 1;
+
+ next_idx = NVx_GROUP_LVL(next_level);
+ mask = (1 << next_idx) - 1;
+ *nvgc_idx &= ~mask;
+ *nvgc_idx |= mask >> 1;
+}
+
/*
* Scan the group chain and return the highest priority and group
* level of pending group interrupts.
@@ -376,29 +405,28 @@ static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
uint8_t *out_level)
{
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvgc_idx, mask;
+ uint32_t nvgc_idx;
uint32_t current_level, count;
- uint8_t prio;
+ uint8_t nvgc_blk, prio;
Xive2Nvgc nvgc;
for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
- current_level = first_group & 0xF;
+ current_level = first_group & 0x3F;
+ nvgc_blk = nvp_blk;
+ nvgc_idx = nvp_idx;
while (current_level) {
- mask = (1 << current_level) - 1;
- nvgc_idx = nvp_idx & ~mask;
- nvgc_idx |= mask >> 1;
- qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
- __func__, prio, nvgc_idx);
-
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level);
+
+ if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(current_level),
+ nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return 0xFF;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return 0xFF;
}
@@ -407,7 +435,7 @@ static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
*out_level = current_level;
return prio;
}
- current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
+ current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0x3F;
}
}
return 0xFF;
@@ -419,22 +447,23 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
uint8_t group_level)
{
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvgc_idx, mask, count;
+ uint32_t nvgc_idx, count;
+ uint8_t nvgc_blk;
Xive2Nvgc nvgc;
- group_level &= 0xF;
- mask = (1 << group_level) - 1;
- nvgc_idx = nvp_idx & ~mask;
- nvgc_idx |= mask >> 1;
+ nvgc_blk = nvp_blk;
+ nvgc_idx = nvp_idx;
+ xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level);
- if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level),
+ nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
- nvp_blk, nvgc_idx);
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
+ nvgc_blk, nvgc_idx);
return;
}
count = xive2_nvgc_get_backlog(&nvgc, group_prio);
@@ -442,7 +471,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
return;
}
xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
- xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
+ xive2_router_write_nvgc(xrtr, NVx_CROWD_LVL(group_level),
+ nvgc_blk, nvgc_idx, &nvgc);
}
/*
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 12/14] pnv/xive: Support ESB Escalation
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (20 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 11/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 8:07 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
` (2 subsequent siblings)
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.vnet.ibm.com>
END notification processing has an escalation path. The escalation is
not always an END escalation but can be an ESB escalation.
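In outline, the escalation branch added to xive2_router_end_notify()
behaves like the sketch below (simplified; the esc_* names stand in for
the END2_W4/W5 fields extracted in the real hunk):

    if (xive2_end_is_escalate_end(&end)) {
        /* END escalation: the END trigger becomes an escalation trigger */
        xive2_router_end_notify(xrtr, esc_end_blk, esc_end_idx, esc_end_data);
    } else {
        /* ESB escalation: build a LISN from the escalation ESB block/index
         * and re-enter the source notification path */
        uint32_t lisn = XIVE_EAS(esc_esb_blk, esc_esb_idx);
        xive2_notify(xrtr, lisn, true /* pq_checked */);
    }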
Also added a check for 'resume' processing, which logs a message stating
that it needs to be implemented. This is not needed at this time but is
part of the END notification processing.
This change was taken from a patch provided by Michael Kowal.
Suggested-by: Michael Kowal <kowal@us.ibm.com>
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
include/hw/ppc/xive2.h | 1 +
include/hw/ppc/xive2_regs.h | 13 +++++---
hw/intc/xive2.c | 61 +++++++++++++++++++++++++++++--------
3 files changed, 58 insertions(+), 17 deletions(-)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 8cdf819174..2436ddb5e5 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -80,6 +80,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
uint32_t xive2_router_get_config(Xive2Router *xrtr);
void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
+void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
/*
* XIVE2 Presenter (POWER10)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index b11395c563..164d61e605 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -39,15 +39,18 @@
typedef struct Xive2Eas {
uint64_t w;
-#define EAS2_VALID PPC_BIT(0)
-#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
-#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
-#define EAS2_MASKED PPC_BIT(32) /* Masked */
-#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
+#define EAS2_VALID PPC_BIT(0)
+#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
+#define EAS2_RESUME PPC_BIT(3) /* END Resume (unimp) */
+#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
+#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
+#define EAS2_MASKED PPC_BIT(32) /* Masked */
+#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
} Xive2Eas;
#define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
#define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
+#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index c29d8e4831..44b7743b2b 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1514,18 +1514,39 @@ do_escalation:
}
}
- /*
- * The END trigger becomes an Escalation trigger
- */
- xive2_router_end_notify(xrtr,
- xive_get_field32(END2_W4_END_BLOCK, end.w4),
- xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
- xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ if (xive2_end_is_escalate_end(&end)) {
+ /*
+ * Perform END Adaptive escalation processing
+ * The END trigger becomes an Escalation trigger
+ */
+ xive2_router_end_notify(xrtr,
+ xive_get_field32(END2_W4_END_BLOCK, end.w4),
+ xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
+ xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ } /* end END adaptive escalation */
+
+ else {
+ uint32_t lisn; /* Logical Interrupt Source Number */
+
+ /*
+ * Perform ESB escalation processing
+ * E[N] == 1 --> N
+ * Req[Block] <- E[ESB_Block]
+ * Req[Index] <- E[ESB_Index]
+ * Req[Offset] <- 0x000
+ * Execute <ESB Store> Req command
+ */
+ lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
+ xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
+
+ xive2_notify(xrtr, lisn, true /* pq_checked */);
+ }
+
+ return;
}
-void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked)
{
- Xive2Router *xrtr = XIVE2_ROUTER(xn);
uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
Xive2Eas eas;
@@ -1568,13 +1589,29 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
return;
}
+ /* TODO: add support for EAS resume if ever needed */
+ if (xive2_eas_is_resume(&eas)) {
+ qemu_log_mask(LOG_UNIMP,
+ "XIVE: EAS resume processing unimplemented - LISN %x\n",
+ lisn);
+ return;
+ }
+
/*
* The event trigger becomes an END trigger
*/
xive2_router_end_notify(xrtr,
- xive_get_field64(EAS2_END_BLOCK, eas.w),
- xive_get_field64(EAS2_END_INDEX, eas.w),
- xive_get_field64(EAS2_END_DATA, eas.w));
+ xive_get_field64(EAS2_END_BLOCK, eas.w),
+ xive_get_field64(EAS2_END_INDEX, eas.w),
+ xive_get_field64(EAS2_END_DATA, eas.w));
+}
+
+void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xn);
+
+ xive2_notify(xrtr, lisn, pq_checked);
+ return;
}
static Property xive2_router_properties[] = {
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 13/14] pnv/xive: Fix problem with treating NVGC as a NVP
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (21 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 12/14] pnv/xive: Support ESB Escalation Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 5:19 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
2025-03-11 13:16 ` [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Nicholas Piggin
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.ibm.com>
When booting with PHYP, the blk/index for an NVGC was being
mistakenly treated as the blk/index for an NVP. Renamed
nvp_blk/nvp_idx throughout the code to nvx_blk/nvx_idx to prevent
future confusion, and the NVP is now loaded only at the point where
we know that the block and index actually point to an NVP.
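In outline, the corrected flow only dereferences an NVP once the 'ignore'
bit has ruled out a group/crowd target (a sketch of the ordering, not the
literal hunk):

    nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
    nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);

    if (!xive2_end_is_ignore(&end)) {
        /* nvx_blk/nvx_idx designate an NVP: safe to load it now */
        xive2_router_get_nvp(xrtr, nvx_blk, nvx_idx, &nvp);
    } else {
        /* nvx_blk/nvx_idx designate an NVG or NVC entry instead */
        xive2_router_get_nvgc(xrtr, xive2_end_is_crowd(&end),
                              nvx_blk, nvx_idx, &nvgc);
    }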
Suggested-by: Michael Kowal <kowal@us.ibm.com>
Fixes: ("ppc/xive2: Support crowd-matching when looking for target")
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/xive2.c | 78 ++++++++++++++++++++++++-------------------------
1 file changed, 39 insertions(+), 39 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 44b7743b2b..07f2d20aec 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -225,8 +225,8 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
uint32_t qentries = 1 << (qsize + 10);
- uint32_t nvp_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
- uint32_t nvp_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
+ uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
+ uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
uint8_t priority = xive_get_field32(END2_W7_F0_PRIORITY, end->w7);
uint8_t pq;
@@ -255,7 +255,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
xive2_end_is_firmware2(end) ? 'F' : '-',
xive2_end_is_ignore(end) ? 'i' : '-',
xive2_end_is_crowd(end) ? 'c' : '-',
- priority, nvp_blk, nvp_idx);
+ priority, nvx_blk, nvx_idx);
if (qaddr_base) {
g_string_append_printf(buf, " eq:@%08"PRIx64"% 6d/%5d ^%d",
@@ -400,7 +400,7 @@ static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx,
* level of pending group interrupts.
*/
static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
- uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t nvx_blk, uint32_t nvx_idx,
uint8_t first_group,
uint8_t *out_level)
{
@@ -412,8 +412,8 @@ static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
current_level = first_group & 0x3F;
- nvgc_blk = nvp_blk;
- nvgc_idx = nvp_idx;
+ nvgc_blk = nvx_blk;
+ nvgc_idx = nvx_idx;
while (current_level) {
xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level);
@@ -442,7 +442,7 @@ static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
}
static void xive2_presenter_backlog_decr(XivePresenter *xptr,
- uint8_t nvp_blk, uint32_t nvp_idx,
+ uint8_t nvx_blk, uint32_t nvx_idx,
uint8_t group_prio,
uint8_t group_level)
{
@@ -451,8 +451,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
uint8_t nvgc_blk;
Xive2Nvgc nvgc;
- nvgc_blk = nvp_blk;
- nvgc_idx = nvp_idx;
+ nvgc_blk = nvx_blk;
+ nvgc_idx = nvx_idx;
xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level);
if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level),
@@ -1320,9 +1320,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
uint8_t priority;
uint8_t format;
bool found, precluded;
- Xive2Nvp nvp;
- uint8_t nvp_blk;
- uint32_t nvp_idx;
+ uint8_t nvx_blk;
+ uint32_t nvx_idx;
/* END cache lookup */
if (xive2_router_get_end(xrtr, end_blk, end_idx, &end)) {
@@ -1387,23 +1386,10 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
/*
* Follows IVPE notification
*/
- nvp_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
- nvp_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
-
- /* NVP cache lookup */
- if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVP %x/%x\n",
- nvp_blk, nvp_idx);
- return;
- }
-
- if (!xive2_nvp_is_valid(&nvp)) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVP %x/%x is invalid\n",
- nvp_blk, nvp_idx);
- return;
- }
+ nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
+ nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
- found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
+ found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx,
xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
priority,
xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
@@ -1431,6 +1417,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
if (!xive2_end_is_ignore(&end)) {
uint8_t ipb;
+ Xive2Nvp nvp;
+
+ /* NVP cache lookup */
+ if (xive2_router_get_nvp(xrtr, nvx_blk, nvx_idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVP %x/%x\n",
+ nvx_blk, nvx_idx);
+ return;
+ }
+
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVP %x/%x is invalid\n",
+ nvx_blk, nvx_idx);
+ return;
+ }
+
/*
* Record the IPB in the associated NVP structure for later
* use. The presenter will resend the interrupt when the vCPU
@@ -1439,7 +1440,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
xive_priority_to_ipb(priority);
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
- xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
+ xive2_router_write_nvp(xrtr, nvx_blk, nvx_idx, &nvp, 2);
} else {
Xive2Nvgc nvgc;
uint32_t backlog;
@@ -1452,32 +1453,31 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
* counters are stored in the NVG/NVC structures
*/
if (xive2_router_get_nvgc(xrtr, crowd,
- nvp_blk, nvp_idx, &nvgc)) {
+ nvx_blk, nvx_idx, &nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
- crowd ? "NVC" : "NVG", nvp_blk, nvp_idx);
+ crowd ? "NVC" : "NVG", nvx_blk, nvx_idx);
return;
}
if (!xive2_nvgc_is_valid(&nvgc)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
- nvp_blk, nvp_idx);
+ nvx_blk, nvx_idx);
return;
}
/*
* Increment the backlog counter for that priority.
- * For the precluded case, we only call broadcast the
- * first time the counter is incremented. broadcast will
- * set the LSMFB field of the TIMA of relevant threads so
- * that they know an interrupt is pending.
+ * We only call broadcast the first time the counter is
+ * incremented. broadcast will set the LSMFB field of the TIMA of
+ * relevant threads so that they know an interrupt is pending.
*/
backlog = xive2_nvgc_get_backlog(&nvgc, priority) + 1;
xive2_nvgc_set_backlog(&nvgc, priority, backlog);
- xive2_router_write_nvgc(xrtr, crowd, nvp_blk, nvp_idx, &nvgc);
+ xive2_router_write_nvgc(xrtr, crowd, nvx_blk, nvx_idx, &nvgc);
- if (precluded && backlog == 1) {
+ if (backlog == 1) {
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
- xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx,
+ xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx,
xive2_end_is_crowd(&end),
xive2_end_is_ignore(&end),
priority);
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 14/14] qtest/xive: Add test of pool interrupts
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (22 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
@ 2024-12-10 0:05 ` Michael Kowal
2025-03-10 8:20 ` Nicholas Piggin
2025-03-11 13:16 ` [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Nicholas Piggin
24 siblings, 1 reply; 41+ messages in thread
From: Michael Kowal @ 2024-12-10 0:05 UTC (permalink / raw)
To: qemu-devel
Cc: qemu-ppc, fbarrat, npiggin, milesg, danielhb413, david, harshpb,
thuth, lvivier, pbonzini
From: Glenn Miles <milesg@linux.ibm.com>
Added new test for pool interrupts. Removed all printfs from pnv-xive2-* qtests.
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
tests/qtest/pnv-xive2-flush-sync.c | 6 +-
tests/qtest/pnv-xive2-nvpg_bar.c | 7 +--
tests/qtest/pnv-xive2-test.c | 98 +++++++++++++++++++++++++++---
3 files changed, 94 insertions(+), 17 deletions(-)
diff --git a/tests/qtest/pnv-xive2-flush-sync.c b/tests/qtest/pnv-xive2-flush-sync.c
index 3b32446adb..142826bad0 100644
--- a/tests/qtest/pnv-xive2-flush-sync.c
+++ b/tests/qtest/pnv-xive2-flush-sync.c
@@ -178,14 +178,14 @@ void test_flush_sync_inject(QTestState *qts)
int test_nr;
uint8_t byte;
- printf("# ============================================================\n");
- printf("# Starting cache flush/queue sync injection tests...\n");
+ g_test_message("=========================================================");
+ g_test_message("Starting cache flush/queue sync injection tests...");
for (test_nr = 0; test_nr < sizeof(xive_inject_tests);
test_nr++) {
int op_type = xive_inject_tests[test_nr];
- printf("# Running test %d\n", test_nr);
+ g_test_message("Running test %d", test_nr);
/* start with status byte set to 0 */
clr_sync(qts, src_pir, ic_topo_id, op_type);
diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c
index 10d4962d1e..8481a70f22 100644
--- a/tests/qtest/pnv-xive2-nvpg_bar.c
+++ b/tests/qtest/pnv-xive2-nvpg_bar.c
@@ -4,8 +4,7 @@
*
* Copyright (c) 2024, IBM Corporation.
*
- * This work is licensed under the terms of the GNU GPL, version 2 or
- * later. See the COPYING file in the top-level directory.
+ * SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
#include "libqtest.h"
@@ -78,8 +77,8 @@ void test_nvpg_bar(QTestState *qts)
uint32_t count, delta;
uint8_t i;
- printf("# ============================================================\n");
- printf("# Testing NVPG BAR operations\n");
+ g_test_message("=========================================================");
+ g_test_message("Testing NVPG BAR operations");
set_nvg(qts, group_target, 0);
set_nvp(qts, nvp_target, 0x04);
diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
index a0e9f19313..5313d4ef18 100644
--- a/tests/qtest/pnv-xive2-test.c
+++ b/tests/qtest/pnv-xive2-test.c
@@ -4,6 +4,7 @@
* - Test 'Pull Thread Context to Odd Thread Reporting Line'
* - Test irq to hardware group
* - Test irq to hardware group going through backlog
+ * - Test irq to pool thread
*
* Copyright (c) 2024, IBM Corporation.
*
@@ -220,8 +221,8 @@ static void test_hw_irq(QTestState *qts)
uint16_t reg16;
uint8_t pq, nsr, cppr;
- printf("# ============================================================\n");
- printf("# Testing irq %d to hardware thread %d\n", irq, target_pir);
+ g_test_message("=========================================================");
+ g_test_message("Testing irq %d to hardware thread %d", irq, target_pir);
/* irq config */
set_eas(qts, irq, end_index, irq_data);
@@ -266,6 +267,79 @@ static void test_hw_irq(QTestState *qts)
g_assert_cmphex(cppr, ==, 0xFF);
}
+static void test_pool_irq(QTestState *qts)
+{
+ uint32_t irq = 2;
+ uint32_t irq_data = 0x600d0d06;
+ uint32_t end_index = 5;
+ uint32_t target_pir = 1;
+ uint32_t target_nvp = 0x100 + target_pir;
+ uint8_t priority = 5;
+ uint32_t reg32;
+ uint16_t reg16;
+ uint8_t pq, nsr, cppr, ipb;
+
+ g_test_message("=========================================================");
+ g_test_message("Testing irq %d to pool thread %d", irq, target_pir);
+
+ /* irq config */
+ set_eas(qts, irq, end_index, irq_data);
+ set_end(qts, end_index, target_nvp, priority, false /* group */);
+
+ /* enable and trigger irq */
+ get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
+ set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
+
+ /* check irq is raised on cpu */
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
+
+ /* check TIMA values in the PHYS ring (shared by POOL ring) */
+ reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x40);
+ g_assert_cmphex(cppr, ==, 0xFF);
+
+ /* check TIMA values in the POOL ring */
+ reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ ipb = (reg32 >> 8) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0);
+ g_assert_cmphex(cppr, ==, 0);
+ g_assert_cmphex(ipb, ==, 0x80 >> priority);
+
+ /* ack the irq */
+ reg16 = get_tima16(qts, target_pir, TM_SPC_ACK_HV_REG);
+ nsr = reg16 >> 8;
+ cppr = reg16 & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x40);
+ g_assert_cmphex(cppr, ==, priority);
+
+ /* check irq data is what was configured */
+ reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
+ g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
+
+ /* check IPB is cleared in the POOL ring */
+ reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0);
+ ipb = (reg32 >> 8) & 0xFF;
+ g_assert_cmphex(ipb, ==, 0);
+
+ /* End Of Interrupt */
+ set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
+ pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
+ g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
+
+ /* reset CPPR */
+ set_tima8(qts, target_pir, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
+ reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0);
+ nsr = reg32 >> 24;
+ cppr = (reg32 >> 16) & 0xFF;
+ g_assert_cmphex(nsr, ==, 0x00);
+ g_assert_cmphex(cppr, ==, 0xFF);
+}
+
#define XIVE_ODD_CL 0x80
static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
{
@@ -278,8 +352,9 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
uint32_t cl_word;
uint32_t word2;
- printf("# ============================================================\n");
- printf("# Testing 'Pull Thread Context to Odd Thread Reporting Line'\n");
+ g_test_message("=========================================================");
+ g_test_message("Testing 'Pull Thread Context to Odd Thread Reporting " \
+ "Line'");
/* clear odd cache line prior to pull operation */
memset(cl_pair, 0, sizeof(cl_pair));
@@ -330,8 +405,8 @@ static void test_hw_group_irq(QTestState *qts)
uint16_t reg16;
uint8_t pq, nsr, cppr;
- printf("# ============================================================\n");
- printf("# Testing irq %d to hardware group of size 4\n", irq);
+ g_test_message("=========================================================");
+ g_test_message("Testing irq %d to hardware group of size 4", irq);
/* irq config */
set_eas(qts, irq, end_index, irq_data);
@@ -395,10 +470,10 @@ static void test_hw_group_irq_backlog(QTestState *qts)
uint16_t reg16;
uint8_t pq, nsr, cppr, lsmfb, i;
- printf("# ============================================================\n");
- printf("# Testing irq %d to hardware group of size 4 going through " \
- "backlog\n",
- irq);
+ g_test_message("=========================================================");
+ g_test_message("Testing irq %d to hardware group of size 4 going " \
+ "through backlog",
+ irq);
/*
* set current priority of all threads in the group to something
@@ -484,6 +559,9 @@ static void test_xive(void)
/* omit reset_state here and use settings from test_hw_irq */
test_pull_thread_ctx_to_odd_thread_cl(qts);
+ reset_state(qts);
+ test_pool_irq(qts);
+
reset_state(qts);
test_hw_group_irq(qts);
--
2.43.0
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes
2024-12-10 0:05 ` [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
@ 2025-03-10 3:22 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 3:22 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> If the 'H' attribute is set on the NVP structure, the hardware
> automatically saves and restores some attributes from the TIMA in the
> NVP structure.
> The group-specific attributes LSMFB, LGS and T have an extra flag to
> individually control what is saved/restored.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> include/hw/ppc/xive2_regs.h | 10 +++++++---
> hw/intc/xive2.c | 23 ++++++++++++++++++-----
> 2 files changed, 25 insertions(+), 8 deletions(-)
>
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 1d00c8df64..e88d6eab1e 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -1,10 +1,9 @@
> /*
> * QEMU PowerPC XIVE2 internal structure definitions (POWER10)
> *
> - * Copyright (c) 2019-2022, IBM Corporation.
> + * Copyright (c) 2019-2024, IBM Corporation.
> *
> - * This code is licensed under the GPL version 2 or later. See the
> - * COPYING file in the top-level directory.
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
>
> #ifndef PPC_XIVE2_REGS_H
> @@ -152,6 +151,9 @@ typedef struct Xive2Nvp {
> uint32_t w0;
> #define NVP2_W0_VALID PPC_BIT32(0)
> #define NVP2_W0_HW PPC_BIT32(7)
> +#define NVP2_W0_L PPC_BIT32(8)
> +#define NVP2_W0_G PPC_BIT32(9)
> +#define NVP2_W0_T PPC_BIT32(10)
> #define NVP2_W0_ESC_END PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
> #define NVP2_W0_PGOFIRST PPC_BITMASK32(26, 31)
> uint32_t w1;
> @@ -163,6 +165,8 @@ typedef struct Xive2Nvp {
> #define NVP2_W2_CPPR PPC_BITMASK32(0, 7)
> #define NVP2_W2_IPB PPC_BITMASK32(8, 15)
> #define NVP2_W2_LSMFB PPC_BITMASK32(16, 23)
> +#define NVP2_W2_T PPC_BIT32(27)
> +#define NVP2_W2_LGS PPC_BITMASK32(28, 31)
> uint32_t w3;
> uint32_t w4;
> #define NVP2_W4_ESC_ESB_BLOCK PPC_BITMASK32(0, 3) /* N:0 */
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index d1df35e9b3..24e504fce1 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1,10 +1,9 @@
> /*
> * QEMU PowerPC XIVE2 interrupt controller model (POWER10)
> *
> - * Copyright (c) 2019-2022, IBM Corporation..
> + * Copyright (c) 2019-2024, IBM Corporation..
> *
> - * This code is licensed under the GPL version 2 or later. See the
> - * COPYING file in the top-level directory.
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
>
> #include "qemu/osdep.h"
> @@ -313,7 +312,19 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
>
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
> nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
> - nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
> + if (nvp.w0 & NVP2_W0_L) {
> + /*
> + * Typically not used. If LSMFB is restored with 0, it will
> + * force a backlog rescan
> + */
> + nvp.w2 = xive_set_field32(NVP2_W2_LSMFB, nvp.w2, regs[TM_LSMFB]);
> + }
> + if (nvp.w0 & NVP2_W0_G) {
> + nvp.w2 = xive_set_field32(NVP2_W2_LGS, nvp.w2, regs[TM_LGS]);
> + }
> + if (nvp.w0 & NVP2_W0_T) {
> + nvp.w2 = xive_set_field32(NVP2_W2_T, nvp.w2, regs[TM_T]);
> + }
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
>
> nvp.w1 = xive_set_field32(NVP2_W1_CO, nvp.w1, 0);
> @@ -527,7 +538,9 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
>
> tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
> - /* we don't model LSMFB */
> + tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
> + tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
> + tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
>
> nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
> nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 02/14] ppc/xive2: Add grouping level to notification
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
@ 2025-03-10 3:27 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 3:27 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> The NSR has a (so far unused) grouping level field. When an interrupt
> is presented, that field tells the hypervisor or OS if the interrupt
> is for an individual VP or for a VP-group/crowd. This patch reworks
> the presentation API to allow to set/unset the level when
> raising/accepting an interrupt.
>
> It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update() as
> the IPB is only used for VP-specific target, whereas the PIPR always
> needs to be updated.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
[...]
> @@ -495,16 +502,20 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> /* Reset the NVT value */
> nvt.w4 = xive_set_field32(NVT_W4_IPB, nvt.w4, 0);
> xive_router_write_nvt(xrtr, nvt_blk, nvt_idx, &nvt, 4);
> - }
> +
> + uint8_t *regs = &tctx->regs[TM_QW1_OS];
> + regs[TM_IPB] |= ipb;
> +}
Indentation bug here.
> +
> /*
> - * Always call xive_tctx_ipb_update(). Even if there were no
> + * Always call xive_tctx_pipr_update(). Even if there were no
> * escalation triggered, there could be a pending interrupt which
> * was saved when the context was pulled and that we need to take
> * into account by recalculating the PIPR (which is not
> * saved/restored).
> * It will also raise the External interrupt signal if needed.
> */
> - xive_tctx_ipb_update(tctx, TM_QW1_OS, ipb);
> + xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
> }
>
> /*
[...]
> @@ -594,15 +596,15 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> }
> + regs[TM_IPB] = ipb;
> + backlog_prio = xive_ipb_to_pipr(ipb);
> + backlog_level = 0;
There is also a bug here: ipb should be OR'ed into the IPB
reg (as the xive1 code above does).
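A minimal sketch of that fix, mirroring the xive1 path (deriving
backlog_prio from the accumulated IPB is my assumption here, it is not
spelled out above):

    regs[TM_IPB] |= ipb;                           /* OR, don't overwrite */
    backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]); /* assumption: use the full IPB */
    backlog_level = 0;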
We have this fixed up in our internal tree; I have just folded that
fix in here (Mike is on vacation, so I've been trying to help wrangle
the xive patches...).
Otherwise I think it looks good.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 03/14] ppc/xive2: Add grouping level to notification
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Add grouping level to notification Michael Kowal
@ 2025-03-10 3:43 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 3:43 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> The NSR has a (so far unused) grouping level field. When an interrupt
> is presented, that field tells the hypervisor or OS if the interrupt
> is for an individual VP or for a VP-group/crowd. This patch reworks
> the presentation API to allow to set/unset the level when
> raising/accepting an interrupt.
>
> It also renames xive_tctx_ipb_update() to xive_tctx_pipr_update() as
> the IPB is only used for VP-specific target, whereas the PIPR always
> needs to be updated.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
BTW, the series went a bit out of whack. I suspect an older iteration
of patches was left over from a previous git format-patch run, and
git send-email *.patch then picked up some of the old patches because
they had been renamed or reordered.
Don't worry, this has bitten me before. It would be nice if send-email
had some heuristic to sanity-check the metadata in the cover letter and
subject lines and warn about this...
I think I've been able to untangle it.
Thanks,
Nick
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 02/14] ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr()
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr() Michael Kowal
@ 2025-03-10 3:45 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 3:45 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> Renamed function to follow the convention of the other function names.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> include/hw/ppc/xive.h | 16 ++++++++++++----
> hw/intc/xive.c | 22 ++++++----------------
> 2 files changed, 18 insertions(+), 20 deletions(-)
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index ebee982528..41a4263a9d 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -130,11 +130,9 @@
> * TCTX Thread interrupt Context
> *
> *
> - * Copyright (c) 2017-2018, IBM Corporation.
> - *
> - * This code is licensed under the GPL version 2 or later. See the
> - * COPYING file in the top-level directory.
> + * Copyright (c) 2017-2024, IBM Corporation.
> *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
>
> #ifndef PPC_XIVE_H
> @@ -510,6 +508,16 @@ static inline uint8_t xive_priority_to_ipb(uint8_t priority)
> 0 : 1 << (XIVE_PRIORITY_MAX - priority);
> }
>
> +/*
> + * Convert an Interrupt Pending Buffer (IPB) register to a Pending
> + * Interrupt Priority Register (PIPR), which contains the priority of
> + * the most favored pending notification.
> + */
> +static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
> +{
> + return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
> +}
> +
> /*
> * XIVE Thread Interrupt Management Aera (TIMA)
> *
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 245e4d181a..7b06a48139 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -3,8 +3,7 @@
> *
> * Copyright (c) 2017-2018, IBM Corporation.
> *
> - * This code is licensed under the GPL version 2 or later. See the
> - * COPYING file in the top-level directory.
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
>
> #include "qemu/osdep.h"
> @@ -27,15 +26,6 @@
> * XIVE Thread Interrupt Management context
> */
>
> -/*
> - * Convert an Interrupt Pending Buffer (IPB) register to a Pending
> - * Interrupt Priority Register (PIPR), which contains the priority of
> - * the most favored pending notification.
> - */
> -static uint8_t ipb_to_pipr(uint8_t ibp)
> -{
> - return ibp ? clz32((uint32_t)ibp << 24) : 0xff;
> -}
>
> static uint8_t exception_mask(uint8_t ring)
> {
> @@ -159,7 +149,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * Recompute the PIPR based on local pending interrupts. The PHYS
> * ring must take the minimum of both the PHYS and POOL PIPR values.
> */
> - pipr_min = ipb_to_pipr(regs[TM_IPB]);
> + pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> ring_min = ring;
>
> /* PHYS updates also depend on POOL values */
> @@ -169,7 +159,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> /* POOL values only matter if POOL ctx is valid */
> if (pool_regs[TM_WORD2] & 0x80) {
>
> - uint8_t pool_pipr = ipb_to_pipr(pool_regs[TM_IPB]);
> + uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
>
> /*
> * Determine highest priority interrupt and
> @@ -193,7 +183,7 @@ void xive_tctx_ipb_update(XiveTCTX *tctx, uint8_t ring, uint8_t ipb)
> uint8_t *regs = &tctx->regs[ring];
>
> regs[TM_IPB] |= ipb;
> - regs[TM_PIPR] = ipb_to_pipr(regs[TM_IPB]);
> + regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> xive_tctx_notify(tctx, ring);
> }
>
> @@ -841,9 +831,9 @@ void xive_tctx_reset(XiveTCTX *tctx)
> * CPPR is first set.
> */
> tctx->regs[TM_QW1_OS + TM_PIPR] =
> - ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
> + xive_ipb_to_pipr(tctx->regs[TM_QW1_OS + TM_IPB]);
> tctx->regs[TM_QW3_HV_PHYS + TM_PIPR] =
> - ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
> + xive_ipb_to_pipr(tctx->regs[TM_QW3_HV_PHYS + TM_IPB]);
> }
>
> static void xive_tctx_realize(DeviceState *dev, Error **errp)
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 03/14] ppc/xive2: Support group-matching when looking for target
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
@ 2025-03-10 3:52 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 3:52 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> If an END has the 'i' bit set (ignore), then it targets a group of
> VPs. The size of the group depends on the VP index of the target
> (first 0 found when looking at the least significant bits of the
> index) so a mask is applied on the VP index of a running thread to
> know if we have a match.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Looks okay to me. We've iterated a little on some of this code in our
downstream tree but I've not been able to make much progress untangling
that and trying to merge it back, so I think this series is reasonable
as a starting point even if there are a few corner cases to fix later.
It's new functionality that Linux does not use, so existing setups will
not be impacted.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
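As a small worked example of the masking described above (the 0x81
value comes from the qtest added later in the series; the variable
names are only illustrative):

    uint32_t nvt_idx = 0x81;                       /* 0b1000_0001 */
    uint32_t size    = 1 << (ctz32(~nvt_idx) + 1); /* first 0 is bit 1 => group of 4 */
    uint32_t vp_mask = ~(size - 1);                /* match on (cam & vp_mask) == (idx & vp_mask) */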
> ---
> include/hw/ppc/xive.h | 5 +++-
> include/hw/ppc/xive2.h | 1 +
> hw/intc/pnv_xive2.c | 33 ++++++++++++++-------
> hw/intc/xive.c | 56 +++++++++++++++++++++++++-----------
> hw/intc/xive2.c | 65 ++++++++++++++++++++++++++++++------------
> 5 files changed, 114 insertions(+), 46 deletions(-)
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 971da029eb..21ce5a9df3 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -424,6 +424,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
> typedef struct XiveTCTXMatch {
> XiveTCTX *tctx;
> uint8_t ring;
> + bool precluded;
> } XiveTCTXMatch;
>
> #define TYPE_XIVE_PRESENTER "xive-presenter"
> @@ -452,7 +453,9 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv);
> + uint32_t logic_serv, bool *precluded);
> +
> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
>
> /*
> * XIVE Fabric (Interface between Interrupt Controller and Machine)
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 5bccf41159..17c31fcb4b 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -121,6 +121,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 834d32287b..3fb466bb2c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -660,21 +660,34 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> logic_serv);
> }
>
> - /*
> - * Save the context and follow on to catch duplicates,
> - * that we don't support yet.
> - */
> if (ring != -1) {
> - if (match->tctx) {
> + /*
> + * For VP-specific match, finding more than one is a
> + * problem. For group notification, it's possible.
> + */
> + if (!cam_ignore && match->tctx) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
> "thread context NVT %x/%x\n",
> nvt_blk, nvt_idx);
> - return false;
> + /* Should set a FIR if we ever model it */
> + return -1;
> + }
> + /*
> + * For a group notification, we need to know if the
> + * match is precluded first by checking the current
> + * thread priority. If the interrupt can be delivered,
> + * we always notify the first match (for now).
> + */
> + if (cam_ignore &&
> + xive2_tm_irq_precluded(tctx, ring, priority)) {
> + match->precluded = true;
> + } else {
> + if (!match->tctx) {
> + match->ring = ring;
> + match->tctx = tctx;
> + }
> + count++;
> }
> -
> - match->ring = ring;
> - match->tctx = tctx;
> - count++;
> }
> }
> }
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 6e73f7b063..9345cddead 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1671,6 +1671,16 @@ static uint32_t xive_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive_nvt_cam_line(blk, 1 << 7 | (pir & 0x7f));
> }
>
> +uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
> +{
> + /*
> + * Group size is a power of 2. The position of the first 0
> + * (starting with the least significant bits) in the NVP index
> + * gives the size of the group.
> + */
> + return 1 << (ctz32(~nvp_index) + 1);
> +}
> +
> static uint8_t xive_get_group_level(uint32_t nvp_index)
> {
> /* FIXME add crowd encoding */
> @@ -1743,30 +1753,39 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> /*
> * This is our simple Xive Presenter Engine model. It is merged in the
> * Router as it does not require an extra object.
> - *
> - * It receives notification requests sent by the IVRE to find one
> - * matching NVT (or more) dispatched on the processor threads. In case
> - * of a single NVT notification, the process is abbreviated and the
> - * thread is signaled if a match is found. In case of a logical server
> - * notification (bits ignored at the end of the NVT identifier), the
> - * IVPE and IVRE select a winning thread using different filters. This
> - * involves 2 or 3 exchanges on the PowerBus that the model does not
> - * support.
> - *
> - * The parameters represent what is sent on the PowerBus
> */
> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv)
> + uint32_t logic_serv, bool *precluded)
> {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
> - XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
> + XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
> uint8_t group_level;
> int count;
>
> /*
> - * Ask the machine to scan the interrupt controllers for a match
> + * Ask the machine to scan the interrupt controllers for a match.
> + *
> + * For VP-specific notification, we expect at most one match and
> + * one call to the presenters is all we need (abbreviated notify
> + * sequence documented by the architecture).
> + *
> + * For VP-group notification, match_nvt() is the equivalent of the
> + * "histogram" and "poll" commands sent to the power bus to the
> + * presenters. 'count' could be more than one, but we always
> + * select the first match for now. 'precluded' tells if (at least)
> + * one thread matches but can't take the interrupt now because
> + * it's running at a more favored priority. We return the
> + * information to the router so that it can take appropriate
> + * actions (backlog, escalation, broadcast, etc...)
> + *
> + * If we were to implement a better way of dispatching the
> + * interrupt in case of multiple matches (instead of the first
> + * match), we would need a heuristic to elect a thread (for
> + * example, the hardware keeps track of an 'age' in the TIMA) and
> + * a new command to the presenters (the equivalent of the "assign"
> + * power bus command in the documented full notify sequence.
> */
> count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
> priority, logic_serv, &match);
> @@ -1779,6 +1798,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
> xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> + } else {
> + *precluded = match.precluded;
> }
>
> return !!count;
> @@ -1818,7 +1839,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> uint8_t nvt_blk;
> uint32_t nvt_idx;
> XiveNVT nvt;
> - bool found;
> + bool found, precluded;
>
> uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
> uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
> @@ -1901,8 +1922,9 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
> xive_get_field32(END_W7_F0_IGNORE, end.w7),
> priority,
> - xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7));
> -
> + xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> + &precluded);
> + /* we don't support VP-group notification on P9, so precluded is not used */
> /* TODO: Auto EOI. */
>
> if (found) {
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index db372f4b30..2cb03c758e 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -739,6 +739,12 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
> return xrc->write_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, nvgc);
> }
>
> +static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
> + uint32_t vp_mask)
> +{
> + return (cam1 & vp_mask) == (cam2 & vp_mask);
> +}
> +
> /*
> * The thread context register words are in big-endian format.
> */
> @@ -753,44 +759,50 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
> uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
>
> - /*
> - * TODO (PowerNV): ignore mode. The low order bits of the NVT
> - * identifier are ignored in the "CAM" match.
> - */
> + uint32_t vp_mask = 0xFFFFFFFF;
>
> if (format == 0) {
> - if (cam_ignore == true) {
> - /*
> - * F=0 & i=1: Logical server notification (bits ignored at
> - * the end of the NVT identifier)
> - */
> - qemu_log_mask(LOG_UNIMP, "XIVE: no support for LS NVT %x/%x\n",
> - nvt_blk, nvt_idx);
> - return -1;
> + /*
> + * i=0: Specific NVT notification
> + * i=1: VP-group notification (bits ignored at the end of the
> + * NVT identifier)
> + */
> + if (cam_ignore) {
> + vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
> }
>
> - /* F=0 & i=0: Specific NVT notification */
> + /* For VP-group notifications, threads with LGS=0 are excluded */
>
> /* PHYS ring */
> if ((be32_to_cpu(qw3w2) & TM2_QW3W2_VT) &&
> - cam == xive2_tctx_hw_cam_line(xptr, tctx)) {
> + !(cam_ignore && tctx->regs[TM_QW3_HV_PHYS + TM_LGS] == 0) &&
> + xive2_vp_match_mask(cam,
> + xive2_tctx_hw_cam_line(xptr, tctx),
> + vp_mask)) {
> return TM_QW3_HV_PHYS;
> }
>
> /* HV POOL ring */
> if ((be32_to_cpu(qw2w2) & TM2_QW2W2_VP) &&
> - cam == xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2)) {
> + !(cam_ignore && tctx->regs[TM_QW2_HV_POOL + TM_LGS] == 0) &&
> + xive2_vp_match_mask(cam,
> + xive_get_field32(TM2_QW2W2_POOL_CAM, qw2w2),
> + vp_mask)) {
> return TM_QW2_HV_POOL;
> }
>
> /* OS ring */
> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
> - cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) {
> + !(cam_ignore && tctx->regs[TM_QW1_OS + TM_LGS] == 0) &&
> + xive2_vp_match_mask(cam,
> + xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2),
> + vp_mask)) {
> return TM_QW1_OS;
> }
> } else {
> /* F=1 : User level Event-Based Branch (EBB) notification */
>
> + /* FIXME: what if cam_ignore and LGS = 0 ? */
> /* USER ring */
> if ((be32_to_cpu(qw1w2) & TM2_QW1W2_VO) &&
> (cam == xive_get_field32(TM2_QW1W2_OS_CAM, qw1w2)) &&
> @@ -802,6 +814,22 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> return -1;
> }
>
> +bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> +{
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /*
> + * The xive2_presenter_tctx_match() above tells if there's a match
> + * but for VP-group notification, we still need to look at the
> + * priority to know if the thread can take the interrupt now or if
> + * it is precluded.
> + */
> + if (priority < regs[TM_CPPR]) {
> + return false;
> + }
> + return true;
> +}
> +
> static void xive2_router_realize(DeviceState *dev, Error **errp)
> {
> Xive2Router *xrtr = XIVE2_ROUTER(dev);
> @@ -841,7 +869,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> Xive2End end;
> uint8_t priority;
> uint8_t format;
> - bool found;
> + bool found, precluded;
> Xive2Nvp nvp;
> uint8_t nvp_blk;
> uint32_t nvp_idx;
> @@ -922,7 +950,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> found = xive_presenter_notify(xrtr->xfb, format, nvp_blk, nvp_idx,
> xive2_end_is_ignore(&end),
> priority,
> - xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7));
> + xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> + &precluded);
>
> /* TODO: Auto EOI. */
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 05/14] ppc/xive2: Add undelivered group interrupt to backlog
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
@ 2025-03-10 4:07 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 4:07 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> When a group interrupt cannot be delivered, we need to:
> - increment the backlog counter for the group in the NVG table
> (if the END is configured to keep a backlog).
> - start a broadcast operation to set the LSMFB field on matching CPUs
> which can't take the interrupt now because they're running at too
> high a priority.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive.h | 5 ++
> include/hw/ppc/xive2.h | 1 +
> hw/intc/pnv_xive2.c | 42 +++++++++++++++++
> hw/intc/xive2.c | 105 +++++++++++++++++++++++++++++++++++------
> hw/ppc/pnv.c | 22 ++++++++-
> 5 files changed, 159 insertions(+), 16 deletions(-)
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index ce4eb9726b..f443a39cf1 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -442,6 +442,9 @@ struct XivePresenterClass {
> uint32_t logic_serv, XiveTCTXMatch *match);
> bool (*in_kernel)(const XivePresenter *xptr);
> uint32_t (*get_config)(XivePresenter *xptr);
> + int (*broadcast)(XivePresenter *xptr,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + uint8_t priority);
> };
>
> int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> @@ -472,6 +475,8 @@ struct XiveFabricClass {
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, XiveTCTXMatch *match);
> + int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
> + uint8_t priority);
> };
>
> /*
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 65154f78d8..ebf301bb5b 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -120,6 +120,7 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> void xive2_tm_pull_os_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
> +void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 5cdd4fdcc9..41b727d1fb 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -705,6 +705,47 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
> return cfg;
> }
>
> +static int pnv_xive2_broadcast(XivePresenter *xptr,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + uint8_t priority)
> +{
> + PnvXive2 *xive = PNV_XIVE2(xptr);
> + PnvChip *chip = xive->chip;
> + int i, j;
> + bool gen1_tima_os =
> + xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> +
> + for (i = 0; i < chip->nr_cores; i++) {
> + PnvCore *pc = chip->cores[i];
> + CPUCore *cc = CPU_CORE(pc);
> +
> + for (j = 0; j < cc->nr_threads; j++) {
> + PowerPCCPU *cpu = pc->threads[j];
> + XiveTCTX *tctx;
> + int ring;
> +
> + if (!pnv_xive2_is_cpu_enabled(xive, cpu)) {
> + continue;
> + }
> +
> + tctx = XIVE_TCTX(pnv_cpu_state(cpu)->intc);
> +
> + if (gen1_tima_os) {
> + ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
> + nvt_idx, true, 0);
> + } else {
> + ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
> + nvt_idx, true, 0);
> + }
> +
> + if (ring != -1) {
> + xive2_tm_set_lsmfb(tctx, ring, priority);
> + }
> + }
> + }
> + return 0;
> +}
> +
> static uint8_t pnv_xive2_get_block_id(Xive2Router *xrtr)
> {
> return pnv_xive2_block_id(PNV_XIVE2(xrtr));
> @@ -2445,6 +2486,7 @@ static void pnv_xive2_class_init(ObjectClass *klass, void *data)
>
> xpc->match_nvt = pnv_xive2_match_nvt;
> xpc->get_config = pnv_xive2_presenter_get_config;
> + xpc->broadcast = pnv_xive2_broadcast;
> };
>
> static const TypeInfo pnv_xive2_info = {
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index cffcf3ff05..05cb17518d 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -62,6 +62,30 @@ static uint32_t xive2_nvgc_get_backlog(Xive2Nvgc *nvgc, uint8_t priority)
> return val;
> }
>
> +static void xive2_nvgc_set_backlog(Xive2Nvgc *nvgc, uint8_t priority,
> + uint32_t val)
> +{
> + uint8_t *ptr, i;
> + uint32_t shift;
> +
> + if (priority > 7) {
> + return;
> + }
> +
> + if (val > 0xFFFFFF) {
> + val = 0xFFFFFF;
> + }
Could these conditions have asserts or warnings? Seems like we
saturate a counter or silently drop an interrupt if these things
can happen. Can add something later.
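For instance, a guest-error log on those paths could look like this
(sketch only, not what the patch does):

    if (priority > 7) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "XIVE: invalid backlog priority %d\n", priority);
        return;
    }
    if (val > 0xFFFFFF) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "XIVE: backlog counter for priority %d saturated\n",
                      priority);
        val = 0xFFFFFF;
    }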
> + /*
> + * The per-priority backlog counters are 24-bit and the structure
> + * is stored in big endian
> + */
> + ptr = (uint8_t *)&nvgc->w2 + priority * 3;
This fits because the NVGC is 32 bytes, so there are 24 bytes from w2
onwards, and 8 priorities * 3 bytes each is 24. I just expanded the
comment a bit.
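Something along these lines, for example (my wording, not the exact
comment that was added):

    /*
     * The per-priority backlog counters are 24-bit and stored in big
     * endian, packed from w2: 8 priorities * 3 bytes = 24 bytes, which
     * fills the remaining space of the 32-byte NVGC.
     */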
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> + for (i = 0; i < 3; i++, ptr++) {
> + shift = 8 * (2 - i);
> + *ptr = (val >> shift) & 0xFF;
> + }
> +}
> +
> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
> {
> if (!xive2_eas_is_valid(eas)) {
> @@ -830,6 +854,19 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> return true;
> }
>
> +void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority)
> +{
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /*
> + * Called by the router during a VP-group notification when the
> + * thread matches but can't take the interrupt because it's
> + * already running at a more favored priority. It then stores the
> + * new interrupt priority in the LSMFB field.
> + */
> + regs[TM_LSMFB] = priority;
> +}
> +
> static void xive2_router_realize(DeviceState *dev, Error **errp)
> {
> Xive2Router *xrtr = XIVE2_ROUTER(dev);
> @@ -962,10 +999,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> /*
> * If no matching NVP is dispatched on a HW thread :
> * - specific VP: update the NVP structure if backlog is activated
> - * - logical server : forward request to IVPE (not supported)
> + * - VP-group: update the backlog counter for that priority in the NVG
> */
> if (xive2_end_is_backlog(&end)) {
> - uint8_t ipb;
>
> if (format == 1) {
> qemu_log_mask(LOG_GUEST_ERROR,
> @@ -974,19 +1010,58 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> return;
> }
>
> - /*
> - * Record the IPB in the associated NVP structure for later
> - * use. The presenter will resend the interrupt when the vCPU
> - * is dispatched again on a HW thread.
> - */
> - ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
> - xive_priority_to_ipb(priority);
> - nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
> - xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> -
> - /*
> - * On HW, follows a "Broadcast Backlog" to IVPEs
> - */
> + if (!xive2_end_is_ignore(&end)) {
> + uint8_t ipb;
> + /*
> + * Record the IPB in the associated NVP structure for later
> + * use. The presenter will resend the interrupt when the vCPU
> + * is dispatched again on a HW thread.
> + */
> + ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2) |
> + xive_priority_to_ipb(priority);
> + nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
> + xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> + } else {
> + Xive2Nvgc nvg;
> + uint32_t backlog;
> +
> + /* For groups, the per-priority backlog counters are in the NVG */
> + if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no NVG %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + if (!xive2_nvgc_is_valid(&nvg)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVG %x/%x is invalid\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + /*
> + * Increment the backlog counter for that priority.
> + * For the precluded case, we only call broadcast the
> + * first time the counter is incremented. broadcast will
> + * set the LSMFB field of the TIMA of relevant threads so
> + * that they know an interrupt is pending.
> + */
> + backlog = xive2_nvgc_get_backlog(&nvg, priority) + 1;
> + xive2_nvgc_set_backlog(&nvg, priority, backlog);
> + xive2_router_write_nvgc(xrtr, false, nvp_blk, nvp_idx, &nvg);
> +
> + if (precluded && backlog == 1) {
> + XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
> + xfc->broadcast(xrtr->xfb, nvp_blk, nvp_idx, priority);
> +
> + if (!xive2_end_is_precluded_escalation(&end)) {
> + /*
> + * The interrupt will be picked up when the
> + * matching thread lowers its priority level
> + */
> + return;
> + }
> + }
> + }
> }
>
> do_escalation:
> diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
> index f0f0d7567d..7c11143749 100644
> --- a/hw/ppc/pnv.c
> +++ b/hw/ppc/pnv.c
> @@ -1,7 +1,9 @@
> /*
> * QEMU PowerPC PowerNV machine model
> *
> - * Copyright (c) 2016, IBM Corporation.
> + * Copyright (c) 2016-2024, IBM Corporation.
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> *
> * This library is free software; you can redistribute it and/or
> * modify it under the terms of the GNU Lesser General Public
> @@ -2639,6 +2641,23 @@ static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
> return total_count;
> }
>
> +static int pnv10_xive_broadcast(XiveFabric *xfb,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + uint8_t priority)
> +{
> + PnvMachineState *pnv = PNV_MACHINE(xfb);
> + int i;
> +
> + for (i = 0; i < pnv->num_chips; i++) {
> + Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> + XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
> + XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> +
> + xpc->broadcast(xptr, nvt_blk, nvt_idx, priority);
> + }
> + return 0;
> +}
> +
> static bool pnv_machine_get_big_core(Object *obj, Error **errp)
> {
> PnvMachineState *pnv = PNV_MACHINE(obj);
> @@ -2772,6 +2791,7 @@ static void pnv_machine_p10_common_class_init(ObjectClass *oc, void *data)
> pmc->dt_power_mgt = pnv_dt_power_mgt;
>
> xfc->match_nvt = pnv10_xive_match_nvt;
> + xfc->broadcast = pnv10_xive_broadcast;
>
> machine_class_allow_dynamic_sysbus_dev(mc, TYPE_PNV_PHB);
> }
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 07/14] ppc/xive2: Process group backlog when updating the CPPR
2024-12-10 0:05 ` [PATCH v2 07/14] " Michael Kowal
@ 2025-03-10 4:35 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 4:35 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> When the hypervisor or OS pushes a new value to the CPPR, if the LSMFB
> value is lower than the new CPPR value, there could be a pending group
> interrupt in the backlog, so it needs to be scanned.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
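The condition described above boils down to something like the
following (sketch only, the scan helper name is hypothetical):

    if (regs[TM_LSMFB] < cppr) {
        /* a group interrupt may be pending in the backlog, scan it */
        xive2_tctx_process_group_backlog(tctx, ring);  /* hypothetical */
    }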
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 08/14] qtest/xive: Add group-interrupt test
2024-12-10 0:05 ` [PATCH v2 08/14] qtest/xive: Add group-interrupt test Michael Kowal
@ 2025-03-10 4:46 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 4:46 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> Add XIVE2 tests for group interrupts and group interrupts that have
> been backlogged.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> tests/qtest/pnv-xive2-test.c | 160 +++++++++++++++++++++++++++++++++++
> 1 file changed, 160 insertions(+)
>
> diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
> index dd19e88861..a4d06550ee 100644
> --- a/tests/qtest/pnv-xive2-test.c
> +++ b/tests/qtest/pnv-xive2-test.c
> @@ -2,6 +2,8 @@
> * QTest testcase for PowerNV 10 interrupt controller (xive2)
> * - Test irq to hardware thread
> * - Test 'Pull Thread Context to Odd Thread Reporting Line'
> + * - Test irq to hardware group
> + * - Test irq to hardware group going through backlog
> *
> * Copyright (c) 2024, IBM Corporation.
> *
> @@ -315,6 +317,158 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
> word2 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD2);
> g_assert_cmphex(xive_get_field32(TM_QW3W2_VT, word2), ==, 0);
> }
> +
> +static void test_hw_group_irq(QTestState *qts)
> +{
> + uint32_t irq = 100;
> + uint32_t irq_data = 0xdeadbeef;
> + uint32_t end_index = 23;
> + uint32_t chosen_one;
> + uint32_t target_nvp = 0x81; /* group size = 4 */
> + uint8_t priority = 6;
> + uint32_t reg32;
> + uint16_t reg16;
> + uint8_t pq, nsr, cppr;
> +
> + printf("# ============================================================\n");
> + printf("# Testing irq %d to hardware group of size 4\n", irq);
> +
> + /* irq config */
> + set_eas(qts, irq, end_index, irq_data);
> + set_end(qts, end_index, target_nvp, priority, true /* group */);
> +
> + /* enable and trigger irq */
> + get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
> + set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
> +
> + /* check irq is raised on cpu */
> + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
> + g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
> +
> + /* find the targeted vCPU */
> + for (chosen_one = 0; chosen_one < SMT; chosen_one++) {
> + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + if (nsr == 0x82) {
> + break;
> + }
> + }
> + g_assert_cmphex(chosen_one, <, SMT);
> + cppr = (reg32 >> 16) & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x82);
> + g_assert_cmphex(cppr, ==, 0xFF);
> +
> + /* ack the irq */
> + reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
> + nsr = reg16 >> 8;
> + cppr = reg16 & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x82);
> + g_assert_cmphex(cppr, ==, priority);
> +
> + /* check irq data is what was configured */
> + reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
> + g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
> +
> + /* End Of Interrupt */
> + set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
> + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
> + g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
> +
> + /* reset CPPR */
> + set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
> + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x00);
> + g_assert_cmphex(cppr, ==, 0xFF);
> +}
> +
> +static void test_hw_group_irq_backlog(QTestState *qts)
> +{
> + uint32_t irq = 31;
> + uint32_t irq_data = 0x01234567;
> + uint32_t end_index = 129;
> + uint32_t target_nvp = 0x81; /* group size = 4 */
> + uint32_t chosen_one = 3;
> + uint8_t blocking_priority, priority = 3;
> + uint32_t reg32;
> + uint16_t reg16;
> + uint8_t pq, nsr, cppr, lsmfb, i;
> +
> + printf("# ============================================================\n");
> + printf("# Testing irq %d to hardware group of size 4 going through " \
> + "backlog\n",
> + irq);
> +
> + /*
> + * set current priority of all threads in the group to something
> + * higher than what we're about to trigger
> + */
> + blocking_priority = priority - 1;
> + for (i = 0; i < SMT; i++) {
> + set_tima8(qts, i, TM_QW3_HV_PHYS + TM_CPPR, blocking_priority);
> + }
> +
> + /* irq config */
> + set_eas(qts, irq, end_index, irq_data);
> + set_end(qts, end_index, target_nvp, priority, true /* group */);
> +
> + /* enable and trigger irq */
> + get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
> + set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
> +
> + /* check irq is raised on cpu */
> + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
> + g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
> +
> + /* check no interrupt is pending on the 2 possible targets */
> + for (i = 0; i < SMT; i++) {
> + reg32 = get_tima32(qts, i, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + lsmfb = reg32 & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x0);
> + g_assert_cmphex(cppr, ==, blocking_priority);
> + g_assert_cmphex(lsmfb, ==, priority);
> + }
> +
> + /* lower priority of one thread */
> + set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, priority + 1);
> +
> + /* check backlogged interrupt is presented */
> + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x82);
> + g_assert_cmphex(cppr, ==, priority + 1);
> +
> + /* ack the irq */
> + reg16 = get_tima16(qts, chosen_one, TM_SPC_ACK_HV_REG);
> + nsr = reg16 >> 8;
> + cppr = reg16 & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x82);
> + g_assert_cmphex(cppr, ==, priority);
> +
> + /* check irq data is what was configured */
> + reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
> + g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
> +
> + /* End Of Interrupt */
> + set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
> + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
> + g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
> +
> + /* reset CPPR */
> + set_tima8(qts, chosen_one, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
> + reg32 = get_tima32(qts, chosen_one, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + lsmfb = reg32 & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x00);
> + g_assert_cmphex(cppr, ==, 0xFF);
> + g_assert_cmphex(lsmfb, ==, 0xFF);
> +}
> +
> static void test_xive(void)
> {
> QTestState *qts;
> @@ -330,6 +484,12 @@ static void test_xive(void)
> /* omit reset_state here and use settings from test_hw_irq */
> test_pull_thread_ctx_to_odd_thread_cl(qts);
>
> + reset_state(qts);
> + test_hw_group_irq(qts);
> +
> + reset_state(qts);
> + test_hw_group_irq_backlog(qts);
> +
> reset_state(qts);
> test_flush_sync_inject(qts);
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 09/14] ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
@ 2025-03-10 5:10 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 5:10 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> Add support for the NVPG and NVC BARs. Access to the BAR pages will
> cause backlog counter operations to either increment or decriment
> the counter.
>
> Also added qtests for the same.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive2.h | 9 ++
> include/hw/ppc/xive2_regs.h | 3 +
> tests/qtest/pnv-xive2-common.h | 1 +
> hw/intc/pnv_xive2.c | 80 +++++++++++++---
> hw/intc/xive2.c | 87 +++++++++++++++++
> tests/qtest/pnv-xive2-nvpg_bar.c | 154 +++++++++++++++++++++++++++++++
> tests/qtest/pnv-xive2-test.c | 3 +
> hw/intc/trace-events | 4 +
> tests/qtest/meson.build | 3 +-
> 9 files changed, 329 insertions(+), 15 deletions(-)
> create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
>
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index fc7422fea7..c07e23e1d3 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -90,6 +90,15 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint32_t logic_serv);
>
> +uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
> + uint8_t blk, uint32_t idx,
> + uint16_t offset);
> +
> +uint64_t xive2_presenter_nvgc_backlog_op(XivePresenter *xptr,
> + bool crowd,
> + uint8_t blk, uint32_t idx,
> + uint16_t offset, uint16_t val);
> +
> /*
> * XIVE2 END ESBs (POWER10)
> */
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index e88d6eab1e..9bcf7a8a6f 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -233,4 +233,7 @@ typedef struct Xive2Nvgc {
> void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
> GString *buf);
>
> +#define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
> +#define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
> +
> #endif /* PPC_XIVE2_REGS_H */
> diff --git a/tests/qtest/pnv-xive2-common.h b/tests/qtest/pnv-xive2-common.h
> index 9ae34771aa..2077c05ebc 100644
> --- a/tests/qtest/pnv-xive2-common.h
> +++ b/tests/qtest/pnv-xive2-common.h
> @@ -107,5 +107,6 @@ extern void set_end(QTestState *qts, uint32_t index, uint32_t nvp_index,
>
>
> void test_flush_sync_inject(QTestState *qts);
> +void test_nvpg_bar(QTestState *qts);
>
> #endif /* TEST_PNV_XIVE2_COMMON_H */
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 41b727d1fb..54abfe3947 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -2202,21 +2202,40 @@ static const MemoryRegionOps pnv_xive2_tm_ops = {
> },
> };
>
> -static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr offset,
> +static uint64_t pnv_xive2_nvc_read(void *opaque, hwaddr addr,
> unsigned size)
> {
> PnvXive2 *xive = PNV_XIVE2(opaque);
> + XivePresenter *xptr = XIVE_PRESENTER(xive);
> + uint32_t page = addr >> xive->nvpg_shift;
> + uint16_t op = addr & 0xFFF;
> + uint8_t blk = pnv_xive2_block_id(xive);
>
> - xive2_error(xive, "NVC: invalid read @%"HWADDR_PRIx, offset);
> - return -1;
> + if (size != 2) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc load size %d\n",
> + size);
> + return -1;
> + }
> +
> + return xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, 1);
> }
>
> -static void pnv_xive2_nvc_write(void *opaque, hwaddr offset,
> +static void pnv_xive2_nvc_write(void *opaque, hwaddr addr,
> uint64_t val, unsigned size)
> {
> PnvXive2 *xive = PNV_XIVE2(opaque);
> + XivePresenter *xptr = XIVE_PRESENTER(xive);
> + uint32_t page = addr >> xive->nvc_shift;
> + uint16_t op = addr & 0xFFF;
> + uint8_t blk = pnv_xive2_block_id(xive);
>
> - xive2_error(xive, "NVC: invalid write @%"HWADDR_PRIx, offset);
> + if (size != 1) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid nvc write size %d\n",
> + size);
> + return;
> + }
It would be nice to convert these accessors to _with_attrs() variants
that can report access errors like this back through the memory
subsystem. I guess that's something for the todo list rather than an
issue with this patch in particular.
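For reference, a rough sketch of such a conversion for the NVC load
path (function name is made up; the body mirrors pnv_xive2_nvc_read()
above and errors are returned as MemTxResult):

    static MemTxResult pnv_xive2_nvc_read_attrs(void *opaque, hwaddr addr,
                                                uint64_t *data, unsigned size,
                                                MemTxAttrs attrs)
    {
        PnvXive2 *xive = PNV_XIVE2(opaque);
        XivePresenter *xptr = XIVE_PRESENTER(xive);
        uint32_t page = addr >> xive->nvpg_shift;
        uint16_t op = addr & 0xFFF;
        uint8_t blk = pnv_xive2_block_id(xive);

        if (size != 2) {
            return MEMTX_ERROR;   /* reported back through the memory API */
        }
        *data = xive2_presenter_nvgc_backlog_op(xptr, true, blk, page, op, 1);
        return MEMTX_OK;
    }

It would then be hooked up with .read_with_attrs (and a matching
.write_with_attrs) in the corresponding MemoryRegionOps.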
> + /*
> + * op:
> + * 0b00 => increment
> + * 0b01 => decrement
> + * 0b1- => read
> + */
Could use define or enum for these like the qtest has...
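For example (names made up here, loosely mirroring the qtest's
backlog_op enum; the hardware field itself is only 2 bits):

    typedef enum {
        NVx_BACKLOG_OP_INCR = 0b00,
        NVx_BACKLOG_OP_DECR = 0b01,
        NVx_BACKLOG_OP_READ = 0b10,   /* 0b1x: read only */
    } Xive2BacklogOp;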
> + if (op == 0b00 || op == 0b01) {
> + if (op == 0b00) {
> + count += val;
> + } else {
> + if (count > val) {
> + count -= val;
> + } else {
> + count = 0;
> + }
> + }
> + xive2_nvgc_set_backlog(&nvgc, priority, count);
> + xive2_router_write_nvgc(xrtr, crowd, blk, idx, &nvgc);
> + }
> + trace_xive_nvgc_backlog_op(crowd, blk, idx, op, priority, old_count);
> + return old_count;
> +}
> +
> +uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
> + uint8_t blk, uint32_t idx,
> + uint16_t offset)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint8_t priority = GETFIELD(NVx_BACKLOG_PRIO, offset);
> + uint8_t op = GETFIELD(NVx_BACKLOG_OP, offset);
> + Xive2Nvp nvp;
> + uint8_t ipb, old_ipb, rc;
> +
> + if (xive2_router_get_nvp(xrtr, blk, idx, &nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n", blk, idx);
> + return -1;
> + }
> + if (!xive2_nvp_is_valid(&nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVP %x/%x\n", blk, idx);
> + return -1;
> + }
> +
> + old_ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
> + ipb = old_ipb;
> + /*
> + * op:
> + * 0b00 => set priority bit
> + * 0b01 => reset priority bit
> + * 0b1- => read
> + */
> + if (op == 0b00 || op == 0b01) {
> + if (op == 0b00) {
> + ipb |= xive_priority_to_ipb(priority);
> + } else {
> + ipb &= ~xive_priority_to_ipb(priority);
> + }
> + nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, ipb);
> + xive2_router_write_nvp(xrtr, blk, idx, &nvp, 2);
> + }
> + rc = !!(old_ipb & xive_priority_to_ipb(priority));
> + trace_xive_nvp_backlog_op(blk, idx, op, priority, rc);
> + return rc;
> +}
> +
> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
> {
> if (!xive2_eas_is_valid(eas)) {
> diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c
> new file mode 100644
> index 0000000000..10d4962d1e
> --- /dev/null
> +++ b/tests/qtest/pnv-xive2-nvpg_bar.c
> @@ -0,0 +1,154 @@
> +/*
> + * QTest testcase for PowerNV 10 interrupt controller (xive2)
> + * - Test NVPG BAR MMIO operations
> + *
> + * Copyright (c) 2024, IBM Corporation.
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later. See the COPYING file in the top-level directory.
> + */
> +#include "qemu/osdep.h"
> +#include "libqtest.h"
> +
> +#include "pnv-xive2-common.h"
> +
> +#define NVPG_BACKLOG_OP_SHIFT 10
> +#define NVPG_BACKLOG_PRIO_SHIFT 4
> +
> +#define XIVE_PRIORITY_MAX 7
> +
> +enum NVx {
> + NVP,
> + NVG,
> + NVC
> +};
> +
> +typedef enum {
> + INCR_STORE = 0b100,
> + INCR_LOAD = 0b000,
> + DECR_STORE = 0b101,
> + DECR_LOAD = 0b001,
> + READ_x = 0b010,
> + READ_y = 0b011,
> +} backlog_op;
> +
> +static uint32_t nvpg_backlog_op(QTestState *qts, backlog_op op,
> + enum NVx type, uint64_t index,
> + uint8_t priority, uint8_t delta)
> +{
> + uint64_t addr, offset;
> + uint32_t count = 0;
> +
> + switch (type) {
> + case NVP:
> + addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1));
> + break;
> + case NVG:
> + addr = XIVE_NVPG_ADDR + (index << (XIVE_PAGE_SHIFT + 1)) +
> + (1 << XIVE_PAGE_SHIFT);
> + break;
> + case NVC:
> + addr = XIVE_NVC_ADDR + (index << XIVE_PAGE_SHIFT);
> + break;
> + default:
> + g_assert_not_reached();
> + }
> +
> + offset = (op & 0b11) << NVPG_BACKLOG_OP_SHIFT;
> + offset |= priority << NVPG_BACKLOG_PRIO_SHIFT;
> + if (op >> 2) {
> + qtest_writeb(qts, addr + offset, delta);
> + } else {
> + count = qtest_readw(qts, addr + offset);
> + }
> + return count;
> +}
> +
> +void test_nvpg_bar(QTestState *qts)
> +{
> + uint32_t nvp_target = 0x11;
> + uint32_t group_target = 0x17; /* size 16 */
> + uint32_t vp_irq = 33, group_irq = 47;
> + uint32_t vp_end = 3, group_end = 97;
> + uint32_t vp_irq_data = 0x33333333;
> + uint32_t group_irq_data = 0x66666666;
> + uint8_t vp_priority = 0, group_priority = 5;
> + uint32_t vp_count[XIVE_PRIORITY_MAX + 1] = { 0 };
> + uint32_t group_count[XIVE_PRIORITY_MAX + 1] = { 0 };
> + uint32_t count, delta;
> + uint8_t i;
> +
> + printf("# ============================================================\n");
> + printf("# Testing NVPG BAR operations\n");
> +
> + set_nvg(qts, group_target, 0);
> + set_nvp(qts, nvp_target, 0x04);
> + set_nvp(qts, group_target, 0x04);
> +
> + /*
> + * Setup: trigger a VP-specific interrupt and a group interrupt
> + * so that the backlog counters are initialized to something else
> + * than 0 for at least one priority level
> + */
> + set_eas(qts, vp_irq, vp_end, vp_irq_data);
> + set_end(qts, vp_end, nvp_target, vp_priority, false /* group */);
> +
> + set_eas(qts, group_irq, group_end, group_irq_data);
> + set_end(qts, group_end, group_target, group_priority, true /* group */);
> +
> + get_esb(qts, vp_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
> + set_esb(qts, vp_irq, XIVE_TRIGGER_PAGE, 0, 0);
> + vp_count[vp_priority]++;
> +
> + get_esb(qts, group_irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
> + set_esb(qts, group_irq, XIVE_TRIGGER_PAGE, 0, 0);
> + group_count[group_priority]++;
> +
> + /* check the initial counters */
> + for (i = 0; i <= XIVE_PRIORITY_MAX; i++) {
> + count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, i, 0);
> + g_assert_cmpuint(count, ==, vp_count[i]);
> +
> + count = nvpg_backlog_op(qts, READ_y, NVG, group_target, i, 0);
> + g_assert_cmpuint(count, ==, group_count[i]);
> + }
> +
> + /* do a few ops on the VP. Counter can only be 0 and 1 */
> + vp_priority = 2;
> + delta = 7;
> + nvpg_backlog_op(qts, INCR_STORE, NVP, nvp_target, vp_priority, delta);
> + vp_count[vp_priority] = 1;
> + count = nvpg_backlog_op(qts, INCR_LOAD, NVP, nvp_target, vp_priority, 0);
> + g_assert_cmpuint(count, ==, vp_count[vp_priority]);
> + count = nvpg_backlog_op(qts, READ_y, NVP, nvp_target, vp_priority, 0);
> + g_assert_cmpuint(count, ==, vp_count[vp_priority]);
> +
> + count = nvpg_backlog_op(qts, DECR_LOAD, NVP, nvp_target, vp_priority, 0);
> + g_assert_cmpuint(count, ==, vp_count[vp_priority]);
> + vp_count[vp_priority] = 0;
> + nvpg_backlog_op(qts, DECR_STORE, NVP, nvp_target, vp_priority, delta);
> + count = nvpg_backlog_op(qts, READ_x, NVP, nvp_target, vp_priority, 0);
> + g_assert_cmpuint(count, ==, vp_count[vp_priority]);
It is a bit confusing because the NVP ops AFAICS set/clear a priority
bit, but this test uses inc/dec. The comment is there, and set/clear is
basically incrementing/decrementing a saturating 1-bit counter, so it
makes sense, but it might be good to make the terminology here match
what the model uses.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> +
> + /* do a few ops on the group */
> + group_priority = 2;
> + delta = 9;
> + /* can't go negative */
> + nvpg_backlog_op(qts, DECR_STORE, NVG, group_target, group_priority, delta);
> + count = nvpg_backlog_op(qts, READ_y, NVG, group_target, group_priority, 0);
> + g_assert_cmpuint(count, ==, 0);
> + nvpg_backlog_op(qts, INCR_STORE, NVG, group_target, group_priority, delta);
> + group_count[group_priority] += delta;
> + count = nvpg_backlog_op(qts, INCR_LOAD, NVG, group_target,
> + group_priority, delta);
> + g_assert_cmpuint(count, ==, group_count[group_priority]);
> + group_count[group_priority]++;
> +
> + count = nvpg_backlog_op(qts, DECR_LOAD, NVG, group_target,
> + group_priority, delta);
> + g_assert_cmpuint(count, ==, group_count[group_priority]);
> + group_count[group_priority]--;
> + count = nvpg_backlog_op(qts, READ_x, NVG, group_target, group_priority, 0);
> + g_assert_cmpuint(count, ==, group_count[group_priority]);
> +}
> +
> diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
> index a4d06550ee..a0e9f19313 100644
> --- a/tests/qtest/pnv-xive2-test.c
> +++ b/tests/qtest/pnv-xive2-test.c
> @@ -493,6 +493,9 @@ static void test_xive(void)
> reset_state(qts);
> test_flush_sync_inject(qts);
>
> + reset_state(qts);
> + test_nvpg_bar(qts);
> +
> qtest_quit(qts);
> }
>
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index 7435728c51..7f362c38b0 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -285,6 +285,10 @@ xive_tctx_tm_read(uint32_t index, uint64_t offset, unsigned int size, uint64_t v
> xive_presenter_notify(uint8_t nvt_blk, uint32_t nvt_idx, uint8_t ring, uint8_t group_level) "found NVT 0x%x/0x%x ring=0x%x group_level=%d"
> xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x/0x%x @0x%"PRIx64
>
> +# xive2.c
> +xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
> +xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
> +
> # pnv_xive.c
> pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
>
> diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
> index bd41c9da5f..f7da3df24b 100644
> --- a/tests/qtest/meson.build
> +++ b/tests/qtest/meson.build
> @@ -348,7 +348,8 @@ qtests = {
> 'ivshmem-test': [rt, '../../contrib/ivshmem-server/ivshmem-server.c'],
> 'migration-test': migration_files,
> 'pxe-test': files('boot-sector.c'),
> - 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c'),
> + 'pnv-xive2-test': files('pnv-xive2-common.c', 'pnv-xive2-flush-sync.c',
> + 'pnv-xive2-nvpg_bar.c'),
> 'qos-test': [chardev, io, qos_test_ss.apply({}).sources()],
> 'tpm-crb-swtpm-test': [io, tpmemu_files],
> 'tpm-crb-test': [io, tpmemu_files],
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16
2024-12-10 0:05 ` [PATCH v2 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
@ 2025-03-10 5:15 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 5:15 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
This one got folded back into the crowd matching patch, so I
will take that version but LGTM.
Thanks,
Nick
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> XIVE crowd sizes are encoded into a 2-bit field as follows:
> 0: 0b00
> 2: 0b01
> 4: 0b10
> 16: 0b11
>
> A crowd size of 8 is not supported.
>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
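A small sketch of how that 2-bit encoding maps to a number of blocks
(values straight from the table above, helper name made up):

    static uint32_t xive2_crowd_blocks(uint8_t enc)
    {
        /* 0b00 -> 0, 0b01 -> 2, 0b10 -> 4, 0b11 -> 16; 8 has no encoding */
        return enc ? 1u << (enc == 3 ? 4 : enc) : 0;
    }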
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 13/14] pnv/xive: Fix problem with treating NVGC as a NVP
2024-12-10 0:05 ` [PATCH v2 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
@ 2025-03-10 5:19 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 5:19 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> When booting with PHYP, the blk/index for an NVGC was being
> mistakenly treated as the blk/index for an NVP. Renamed
> nvp_blk/nvp_idx throughout the code to nvx_blk/nvx_idx to prevent
> confusion in the future. We now also delay loading the NVP until the
> point where we know that the block and index actually point to an NVP.
>
> Suggested-by: Michael Kowal <kowal@us.ibm.com>
> Fixes: ("ppc/xive2: Support crowd-matching when looking for target")
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
I think this one should be folded into previous patches.
Thanks,
Nick
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 10/14] ppc/xive2: Support crowd-matching when looking for target
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
@ 2025-03-10 7:31 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 7:31 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> XIVE crowd sizes are encoded into a 2-bit field as follows:
> 0: 0b00
> 2: 0b01
> 4: 0b10
> 16: 0b11
>
> A crowd size of 8 is not supported.
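A minimal sketch of that encoding as a helper (the function name is hypothetical,
not part of the patch):

    /* Hypothetical helper illustrating the 2-bit crowd-size encoding
     * described above; returns -1 for the unsupported size of 8. */
    static int crowd_size_to_field(unsigned int size)
    {
        switch (size) {
        case 0:  return 0b00;
        case 2:  return 0b01;
        case 4:  return 0b10;
        case 16: return 0b11;
        default: return -1; /* size 8 (or anything else) is invalid */
        }
    }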
>
> If an END is defined with the 'crowd' bit set, then a target can be
> running on different blocks. It means that some bits from the block
> VP are masked when looking for a match. It is similar to groups, but
> on the block instead of the VP index.
>
> Most of the changes are due to passing the extra argument 'crowd' all
> the way to the function checking for matches.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive.h | 10 +++---
> include/hw/ppc/xive2.h | 3 +-
> hw/intc/pnv_xive.c | 10 +++---
> hw/intc/pnv_xive2.c | 12 +++----
> hw/intc/spapr_xive.c | 8 ++---
> hw/intc/xive.c | 40 ++++++++++++++++++----
> hw/intc/xive2.c | 78 +++++++++++++++++++++++++++++++++---------
> hw/ppc/pnv.c | 15 ++++----
> hw/ppc/spapr.c | 7 ++--
> 9 files changed, 131 insertions(+), 52 deletions(-)
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index f443a39cf1..8317fde0db 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -438,13 +438,13 @@ struct XivePresenterClass {
> InterfaceClass parent;
> int (*match_nvt)(XivePresenter *xptr, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, XiveTCTXMatch *match);
> bool (*in_kernel)(const XivePresenter *xptr);
> uint32_t (*get_config)(XivePresenter *xptr);
> int (*broadcast)(XivePresenter *xptr,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - uint8_t priority);
> + bool crowd, bool cam_ignore, uint8_t priority);
> };
>
> int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> @@ -453,7 +453,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> bool cam_ignore, uint32_t logic_serv);
> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, bool *precluded);
>
> uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
> @@ -473,10 +473,10 @@ struct XiveFabricClass {
> InterfaceClass parent;
> int (*match_nvt)(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, XiveTCTXMatch *match);
> int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
> - uint8_t priority);
> + bool crowd, bool cam_ignore, uint8_t priority);
> };
>
> /*
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index c07e23e1d3..8cdf819174 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -88,7 +88,8 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
> int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint32_t logic_serv);
> + bool crowd, bool cam_ignore,
> + uint32_t logic_serv);
>
> uint64_t xive2_presenter_nvp_backlog_op(XivePresenter *xptr,
> uint8_t blk, uint32_t idx,
> diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
> index 5bacbce6a4..d4796ab5a6 100644
> --- a/hw/intc/pnv_xive.c
> +++ b/hw/intc/pnv_xive.c
> @@ -1,10 +1,9 @@
> /*
> * QEMU PowerPC XIVE interrupt controller model
> *
> - * Copyright (c) 2017-2019, IBM Corporation.
> + * Copyright (c) 2017-2024, IBM Corporation.
> *
> - * This code is licensed under the GPL version 2 or later. See the
> - * COPYING file in the top-level directory.
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
>
> #include "qemu/osdep.h"
> @@ -473,7 +472,7 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
>
> static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, XiveTCTXMatch *match)
> {
> PnvXive *xive = PNV_XIVE(xptr);
> @@ -500,7 +499,8 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> * Check the thread context CAM lines and record matches.
> */
> ring = xive_presenter_tctx_match(xptr, tctx, format, nvt_blk,
> - nvt_idx, cam_ignore, logic_serv);
> + nvt_idx, cam_ignore,
> + logic_serv);
> /*
> * Save the context and follow on to catch duplicates, that we
> * don't support yet.
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 54abfe3947..91f3514f93 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -624,7 +624,7 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
>
> static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, XiveTCTXMatch *match)
> {
> PnvXive2 *xive = PNV_XIVE2(xptr);
> @@ -655,8 +655,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> logic_serv);
> } else {
> ring = xive2_presenter_tctx_match(xptr, tctx, format, nvt_blk,
> - nvt_idx, cam_ignore,
> - logic_serv);
> + nvt_idx, crowd, cam_ignore,
> + logic_serv);
> }
>
> if (ring != -1) {
> @@ -707,7 +707,7 @@ static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
>
> static int pnv_xive2_broadcast(XivePresenter *xptr,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - uint8_t priority)
> + bool crowd, bool ignore, uint8_t priority)
> {
> PnvXive2 *xive = PNV_XIVE2(xptr);
> PnvChip *chip = xive->chip;
> @@ -732,10 +732,10 @@ static int pnv_xive2_broadcast(XivePresenter *xptr,
>
> if (gen1_tima_os) {
> ring = xive_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
> - nvt_idx, true, 0);
> + nvt_idx, ignore, 0);
> } else {
> ring = xive2_presenter_tctx_match(xptr, tctx, 0, nvt_blk,
> - nvt_idx, true, 0);
> + nvt_idx, crowd, ignore, 0);
> }
>
> if (ring != -1) {
> diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
> index 283a6b8fd2..0477fdd594 100644
> --- a/hw/intc/spapr_xive.c
> +++ b/hw/intc/spapr_xive.c
> @@ -1,10 +1,9 @@
> /*
> * QEMU PowerPC sPAPR XIVE interrupt controller model
> *
> - * Copyright (c) 2017-2018, IBM Corporation.
> + * Copyright (c) 2017-2024, IBM Corporation.
> *
> - * This code is licensed under the GPL version 2 or later. See the
> - * COPYING file in the top-level directory.
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
>
> #include "qemu/osdep.h"
> @@ -431,7 +430,8 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
>
> static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore,
> + uint8_t priority,
> uint32_t logic_serv, XiveTCTXMatch *match)
> {
> CPUState *cs;
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 308de5aefc..97d1c42bb2 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1667,10 +1667,37 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
> return 1 << (ctz32(~nvp_index) + 1);
> }
>
> -static uint8_t xive_get_group_level(uint32_t nvp_index)
> +static uint8_t xive_get_group_level(bool crowd, bool ignore,
> + uint32_t nvp_blk, uint32_t nvp_index)
> {
> - /* FIXME add crowd encoding */
> - return ctz32(~nvp_index) + 1;
> + uint8_t level = 0;
> +
> + if (crowd) {
> + /* crowd level is bit position of first 0 from the right in nvp_blk */
> + level = ctz32(~nvp_blk) + 1;
> +
> + /*
> + * Supported crowd sizes are 2^1, 2^2, and 2^4. 2^3 is not supported.
> + * HW will encode level 4 as the value 3. See xive2_pgofnext().
> + */
> + switch (level) {
> + case 1:
> + case 2:
> + break;
> + case 4:
> + level = 3;
> + break;
> + default:
> + g_assert_not_reached();
> + }
> +
> + /* Crowd level bits reside in upper 2 bits of the 6 bit group level */
> + level <<= 4;
> + }
> + if (ignore) {
> + level |= (ctz32(~nvp_index) + 1) & 0b1111;
> + }
> + return level;
> }
>
> /*
Crowd implies ignore, I think? So it might read better if the if (ignore) {
branch was at the top level, with the bottom bits calculated first, and then
if (crowd) { nested inside that branch.
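For illustration, a minimal sketch of that restructuring, reusing the helpers
from the patch (a sketch of the suggestion only, not the merged code):

    static uint8_t xive_get_group_level(bool crowd, bool ignore,
                                        uint32_t nvp_blk, uint32_t nvp_index)
    {
        uint8_t level = 0;

        if (ignore) {
            /* group level: position of the first 0 from the right in the index */
            level = (ctz32(~nvp_index) + 1) & 0b1111;

            if (crowd) {
                /* crowd level: same encoding, applied to the block number */
                uint8_t crowd_level = ctz32(~nvp_blk) + 1;

                switch (crowd_level) {
                case 1:
                case 2:
                    break;
                case 4:
                    crowd_level = 3; /* HW encodes a crowd size of 16 as 0b11 */
                    break;
                default:
                    g_assert_not_reached();
                }
                /* crowd bits live in the upper 2 bits of the 6-bit level */
                level |= crowd_level << 4;
            }
        }
        return level;
    }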
> @@ -1742,7 +1769,7 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> */
> bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint8_t priority,
> + bool crowd, bool cam_ignore, uint8_t priority,
> uint32_t logic_serv, bool *precluded)
> {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
> @@ -1773,7 +1800,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> * a new command to the presenters (the equivalent of the "assign"
> * power bus command in the documented full notify sequence.
> */
> - count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
> + count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> priority, logic_serv, &match);
> if (count < 0) {
> return false;
> @@ -1781,7 +1808,7 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
>
> /* handle CPU exception delivery */
> if (count) {
> - group_level = cam_ignore ? xive_get_group_level(nvt_idx) : 0;
> + group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
> xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> } else {
> @@ -1906,6 +1933,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> }
>
> found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
> + false /* crowd */,
> xive_get_field32(END_W7_F0_IGNORE, end.w7),
> priority,
> xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index f4621bdd02..20d63e8f6e 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1120,13 +1120,42 @@ static bool xive2_vp_match_mask(uint32_t cam1, uint32_t cam2,
> return (cam1 & vp_mask) == (cam2 & vp_mask);
> }
>
> +static uint8_t xive2_get_vp_block_mask(uint32_t nvt_blk, bool crowd)
> +{
> + uint8_t size, block_mask = 0b1111;
> +
> + /* 3 supported crowd sizes: 2, 4, 16 */
> + if (crowd) {
> + size = xive_get_vpgroup_size(nvt_blk);
> + if (size == 8) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid crowd size of 8\n");
> + return block_mask;
> + }
> + block_mask = ~(size - 1);
> + block_mask &= 0b1111;
This could just be block_mask &= ~(size - 1) ?
> + }
> + return block_mask;
> +}
> +
> +static uint32_t xive2_get_vp_index_mask(uint32_t nvt_index, bool cam_ignore)
> +{
> + uint32_t index_mask = 0xFFFFFF; /* 24 bits */
> +
> + if (cam_ignore) {
> + index_mask = ~(xive_get_vpgroup_size(nvt_index) - 1);
> + index_mask &= 0xFFFFFF;
Similar here.
> + }
> + return index_mask;
> +}
> +
> /*
> * The thread context register words are in big-endian format.
> */
> int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> - bool cam_ignore, uint32_t logic_serv)
> + bool crowd, bool cam_ignore,
> + uint32_t logic_serv)
> {
> uint32_t cam = xive2_nvp_cam_line(nvt_blk, nvt_idx);
> uint32_t qw3w2 = xive_tctx_word2(&tctx->regs[TM_QW3_HV_PHYS]);
> @@ -1134,7 +1163,8 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint32_t qw1w2 = xive_tctx_word2(&tctx->regs[TM_QW1_OS]);
> uint32_t qw0w2 = xive_tctx_word2(&tctx->regs[TM_QW0_USER]);
>
> - uint32_t vp_mask = 0xFFFFFFFF;
> + uint32_t index_mask, vp_mask;
> + uint8_t block_mask;
>
> if (format == 0) {
> /*
> @@ -1142,9 +1172,9 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> * i=1: VP-group notification (bits ignored at the end of the
> * NVT identifier)
> */
> - if (cam_ignore) {
> - vp_mask = ~(xive_get_vpgroup_size(nvt_idx) - 1);
> - }
> + block_mask = xive2_get_vp_block_mask(nvt_blk, crowd);
> + index_mask = xive2_get_vp_index_mask(nvt_idx, cam_ignore);
> + vp_mask = xive2_nvp_cam_line(block_mask, index_mask);
Just a small thing but you could have all these be a single function,
vp_mask = xive2_get_vp_mask(nvt_blk, nvt_idx, crowd, cam_ignore);
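Something along those lines, reusing the helpers from the patch (a sketch only;
the guest-error log for the invalid crowd size of 8 is left out for brevity):

    static uint32_t xive2_get_vp_mask(uint32_t nvt_blk, uint32_t nvt_idx,
                                      bool crowd, bool cam_ignore)
    {
        uint8_t block_mask = 0b1111;
        uint32_t index_mask = 0xFFFFFF; /* 24 bits */

        if (crowd) {
            /* supported crowd sizes: 2, 4, 16 */
            block_mask &= ~(xive_get_vpgroup_size(nvt_blk) - 1);
        }
        if (cam_ignore) {
            index_mask &= ~(xive_get_vpgroup_size(nvt_idx) - 1);
        }
        return xive2_nvp_cam_line(block_mask, index_mask);
    }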
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 11/14] ppc/xive2: Check crowd backlog when scanning group backlog
2024-12-10 0:05 ` [PATCH v2 11/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
@ 2025-03-10 7:32 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 7:32 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Frederic Barrat <fbarrat@linux.ibm.com>
>
> When processing a backlog scan for group interrupts, also take
> into account crowd interrupts.
>
> Signed-off-by: Frederic Barrat <fbarrat@linux.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> include/hw/ppc/xive2_regs.h | 4 ++
> hw/intc/xive2.c | 82 +++++++++++++++++++++++++------------
> 2 files changed, 60 insertions(+), 26 deletions(-)
>
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 9bcf7a8a6f..b11395c563 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -236,4 +236,8 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx,
> #define NVx_BACKLOG_OP PPC_BITMASK(52, 53)
> #define NVx_BACKLOG_PRIO PPC_BITMASK(57, 59)
>
> +/* split the 6-bit crowd/group level */
> +#define NVx_CROWD_LVL(level) ((level >> 4) & 0b11)
> +#define NVx_GROUP_LVL(level) (level & 0b1111)
> +
> #endif /* PPC_XIVE2_REGS_H */
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 20d63e8f6e..c29d8e4831 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -366,6 +366,35 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
> }
>
> +static void xive2_pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx,
> + uint8_t next_level)
> +{
> + uint32_t mask, next_idx;
> + uint8_t next_blk;
> +
> + /*
> + * Adjust the block and index of a VP for the next group/crowd
> + * size (PGofFirst/PGofNext field in the NVP and NVGC structures).
> + *
> + * The 6-bit group level is split into a 2-bit crowd and 4-bit
> + * group levels. Encoding is similar. However, we don't support
> + * crowd size of 8. So a crowd level of 0b11 is bumped to a crowd
> + * size of 16.
> + */
> + next_blk = NVx_CROWD_LVL(next_level);
> + if (next_blk == 3) {
> + next_blk = 4;
> + }
> + mask = (1 << next_blk) - 1;
> + *nvgc_blk &= ~mask;
> + *nvgc_blk |= mask >> 1;
> +
> + next_idx = NVx_GROUP_LVL(next_level);
> + mask = (1 << next_idx) - 1;
> + *nvgc_idx &= ~mask;
> + *nvgc_idx |= mask >> 1;
> +}
> +
> /*
> * Scan the group chain and return the highest priority and group
> * level of pending group interrupts.
> @@ -376,29 +405,28 @@ static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
> uint8_t *out_level)
> {
> Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> - uint32_t nvgc_idx, mask;
> + uint32_t nvgc_idx;
> uint32_t current_level, count;
> - uint8_t prio;
> + uint8_t nvgc_blk, prio;
> Xive2Nvgc nvgc;
>
> for (prio = 0; prio <= XIVE_PRIORITY_MAX; prio++) {
> - current_level = first_group & 0xF;
> + current_level = first_group & 0x3F;
> + nvgc_blk = nvp_blk;
> + nvgc_idx = nvp_idx;
>
> while (current_level) {
> - mask = (1 << current_level) - 1;
> - nvgc_idx = nvp_idx & ~mask;
> - nvgc_idx |= mask >> 1;
> - qemu_log("fxb %s checking backlog for prio %d group idx %x\n",
> - __func__, prio, nvgc_idx);
> -
> - if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
> - nvp_blk, nvgc_idx);
> + xive2_pgofnext(&nvgc_blk, &nvgc_idx, current_level);
> +
> + if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(current_level),
> + nvgc_blk, nvgc_idx, &nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
> + nvgc_blk, nvgc_idx);
> return 0xFF;
> }
> if (!xive2_nvgc_is_valid(&nvgc)) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
> - nvp_blk, nvgc_idx);
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
> + nvgc_blk, nvgc_idx);
> return 0xFF;
> }
>
> @@ -407,7 +435,7 @@ static uint8_t xive2_presenter_backlog_scan(XivePresenter *xptr,
> *out_level = current_level;
> return prio;
> }
> - current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0xF;
> + current_level = xive_get_field32(NVGC2_W0_PGONEXT, nvgc.w0) & 0x3F;
> }
> }
> return 0xFF;
> @@ -419,22 +447,23 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
> uint8_t group_level)
> {
> Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> - uint32_t nvgc_idx, mask, count;
> + uint32_t nvgc_idx, count;
> + uint8_t nvgc_blk;
> Xive2Nvgc nvgc;
>
> - group_level &= 0xF;
> - mask = (1 << group_level) - 1;
> - nvgc_idx = nvp_idx & ~mask;
> - nvgc_idx |= mask >> 1;
> + nvgc_blk = nvp_blk;
> + nvgc_idx = nvp_idx;
> + xive2_pgofnext(&nvgc_blk, &nvgc_idx, group_level);
>
> - if (xive2_router_get_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc)) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVG %x/%x\n",
> - nvp_blk, nvgc_idx);
> + if (xive2_router_get_nvgc(xrtr, NVx_CROWD_LVL(group_level),
> + nvgc_blk, nvgc_idx, &nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVGC %x/%x\n",
> + nvgc_blk, nvgc_idx);
> return;
> }
> if (!xive2_nvgc_is_valid(&nvgc)) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVG %x/%x\n",
> - nvp_blk, nvgc_idx);
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Invalid NVGC %x/%x\n",
> + nvgc_blk, nvgc_idx);
> return;
> }
> count = xive2_nvgc_get_backlog(&nvgc, group_prio);
> @@ -442,7 +471,8 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
> return;
> }
> xive2_nvgc_set_backlog(&nvgc, group_prio, count - 1);
> - xive2_router_write_nvgc(xrtr, false, nvp_blk, nvgc_idx, &nvgc);
> + xive2_router_write_nvgc(xrtr, NVx_CROWD_LVL(group_level),
> + nvgc_blk, nvgc_idx, &nvgc);
> }
>
> /*
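As a concrete check of the PGofNext arithmetic in xive2_pgofnext() above, a
small standalone program (the values are made up for the example, and the QEMU
types/macros are spelled out as plain C):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NVx_CROWD_LVL(level)  (((level) >> 4) & 0b11)
    #define NVx_GROUP_LVL(level)  ((level) & 0b1111)

    /* same adjustment as xive2_pgofnext() in the patch */
    static void pgofnext(uint8_t *nvgc_blk, uint32_t *nvgc_idx, uint8_t next_level)
    {
        uint32_t mask;
        uint8_t next_blk = NVx_CROWD_LVL(next_level);

        if (next_blk == 3) {      /* crowd size 8 unsupported: 0b11 means 16 */
            next_blk = 4;
        }
        mask = (1 << next_blk) - 1;
        *nvgc_blk = (*nvgc_blk & ~mask) | (mask >> 1);

        mask = (1 << NVx_GROUP_LVL(next_level)) - 1;
        *nvgc_idx = (*nvgc_idx & ~mask) | (mask >> 1);
    }

    int main(void)
    {
        uint8_t blk = 0x3;
        uint32_t idx = 0x123;

        /* level 0x15: crowd level 0b01 (crowd of 2), group level 5 (group of 32) */
        pgofnext(&blk, &idx, 0x15);
        printf("next NVGC: blk=0x%x idx=0x%" PRIx32 "\n", blk, idx);
        /* prints: next NVGC: blk=0x2 idx=0x12f */
        return 0;
    }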
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 12/14] pnv/xive: Support ESB Escalation
2024-12-10 0:05 ` [PATCH v2 12/14] pnv/xive: Support ESB Escalation Michael Kowal
@ 2025-03-10 8:07 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 8:07 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> END notification processing has an escalation path. The escalation is
> not always an END escalation but can be an ESB escalation.
>
> Also added a check for 'resume' processing, which logs a message stating
> that it needs to be implemented. This is not needed at this time but is
> part of the END notification processing.
>
> This change was taken from a patch provided by Michael Kowal.
>
> Suggested-by: Michael Kowal <kowal@us.ibm.com>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> include/hw/ppc/xive2.h | 1 +
> include/hw/ppc/xive2_regs.h | 13 +++++---
> hw/intc/xive2.c | 61 +++++++++++++++++++++++++++++--------
> 3 files changed, 58 insertions(+), 17 deletions(-)
>
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 8cdf819174..2436ddb5e5 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -80,6 +80,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
> uint32_t xive2_router_get_config(Xive2Router *xrtr);
>
> void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
>
> /*
> * XIVE2 Presenter (POWER10)
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index b11395c563..164d61e605 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -39,15 +39,18 @@
>
> typedef struct Xive2Eas {
> uint64_t w;
> -#define EAS2_VALID PPC_BIT(0)
> -#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> -#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> -#define EAS2_MASKED PPC_BIT(32) /* Masked */
> -#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> +#define EAS2_VALID PPC_BIT(0)
> +#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
> +#define EAS2_RESUME PPC_BIT(3) /* END Resume (unimp) */
> +#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> +#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> +#define EAS2_MASKED PPC_BIT(32) /* Masked */
> +#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> } Xive2Eas;
>
> #define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
> #define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
> +#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
>
> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index c29d8e4831..44b7743b2b 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1514,18 +1514,39 @@ do_escalation:
> }
> }
>
> - /*
> - * The END trigger becomes an Escalation trigger
> - */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + if (xive2_end_is_escalate_end(&end)) {
> + /*
> + * Perform END Adaptive escalation processing
> + * The END trigger becomes an Escalation trigger
> + */
> + xive2_router_end_notify(xrtr,
> + xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + } /* end END adaptive escalation */
Probably don't need that comment there, it's quite a small block
already with a comment.
> +
> + else {
> + uint32_t lisn; /* Logical Interrupt Source Number */
> +
> + /*
> + * Perform ESB escalation processing
> + * E[N] == 1 --> N
> + * Req[Block] <- E[ESB_Block]
> + * Req[Index] <- E[ESB_Index]
> + * Req[Offset] <- 0x000
> + * Execute <ESB Store> Req command
> + */
> + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
In my XIVE spec, AFAICS the N=0 ESB block/index layout at W4 is
different from the N=1 END block/index. I won't change it since
this looks the same in our downstream, which is tested, so I might
be missing something... Could perhaps use a comment if so.
> +
> + xive2_notify(xrtr, lisn, true /* pq_checked */);
Is that really right? The escalation should bypass the PQ state
machine?
> + }
> +
> + return;
No need for this return.
> }
>
> -void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
This can be static.
> {
> - Xive2Router *xrtr = XIVE2_ROUTER(xn);
> uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
> uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
> Xive2Eas eas;
> @@ -1568,13 +1589,29 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> return;
> }
>
> + /* TODO: add support for EAS resume if ever needed */
Comment is probably unnecessary with the UNIMP log.
> + if (xive2_eas_is_resume(&eas)) {
> + qemu_log_mask(LOG_UNIMP,
> + "XIVE: EAS resume processing unimplemented - LISN %x\n",
> + lisn);
> + return;
> + }
> +
> /*
> * The event trigger becomes an END trigger
> */
> xive2_router_end_notify(xrtr,
> - xive_get_field64(EAS2_END_BLOCK, eas.w),
> - xive_get_field64(EAS2_END_INDEX, eas.w),
> - xive_get_field64(EAS2_END_DATA, eas.w));
> + xive_get_field64(EAS2_END_BLOCK, eas.w),
> + xive_get_field64(EAS2_END_INDEX, eas.w),
> + xive_get_field64(EAS2_END_DATA, eas.w));
> +}
> +
> +void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xn);
> +
> + xive2_notify(xrtr, lisn, pq_checked);
> + return;
Also return unnecessary.
> }
>
> static Property xive2_router_properties[] = {
Thanks,
Nick
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 14/14] qtest/xive: Add test of pool interrupts
2024-12-10 0:05 ` [PATCH v2 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
@ 2025-03-10 8:20 ` Nicholas Piggin
0 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-10 8:20 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Added a new test for pool interrupts. Removed all printfs from the pnv-xive2-* qtests.
>
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
This looks good. I split the printf removal into its own patch.
I would like to see the irq test code merged into one function that
can just select the CAM ring by argument, because it's mostly
duplicated. We should then be able to add an OS ring test with the
same code too. But it's okay for now.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> tests/qtest/pnv-xive2-flush-sync.c | 6 +-
> tests/qtest/pnv-xive2-nvpg_bar.c | 7 +--
> tests/qtest/pnv-xive2-test.c | 98 +++++++++++++++++++++++++++---
> 3 files changed, 94 insertions(+), 17 deletions(-)
>
> diff --git a/tests/qtest/pnv-xive2-flush-sync.c b/tests/qtest/pnv-xive2-flush-sync.c
> index 3b32446adb..142826bad0 100644
> --- a/tests/qtest/pnv-xive2-flush-sync.c
> +++ b/tests/qtest/pnv-xive2-flush-sync.c
> @@ -178,14 +178,14 @@ void test_flush_sync_inject(QTestState *qts)
> int test_nr;
> uint8_t byte;
>
> - printf("# ============================================================\n");
> - printf("# Starting cache flush/queue sync injection tests...\n");
> + g_test_message("=========================================================");
> + g_test_message("Starting cache flush/queue sync injection tests...");
>
> for (test_nr = 0; test_nr < sizeof(xive_inject_tests);
> test_nr++) {
> int op_type = xive_inject_tests[test_nr];
>
> - printf("# Running test %d\n", test_nr);
> + g_test_message("Running test %d", test_nr);
>
> /* start with status byte set to 0 */
> clr_sync(qts, src_pir, ic_topo_id, op_type);
> diff --git a/tests/qtest/pnv-xive2-nvpg_bar.c b/tests/qtest/pnv-xive2-nvpg_bar.c
> index 10d4962d1e..8481a70f22 100644
> --- a/tests/qtest/pnv-xive2-nvpg_bar.c
> +++ b/tests/qtest/pnv-xive2-nvpg_bar.c
> @@ -4,8 +4,7 @@
> *
> * Copyright (c) 2024, IBM Corporation.
> *
> - * This work is licensed under the terms of the GNU GPL, version 2 or
> - * later. See the COPYING file in the top-level directory.
> + * SPDX-License-Identifier: GPL-2.0-or-later
> */
> #include "qemu/osdep.h"
> #include "libqtest.h"
> @@ -78,8 +77,8 @@ void test_nvpg_bar(QTestState *qts)
> uint32_t count, delta;
> uint8_t i;
>
> - printf("# ============================================================\n");
> - printf("# Testing NVPG BAR operations\n");
> + g_test_message("=========================================================");
> + g_test_message("Testing NVPG BAR operations");
>
> set_nvg(qts, group_target, 0);
> set_nvp(qts, nvp_target, 0x04);
> diff --git a/tests/qtest/pnv-xive2-test.c b/tests/qtest/pnv-xive2-test.c
> index a0e9f19313..5313d4ef18 100644
> --- a/tests/qtest/pnv-xive2-test.c
> +++ b/tests/qtest/pnv-xive2-test.c
> @@ -4,6 +4,7 @@
> * - Test 'Pull Thread Context to Odd Thread Reporting Line'
> * - Test irq to hardware group
> * - Test irq to hardware group going through backlog
> + * - Test irq to pool thread
> *
> * Copyright (c) 2024, IBM Corporation.
> *
> @@ -220,8 +221,8 @@ static void test_hw_irq(QTestState *qts)
> uint16_t reg16;
> uint8_t pq, nsr, cppr;
>
> - printf("# ============================================================\n");
> - printf("# Testing irq %d to hardware thread %d\n", irq, target_pir);
> + g_test_message("=========================================================");
> + g_test_message("Testing irq %d to hardware thread %d", irq, target_pir);
>
> /* irq config */
> set_eas(qts, irq, end_index, irq_data);
> @@ -266,6 +267,79 @@ static void test_hw_irq(QTestState *qts)
> g_assert_cmphex(cppr, ==, 0xFF);
> }
>
> +static void test_pool_irq(QTestState *qts)
> +{
> + uint32_t irq = 2;
> + uint32_t irq_data = 0x600d0d06;
> + uint32_t end_index = 5;
> + uint32_t target_pir = 1;
> + uint32_t target_nvp = 0x100 + target_pir;
> + uint8_t priority = 5;
> + uint32_t reg32;
> + uint16_t reg16;
> + uint8_t pq, nsr, cppr, ipb;
> +
> + g_test_message("=========================================================");
> + g_test_message("Testing irq %d to pool thread %d", irq, target_pir);
> +
> + /* irq config */
> + set_eas(qts, irq, end_index, irq_data);
> + set_end(qts, end_index, target_nvp, priority, false /* group */);
> +
> + /* enable and trigger irq */
> + get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_SET_PQ_00);
> + set_esb(qts, irq, XIVE_TRIGGER_PAGE, 0, 0);
> +
> + /* check irq is raised on cpu */
> + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
> + g_assert_cmpuint(pq, ==, XIVE_ESB_PENDING);
> +
> + /* check TIMA values in the PHYS ring (shared by POOL ring) */
> + reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x40);
> + g_assert_cmphex(cppr, ==, 0xFF);
> +
> + /* check TIMA values in the POOL ring */
> + reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + ipb = (reg32 >> 8) & 0xFF;
> + g_assert_cmphex(nsr, ==, 0);
> + g_assert_cmphex(cppr, ==, 0);
> + g_assert_cmphex(ipb, ==, 0x80 >> priority);
> +
> + /* ack the irq */
> + reg16 = get_tima16(qts, target_pir, TM_SPC_ACK_HV_REG);
> + nsr = reg16 >> 8;
> + cppr = reg16 & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x40);
> + g_assert_cmphex(cppr, ==, priority);
> +
> + /* check irq data is what was configured */
> + reg32 = qtest_readl(qts, xive_get_queue_addr(end_index));
> + g_assert_cmphex((reg32 & 0x7fffffff), ==, (irq_data & 0x7fffffff));
> +
> + /* check IPB is cleared in the POOL ring */
> + reg32 = get_tima32(qts, target_pir, TM_QW2_HV_POOL + TM_WORD0);
> + ipb = (reg32 >> 8) & 0xFF;
> + g_assert_cmphex(ipb, ==, 0);
> +
> + /* End Of Interrupt */
> + set_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_STORE_EOI, 0);
> + pq = get_esb(qts, irq, XIVE_EOI_PAGE, XIVE_ESB_GET);
> + g_assert_cmpuint(pq, ==, XIVE_ESB_RESET);
> +
> + /* reset CPPR */
> + set_tima8(qts, target_pir, TM_QW3_HV_PHYS + TM_CPPR, 0xFF);
> + reg32 = get_tima32(qts, target_pir, TM_QW3_HV_PHYS + TM_WORD0);
> + nsr = reg32 >> 24;
> + cppr = (reg32 >> 16) & 0xFF;
> + g_assert_cmphex(nsr, ==, 0x00);
> + g_assert_cmphex(cppr, ==, 0xFF);
> +}
> +
> #define XIVE_ODD_CL 0x80
> static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
> {
> @@ -278,8 +352,9 @@ static void test_pull_thread_ctx_to_odd_thread_cl(QTestState *qts)
> uint32_t cl_word;
> uint32_t word2;
>
> - printf("# ============================================================\n");
> - printf("# Testing 'Pull Thread Context to Odd Thread Reporting Line'\n");
> + g_test_message("=========================================================");
> + g_test_message("Testing 'Pull Thread Context to Odd Thread Reporting " \
> + "Line'");
>
> /* clear odd cache line prior to pull operation */
> memset(cl_pair, 0, sizeof(cl_pair));
> @@ -330,8 +405,8 @@ static void test_hw_group_irq(QTestState *qts)
> uint16_t reg16;
> uint8_t pq, nsr, cppr;
>
> - printf("# ============================================================\n");
> - printf("# Testing irq %d to hardware group of size 4\n", irq);
> + g_test_message("=========================================================");
> + g_test_message("Testing irq %d to hardware group of size 4", irq);
>
> /* irq config */
> set_eas(qts, irq, end_index, irq_data);
> @@ -395,10 +470,10 @@ static void test_hw_group_irq_backlog(QTestState *qts)
> uint16_t reg16;
> uint8_t pq, nsr, cppr, lsmfb, i;
>
> - printf("# ============================================================\n");
> - printf("# Testing irq %d to hardware group of size 4 going through " \
> - "backlog\n",
> - irq);
> + g_test_message("=========================================================");
> + g_test_message("Testing irq %d to hardware group of size 4 going " \
> + "through backlog",
> + irq);
>
> /*
> * set current priority of all threads in the group to something
> @@ -484,6 +559,9 @@ static void test_xive(void)
> /* omit reset_state here and use settings from test_hw_irq */
> test_pull_thread_ctx_to_odd_thread_cl(qts);
>
> + reset_state(qts);
> + test_pool_irq(qts);
> +
> reset_state(qts);
> test_hw_group_irq(qts);
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
` (23 preceding siblings ...)
2024-12-10 0:05 ` [PATCH v2 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
@ 2025-03-11 13:16 ` Nicholas Piggin
24 siblings, 0 replies; 41+ messages in thread
From: Nicholas Piggin @ 2025-03-11 13:16 UTC (permalink / raw)
To: Michael Kowal, qemu-devel
Cc: qemu-ppc, fbarrat, milesg, danielhb413, david, harshpb, thuth,
lvivier, pbonzini
Thanks, I merged this series with some tweaks, except
for patch 12 (Support ESB Escalation), because it has some
outstanding problems, as noted in the comments.
Thanks,
Nick
On Tue Dec 10, 2024 at 10:05 AM AEST, Michael Kowal wrote:
> XIVE2 has the concepts of a Group of interrupts and a Crowd of interrupts
> (where a crowd is a group of Groups). These patch sets are associated with:
> - NVGC tables
> - Group/Crowd level notification
> - Incrementing backlog counters
> - Backlog processing
> - NVPG and NVC Bar MMIO operations
> - Group/Crowd testing
> - ESB Escalation
> - Pool interrupt testing
>
> version 2:
> - Removed printfs from test models and replaced with g_test_message()
> - Updated XIVE copyrights to use:
> SPDX-License-Identifier: GPL-2.0-or-later
> - Set entire NSR to 0, not just fields
> - Moved rename of xive_ipb_to_pipr() into its own patch set 0002
> - Rename xive2_presenter_backlog_check() to
> xive2_presenter_backlog_scan()
> - Squash patch set 11 (crowd size restrictions) into
> patch set 9 (support crowd-matching)
> - Made xive2_notify() a static routine
>
> Frederic Barrat (10):
> ppc/xive2: Update NVP save/restore for group attributes
> ppc/xive2: Add grouping level to notification
> ppc/xive2: Support group-matching when looking for target
> ppc/xive2: Add undelivered group interrupt to backlog
> ppc/xive2: Process group backlog when pushing an OS context
> ppc/xive2: Process group backlog when updating the CPPR
> qtest/xive: Add group-interrupt test
> ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR
> ppc/xive2: Support crowd-matching when looking for target
> ppc/xive2: Check crowd backlog when scanning group backlog
>
> Glenn Miles (3):
> pnv/xive: Support ESB Escalation
> pnv/xive: Fix problem with treating NVGC as a NVP
> qtest/xive: Add test of pool interrupts
>
> Michael Kowal (1):
> ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr()
>
> include/hw/ppc/xive.h | 41 +-
> include/hw/ppc/xive2.h | 25 +-
> include/hw/ppc/xive2_regs.h | 30 +-
> include/hw/ppc/xive_regs.h | 25 +-
> tests/qtest/pnv-xive2-common.h | 1 +
> hw/intc/pnv_xive.c | 10 +-
> hw/intc/pnv_xive2.c | 166 +++++--
> hw/intc/spapr_xive.c | 8 +-
> hw/intc/xive.c | 200 +++++---
> hw/intc/xive2.c | 750 +++++++++++++++++++++++++----
> hw/ppc/pnv.c | 35 +-
> hw/ppc/spapr.c | 7 +-
> tests/qtest/pnv-xive2-flush-sync.c | 6 +-
> tests/qtest/pnv-xive2-nvpg_bar.c | 153 ++++++
> tests/qtest/pnv-xive2-test.c | 249 +++++++++-
> hw/intc/trace-events | 6 +-
> tests/qtest/meson.build | 3 +-
> 17 files changed, 1475 insertions(+), 240 deletions(-)
> create mode 100644 tests/qtest/pnv-xive2-nvpg_bar.c
^ permalink raw reply [flat|nested] 41+ messages in thread
Thread overview: 41+ messages
2024-12-10 0:05 [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Michael Kowal
2024-12-10 0:05 ` [PATCH v2 01/14] ppc/xive2: Update NVP save/restore for group attributes Michael Kowal
2025-03-10 3:22 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive2: Add grouping level to notification Michael Kowal
2025-03-10 3:27 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 02/14] ppc/xive: Rename ipb_to_pipr() to xive_ipb_to_pipr() Michael Kowal
2025-03-10 3:45 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Add grouping level to notification Michael Kowal
2025-03-10 3:43 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 03/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
2025-03-10 3:52 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 04/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
2024-12-10 0:05 ` [PATCH v2 04/14] ppc/xive2: Support group-matching when looking for target Michael Kowal
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Add undelivered group interrupt to backlog Michael Kowal
2025-03-10 4:07 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 05/14] ppc/xive2: Process group backlog when pushing an OS context Michael Kowal
2024-12-10 0:05 ` [PATCH v2 06/14] " Michael Kowal
2024-12-10 0:05 ` [PATCH v2 06/14] ppc/xive2: Process group backlog when updating the CPPR Michael Kowal
2024-12-10 0:05 ` [PATCH v2 07/14] " Michael Kowal
2025-03-10 4:35 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 07/14] qtest/xive: Add group-interrupt test Michael Kowal
2024-12-10 0:05 ` [PATCH v2 08/14] Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
2024-12-10 0:05 ` [PATCH v2 08/14] qtest/xive: Add group-interrupt test Michael Kowal
2025-03-10 4:46 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Add support for MMIO operations on the NVPG/NVC BAR Michael Kowal
2025-03-10 5:10 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 09/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
2024-12-10 0:05 ` [PATCH v2 10/14] ppc/xive2: Support crowd-matching when looking for target Michael Kowal
2025-03-10 7:31 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 11/14] pnv/xive: Only support crowd size of 0, 2, 4 and 16 Michael Kowal
2025-03-10 5:15 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 11/14] ppc/xive2: Check crowd backlog when scanning group backlog Michael Kowal
2025-03-10 7:32 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 12/14] pnv/xive: Support ESB Escalation Michael Kowal
2025-03-10 8:07 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 13/14] pnv/xive: Fix problem with treating NVGC as a NVP Michael Kowal
2025-03-10 5:19 ` Nicholas Piggin
2024-12-10 0:05 ` [PATCH v2 14/14] qtest/xive: Add test of pool interrupts Michael Kowal
2025-03-10 8:20 ` Nicholas Piggin
2025-03-11 13:16 ` [PATCH v2 00/14] XIVE2 changes to support Group and Crowd operations Nicholas Piggin