* [PATCH 00/50] ppc/xive: updates for PowerVM
@ 2025-05-12 3:10 Nicholas Piggin
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
` (51 more replies)
0 siblings, 52 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
These changes get the powernv xive2 model to the point where it is able
to run PowerVM with good stability.
* Various bug fixes, particularly around lost interrupts.
* Major group interrupt work, in particular around redistributing
interrupts. Upstream group support is not in a complete or usable
state as it is.
* Significant context push/pull improvements; in particular, pool and
phys context handling was quite incomplete beyond the trivial OPAL
case that pushes at boot.
* Improved tracing and checking for unimplemented and guest-error situations.
* Various other missing feature support.
The ordering and grouping of patches in the series is not perfect,
because this was ongoing development, and PowerVM only started
to become stable toward the end. I did try to rearrange and improve
things, but some changes were not worth the rebasing cost (e.g., some of
the pool/phys pull redistribution patches should ideally have been
squashed or moved together), so please bear that in mind. Suggestions
for further rearranging the series are fine, but I might just find they
are too much effort to be worthwhile.
Thanks,
Nick
Glenn Miles (12):
ppc/xive2: Fix calculation of END queue sizes
ppc/xive2: Use fair irq target search algorithm
ppc/xive2: Fix irq preempted by lower priority group irq
ppc/xive2: Fix treatment of PIPR in CPPR update
pnv/xive2: Support ESB Escalation
ppc/xive2: add interrupt priority configuration flags
ppc/xive2: Support redistribution of group interrupts
ppc/xive: Add more interrupt notification tracing
ppc/xive2: Improve pool regs variable name
ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
ppc/xive2: Redistribute group interrupt precluded by CPPR update
ppc/xive2: redistribute irqs for pool and phys ctx pull
Michael Kowal (4):
ppc/xive2: Remote VSDs need to match on forwarding address
ppc/xive2: Reset Generation Flipped bit on END Cache Watch
pnv/xive2: Print value in invalid register write logging
pnv/xive2: Permit valid writes to VC/PC Flush Control registers
Nicholas Piggin (34):
ppc/xive: Fix xive trace event output
ppc/xive: Report access size in XIVE TM operation error logs
ppc/xive2: fix context push calculation of IPB priority
ppc/xive: Fix PHYS NSR ring matching
ppc/xive2: Do not present group interrupt on OS-push if precluded by
CPPR
ppc/xive2: Set CPPR delivery should account for group priority
ppc/xive: tctx_notify should clear the precluded interrupt
ppc/xive: Explicitly zero NSR after accepting
ppc/xive: Move NSR decoding into helper functions
ppc/xive: Fix pulling pool and phys contexts
pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
ppc/xive: Change presenter .match_nvt to match not present
ppc/xive2: Redistribute group interrupt preempted by higher priority
interrupt
ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
ppc/xive: Fix high prio group interrupt being preempted by low prio VP
ppc/xive: Split xive recompute from IPB function
ppc/xive: tctx signaling registers rework
ppc/xive: tctx_accept only lower irq line if an interrupt was
presented
ppc/xive: Add xive_tctx_pipr_set() helper function
ppc/xive2: split tctx presentation processing from set CPPR
ppc/xive2: Consolidate presentation processing in context push
ppc/xive2: Avoid needless interrupt re-check on CPPR set
ppc/xive: Assert group interrupts were redistributed
ppc/xive2: implement NVP context save restore for POOL ring
ppc/xive2: Prevent pulling of pool context losing phys interrupt
ppc/xive: Redistribute phys after pulling of pool context
ppc/xive: Check TIMA operations validity
ppc/xive2: Implement pool context push TIMA op
ppc/xive2: redistribute group interrupts on context push
ppc/xive2: Implement set_os_pending TIMA op
ppc/xive2: Implement POOL LGS push TIMA op
ppc/xive2: Implement PHYS ring VP push TIMA op
ppc/xive: Split need_resend into restore_nvp
ppc/xive2: Enable lower level contexts on VP push
hw/intc/pnv_xive.c | 16 +-
hw/intc/pnv_xive2.c | 139 +++++--
hw/intc/pnv_xive2_regs.h | 1 +
hw/intc/spapr_xive.c | 18 +-
hw/intc/trace-events | 12 +-
hw/intc/xive.c | 555 ++++++++++++++++++----------
hw/intc/xive2.c | 717 +++++++++++++++++++++++++++---------
hw/ppc/pnv.c | 48 +--
hw/ppc/spapr.c | 21 +-
include/hw/ppc/xive.h | 66 +++-
include/hw/ppc/xive2.h | 22 +-
include/hw/ppc/xive2_regs.h | 22 +-
12 files changed, 1145 insertions(+), 492 deletions(-)
--
2.47.1
^ permalink raw reply [flat|nested] 192+ messages in thread
* [PATCH 01/50] ppc/xive: Fix xive trace event output
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:26 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs Nicholas Piggin
` (50 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Typo: IBP should be IPB.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/trace-events | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 0ba9a02e73..f77f9733c9 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -274,9 +274,9 @@ kvm_xive_cpu_connect(uint32_t id) "connect CPU%d to KVM device"
kvm_xive_source_reset(uint32_t srcno) "IRQ 0x%x"
# xive.c
-xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
-xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
-xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
+xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
+xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
+xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
--
2.47.1
* [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes Nicholas Piggin
` (49 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Report access size in XIVE TM operation error logs.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 3eb28c2265..80b07a0afe 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -326,7 +326,7 @@ static void xive_tm_raw_write(XiveTCTX *tctx, hwaddr offset, uint64_t value,
*/
if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA @%"
- HWADDR_PRIx"\n", offset);
+ HWADDR_PRIx" size %d\n", offset, size);
return;
}
@@ -357,7 +357,7 @@ static uint64_t xive_tm_raw_read(XiveTCTX *tctx, hwaddr offset, unsigned size)
*/
if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access at TIMA @%"
- HWADDR_PRIx"\n", offset);
+ HWADDR_PRIx" size %d\n", offset, size);
return -1;
}
@@ -688,7 +688,7 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
xto = xive_tm_find_op(tctx->xptr, offset, size, true);
if (!xto) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
- "@%"HWADDR_PRIx"\n", offset);
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
} else {
xto->write_handler(xptr, tctx, offset, value, size);
}
@@ -727,7 +727,7 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
xto = xive_tm_find_op(tctx->xptr, offset, size, false);
if (!xto) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
- "@%"HWADDR_PRIx"\n", offset);
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
return -1;
}
ret = xto->read_handler(xptr, tctx, offset, size);
--
2.47.1
* [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
2025-05-12 3:10 ` [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
` (48 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
The queue size of an Event Notification Descriptor (END)
is determined by its 'cl' and QsZ fields.
If the cl field is 1, the queue size (in bytes) is the size
of a cache line scaled by QsZ, i.e. 128B * 2^QsZ, with QsZ
limited to 4. Otherwise it is 4096B * 2^QsZ, with QsZ limited
to 12.
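The sizing rule above can be sketched as follows (function name and
structure are illustrative stand-ins, not the QEMU helpers):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the END queue sizing rule: cache-line (128B) chunks when
 * 'cl' is set, 4KB chunks otherwise, each scaled by 2^QsZ. */
static uint32_t end_queue_bytes(int cl, uint32_t qsz)
{
    if (cl) {
        assert(qsz <= 4);            /* cl=1 limits QsZ to 4 */
        return 128u << qsz;
    }
    assert(qsz <= 12);               /* cl=0 limits QsZ to 12 */
    return 4096u << qsz;
}
```

So a cl=1 END with QsZ=4 has a 2KB queue, i.e. 512 four-byte entries,
matching the division by sizeof(uint32_t) in the patch below.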
Fixes: f8a233dedf2 ("ppc/xive2: Introduce a XIVE2 core framework")
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive2.c | 25 +++++++++++++++++++------
include/hw/ppc/xive2_regs.h | 1 +
2 files changed, 20 insertions(+), 6 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 7d584dfafa..790152a2a6 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -188,12 +188,27 @@ void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
(uint32_t) xive_get_field64(EAS2_END_DATA, eas->w));
}
+#define XIVE2_QSIZE_CHUNK_CL 128
+#define XIVE2_QSIZE_CHUNK_4k 4096
+/* Calculate max number of queue entries for an END */
+static uint32_t xive2_end_get_qentries(Xive2End *end)
+{
+ uint32_t w3 = end->w3;
+ uint32_t qsize = xive_get_field32(END2_W3_QSIZE, w3);
+ if (xive_get_field32(END2_W3_CL, w3)) {
+ g_assert(qsize <= 4);
+ return (XIVE2_QSIZE_CHUNK_CL << qsize) / sizeof(uint32_t);
+ } else {
+ g_assert(qsize <= 12);
+ return (XIVE2_QSIZE_CHUNK_4k << qsize) / sizeof(uint32_t);
+ }
+}
+
void xive2_end_queue_pic_print_info(Xive2End *end, uint32_t width, GString *buf)
{
uint64_t qaddr_base = xive2_end_qaddr(end);
- uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
- uint32_t qentries = 1 << (qsize + 10);
+ uint32_t qentries = xive2_end_get_qentries(end);
int i;
/*
@@ -223,8 +238,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
uint64_t qaddr_base = xive2_end_qaddr(end);
uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
- uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
- uint32_t qentries = 1 << (qsize + 10);
+ uint32_t qentries = xive2_end_get_qentries(end);
uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
@@ -341,13 +355,12 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, GString *buf)
static void xive2_end_enqueue(Xive2End *end, uint32_t data)
{
uint64_t qaddr_base = xive2_end_qaddr(end);
- uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
uint64_t qaddr = qaddr_base + (qindex << 2);
uint32_t qdata = cpu_to_be32((qgen << 31) | (data & 0x7fffffff));
- uint32_t qentries = 1 << (qsize + 10);
+ uint32_t qentries = xive2_end_get_qentries(end);
if (dma_memory_write(&address_space_memory, qaddr, &qdata, sizeof(qdata),
MEMTXATTRS_UNSPECIFIED)) {
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index b11395c563..3c28de8a30 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -87,6 +87,7 @@ typedef struct Xive2End {
#define END2_W2_EQ_ADDR_HI PPC_BITMASK32(8, 31)
uint32_t w3;
#define END2_W3_EQ_ADDR_LO PPC_BITMASK32(0, 24)
+#define END2_W3_CL PPC_BIT32(27)
#define END2_W3_QSIZE PPC_BITMASK32(28, 31)
uint32_t w4;
#define END2_W4_END_BLOCK PPC_BITMASK32(4, 7)
--
2.47.1
* [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (2 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
` (3 more replies)
2025-05-12 3:10 ` [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority Nicholas Piggin
` (47 subsequent siblings)
51 siblings, 4 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Michael Kowal <kowal@linux.ibm.com>
In a multi-chip environment there will be remote/forwarded VSDs. The check
to find the matching INT controller (XIVE) for a remote block number was
comparing the INT controllers' chip numbers, but block numbers are not
tied to a chip number. The matching remote INT controller is the one
whose MMIO BAR for the VSD type matches the forwarded VSD address.
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/pnv_xive2.c | 25 +++++++++++++++++--------
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index d1713b406c..30b4ab2efe 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -102,12 +102,10 @@ static uint32_t pnv_xive2_block_id(PnvXive2 *xive)
}
/*
- * Remote access to controllers. HW uses MMIOs. For now, a simple scan
- * of the chips is good enough.
- *
- * TODO: Block scope support
+ * Remote access to INT controllers. HW uses MMIOs(?). For now, a simple
+ * scan of all the chips INT controller is good enough.
*/
-static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
+static PnvXive2 *pnv_xive2_get_remote(uint32_t vsd_type, hwaddr fwd_addr)
{
PnvMachineState *pnv = PNV_MACHINE(qdev_get_machine());
int i;
@@ -116,10 +114,22 @@ static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
PnvXive2 *xive = &chip10->xive;
- if (pnv_xive2_block_id(xive) == blk) {
+ /*
+ * Is this the XIVE whose MMIO BAR for this VSD type matches the
+ * forwarded VSD address?
+ */
+ if ((vsd_type == VST_ESB && fwd_addr == xive->esb_base) ||
+ (vsd_type == VST_END && fwd_addr == xive->end_base) ||
+ ((vsd_type == VST_NVP ||
+ vsd_type == VST_NVG) && fwd_addr == xive->nvpg_base) ||
+ (vsd_type == VST_NVC && fwd_addr == xive->nvc_base)) {
return xive;
}
}
+
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "XIVE: >>>>> pnv_xive2_get_remote() vsd_type %u fwd_addr 0x%lX NOT FOUND\n",
+ vsd_type, fwd_addr);
return NULL;
}
@@ -252,8 +262,7 @@ static uint64_t pnv_xive2_vst_addr(PnvXive2 *xive, uint32_t type, uint8_t blk,
/* Remote VST access */
if (GETFIELD(VSD_MODE, vsd) == VSD_MODE_FORWARD) {
- xive = pnv_xive2_get_remote(blk);
-
+ xive = pnv_xive2_get_remote(type, (vsd & VSD_ADDRESS_MASK));
return xive ? pnv_xive2_vst_addr(xive, type, blk, idx) : 0;
}
--
2.47.1
* [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (3 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching Nicholas Piggin
` (46 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Pushing a context and loading IPB from NVP is defined to merge ('or')
that IPB into the TIMA IPB register. PIPR should therefore be calculated
based on the final IPB value, not just the NVP value.
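A minimal sketch of the fix; the IPB-to-PIPR conversion here is a
hand-rolled stand-in for QEMU's xive_ipb_to_pipr() and the function
names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* The IPB bit for priority p is 0x80 >> p; PIPR is the most favored
 * (numerically smallest) pending priority. */
static uint8_t ipb_to_pipr(uint8_t ipb)
{
    for (int p = 0; p < 8; p++) {
        if (ipb & (0x80 >> p)) {
            return p;
        }
    }
    return 0xff;                     /* nothing pending */
}

/* Push: merge the NVP's IPB into the TIMA IPB, then derive PIPR from
 * the merged value (the bug derived it from the NVP value alone). */
static uint8_t push_pipr(uint8_t tima_ipb, uint8_t nvp_ipb)
{
    return ipb_to_pipr(tima_ipb | nvp_ipb);
}
```

With priority 2 already pending in the TIMA (0x20) and priority 4 in the
NVP (0x08), the merged PIPR must be 2; using the NVP value alone would
wrongly give 4.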
Fixes: 9d2b6058c5b ("ppc/xive2: Add grouping level to notification")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 790152a2a6..4dd04a0398 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -835,8 +835,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
}
+ /* IPB bits in the backlog are merged with the TIMA IPB bits */
regs[TM_IPB] |= ipb;
- backlog_prio = xive_ipb_to_pipr(ipb);
+ backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
backlog_level = 0;
first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
--
2.47.1
* [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (4 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
` (45 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Test that the NSR exception bit field is equal to the POOL ring value,
rather than testing for any common bits being set, which is more correct
(although there is no practical bug, because the LSI NSR type is not
implemented and the POOL/PHYS NSR types are encoded with exclusive bits).
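The difference can be sketched as below, using the NSR HE encodings from
QEMU's TIMA definitions (POOL=1, PHYS=2, LSI=3 in bits 7:6); the macro
and function names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

#define NSR_HE_MASK  (3u << 6)       /* HE field, NSR bits 7:6 */
#define NSR_HE_POOL  (1u << 6)
#define NSR_HE_PHYS  (2u << 6)
#define NSR_HE_LSI   (3u << 6)

/* Old test: any common bit set. LSI (0b11) would falsely match POOL. */
static int pool_match_loose(uint8_t nsr)
{
    return (nsr & NSR_HE_POOL) != 0;
}

/* Fixed test: the whole HE field must equal the POOL encoding. */
static int pool_match_exact(uint8_t nsr)
{
    return (nsr & NSR_HE_MASK) == NSR_HE_POOL;
}
```

POOL (0b01) and PHYS (0b10) use exclusive bits, so the loose test
happens to work for them; only the unimplemented LSI type (0b11) would
trigger the false match, which is why there is no practical bug.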
Fixes: 4c3ccac636 ("pnv/xive: Add special handling for pool targets")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 80b07a0afe..cebe409a1a 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -54,7 +54,8 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
uint8_t *alt_regs;
/* POOL interrupt uses IPB in QW2, POOL ring */
- if ((ring == TM_QW3_HV_PHYS) && (nsr & (TM_QW3_NSR_HE_POOL << 6))) {
+ if ((ring == TM_QW3_HV_PHYS) &&
+ ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
alt_ring = TM_QW2_HV_POOL;
} else {
alt_ring = ring;
--
2.47.1
* [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (5 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
` (3 more replies)
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
` (44 subsequent siblings)
51 siblings, 4 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Michael Kowal <kowal@linux.ibm.com>
When the END Event Queue wraps, the END EQ Generation bit is flipped and
the Generation Flipped bit is set to one. On an END Cache Watch read
operation, the Generation Flipped bit needs to be reset.
While debugging an error, the 'END not valid' error messages were also
modified to include the method name, since they were all identical.
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/pnv_xive2.c | 3 ++-
hw/intc/xive2.c | 4 ++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 30b4ab2efe..72cdf0f20c 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1325,10 +1325,11 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
case VC_ENDC_WATCH3_DATA0:
/*
* Load DATA registers from cache with data requested by the
- * SPEC register
+ * SPEC register. Clear gen_flipped bit in word 1.
*/
watch_engine = (offset - VC_ENDC_WATCH0_DATA0) >> 6;
pnv_xive2_end_cache_load(xive, watch_engine);
+ xive->vc_regs[reg] &= ~(uint64_t)END2_W1_GEN_FLIPPED;
val = xive->vc_regs[reg];
break;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 4dd04a0398..453fe37f18 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -374,8 +374,8 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
qgen ^= 1;
end->w1 = xive_set_field32(END2_W1_GENERATION, end->w1, qgen);
- /* TODO(PowerNV): reset GF bit on a cache watch operation */
- end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, qgen);
+ /* Set gen flipped to 1, it gets reset on a cache watch operation */
+ end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, 1);
}
end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
}
--
2.47.1
* [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (6 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
` (3 more replies)
2025-05-12 3:10 ` [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Nicholas Piggin
` (43 subsequent siblings)
51 siblings, 4 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
The current XIVE algorithm for finding a matching group vCPU
target always uses the first vCPU found. And since it always
starts the search with thread 0 of a core, thread 0 is almost
always used to handle group interrupts. This can lead to additional
interrupt latency and poor performance for interrupt-intensive
workloads.
Change this to use a simple round-robin algorithm to decide which
thread number to start the search with, leading to a more
distributed use of threads for handling group interrupts.
[npiggin: Also round-robin among threads, not just cores]
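The rotation can be sketched as follows (static counters as in the
patch; names and core/thread counts are hypothetical):

```c
#include <assert.h>

/* Advance a (core, thread) search start position round-robin: bump the
 * thread first, and carry into the core index when the threads wrap. */
static int next_core, next_thread;

static void advance_start(int nr_cores, int nr_threads)
{
    next_thread++;
    if (next_thread >= nr_threads) {
        next_thread = 0;
        next_core = (next_core + 1) % nr_cores;
    }
}
```

Each successful match moves the next search's starting point forward, so
group interrupts spread across threads instead of almost always landing
on thread 0.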
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 72cdf0f20c..d7ca97ecbb 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
int i, j;
bool gen1_tima_os =
xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
+ static int next_start_core;
+ static int next_start_thread;
+ int start_core = next_start_core;
+ int start_thread = next_start_thread;
for (i = 0; i < chip->nr_cores; i++) {
- PnvCore *pc = chip->cores[i];
+ PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
CPUCore *cc = CPU_CORE(pc);
for (j = 0; j < cc->nr_threads; j++) {
- PowerPCCPU *cpu = pc->threads[j];
+ /* Start search for match with different thread each call */
+ PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
XiveTCTX *tctx;
int ring;
@@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
if (!match->tctx) {
match->ring = ring;
match->tctx = tctx;
+
+ next_start_thread = j + start_thread + 1;
+ if (next_start_thread >= cc->nr_threads) {
+ next_start_thread = 0;
+ next_start_core = i + start_core + 1;
+ if (next_start_core >= chip->nr_cores) {
+ next_start_core = 0;
+ }
+ }
}
count++;
}
--
2.47.1
* [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (7 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Nicholas Piggin
` (42 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
A problem was seen where uart interrupts would be lost resulting in the
console hanging. Traces showed that a lower priority interrupt was
preempting a higher priority interrupt, which would result in the higher
priority interrupt never being handled.
The new interrupt's priority was being compared against the CPPR
(Current Processor Priority Register) instead of the PIPR (Post
Interrupt Priority Register), as required by the XIVE spec.
This left a window between raising an interrupt and ACK'ing it
in which a lower priority interrupt could slip in.
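A worked sketch of the window (function names and values are
hypothetical; priority 0 is most favored):

```c
#include <assert.h>
#include <stdint.h>

/* A new interrupt is precluded unless it is strictly more favored
 * (numerically smaller) than the reference priority. */
static int precluded_old(uint8_t prio, uint8_t cppr) { return prio >= cppr; }
static int precluded_new(uint8_t prio, uint8_t pipr) { return prio >= pipr; }
```

With CPPR=7 and a priority-2 interrupt already presented (PIPR=2), a new
priority-5 interrupt passes the old CPPR check (5 < 7) and preempts the
unacked priority-2 interrupt; checked against PIPR it is correctly
precluded.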
Fixes: 26c55b99418 ("ppc/xive2: Process group backlog when updating the CPPR")
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive2.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 453fe37f18..2b4d0f51be 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1283,7 +1283,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
* priority to know if the thread can take the interrupt now or if
* it is precluded.
*/
- if (priority < alt_regs[TM_CPPR]) {
+ if (priority < alt_regs[TM_PIPR]) {
return false;
}
return true;
--
2.47.1
* [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (8 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Nicholas Piggin
` (41 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
According to the XIVE spec, updating the CPPR should also update the
PIPR. The final value of the PIPR depends on other factors, but it
should never be set to a value that is above the CPPR.
Also add support for redistributing an active group interrupt when it
is precluded as a result of changing the CPPR value.
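The clamp itself is one line; a sketch with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

/* PIPR tracks the most favored pending priority, but must never be
 * less favored (numerically greater) than the CPPR. */
static uint8_t clamp_pipr(uint8_t pipr_min, uint8_t cppr)
{
    return pipr_min > cppr ? cppr : pipr_min;
}
```

For example, with nothing pending (pipr_min=0xff) and CPPR=5, PIPR is
set to 5 rather than 0xff.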
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive2.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 2b4d0f51be..1971c05fa1 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -995,7 +995,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
}
}
}
- regs[TM_PIPR] = pipr_min;
+
+ /* PIPR should not be set to a value greater than CPPR */
+ regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
if (rc) {
--
2.47.1
* [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (9 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority Nicholas Piggin
` (40 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Group interrupts should not be taken from the backlog and presented
if they are precluded by CPPR.
Fixes: 855434b3b8 ("ppc/xive2: Process group backlog when pushing an OS context")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 1971c05fa1..8ede95b671 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -845,7 +845,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
first_group, &group_level);
regs[TM_LSMFB] = group_prio;
- if (regs[TM_LGS] && group_prio < backlog_prio) {
+ if (regs[TM_LGS] && group_prio < backlog_prio &&
+ group_prio < regs[TM_CPPR]) {
+
/* VP can take a group interrupt */
xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
group_prio, group_level);
--
2.47.1
* [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (10 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Nicholas Piggin
` (39 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
The group interrupt delivery flow selects the group backlog scan if
LSMFB < IPB, but that scan may find an interrupt with a priority >=
IPB. In that case, the VP-direct interrupt should be chosen instead.
This extends to selecting the lowest priority between the POOL and
PHYS rings.
Implement this by simply re-starting the selection logic if the
backlog irq was not found or its priority did not match LSMFB (LSMFB
is updated, so the next pass sees the right value and does not loop
infinitely).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 8ede95b671..de139dcfbf 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -939,7 +939,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
{
uint8_t *regs = &tctx->regs[ring];
Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
- uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
+ uint8_t old_cppr, backlog_prio, first_group, group_level;
uint8_t pipr_min, lsmfb_min, ring_min;
bool group_enabled;
uint32_t nvp_blk, nvp_idx;
@@ -961,10 +961,12 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
* Recompute the PIPR based on local pending interrupts. It will
* be adjusted below if needed in case of pending group interrupts.
*/
+again:
pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
group_enabled = !!regs[TM_LGS];
- lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
+ lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
ring_min = ring;
+ group_level = 0;
/* PHYS updates also depend on POOL values */
if (ring == TM_QW3_HV_PHYS) {
@@ -998,9 +1000,6 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
}
}
- /* PIPR should not be set to a value greater than CPPR */
- regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
-
rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
if (rc) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
@@ -1019,7 +1018,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
if (group_enabled &&
lsmfb_min < cppr &&
- lsmfb_min < regs[TM_PIPR]) {
+ lsmfb_min < pipr_min) {
/*
* Thread has seen a group interrupt with a higher priority
* than the new cppr or pending local interrupt. Check the
@@ -1048,12 +1047,25 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
nvp_blk, nvp_idx,
first_group, &group_level);
tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
- if (backlog_prio != 0xFF) {
- xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
- backlog_prio, group_level);
- regs[TM_PIPR] = backlog_prio;
+ if (backlog_prio != lsmfb_min) {
+ /*
+ * If the group backlog scan finds a less favored or no interrupt,
+ * then re-do the processing which may turn up a more favored
+ * interrupt from IPB or the other pool. Backlog should not
+ * find a priority < LSMFB.
+ */
+ g_assert(backlog_prio >= lsmfb_min);
+ goto again;
}
+
+ xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
+ backlog_prio, group_level);
+ pipr_min = backlog_prio;
}
+
+ /* PIPR should not be set to a value greater than CPPR */
+ regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
+
/* CPPR has changed, check if we need to raise a pending exception */
xive_tctx_notify(tctx, ring_min, group_level);
}
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
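To make the restart flow described above concrete, here is a simplified, self-contained sketch of the selection logic (this is not the QEMU code; `select_pipr`, its parameters, and the priority values are illustrative assumptions; 0xFF stands for "no pending interrupt" and lower values are more favored):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the xive2_tctx_set_cppr() selection loop:
 * if the backlog scan disagrees with the cached LSMFB, update LSMFB
 * and re-run selection, which may then pick the VP-direct (IPB)
 * interrupt instead. */
static uint8_t select_pipr(uint8_t ipb_prio, uint8_t lsmfb,
                           uint8_t backlog_scan_result, uint8_t cppr)
{
    uint8_t pipr_min, lsmfb_min;

again:
    pipr_min = ipb_prio;
    lsmfb_min = lsmfb;

    if (lsmfb_min < cppr && lsmfb_min < pipr_min) {
        /* Backlog looked more favored; the actual scan may disagree */
        lsmfb = backlog_scan_result;            /* LSMFB is updated */
        if (backlog_scan_result != lsmfb_min) {
            /* Scan found a less favored (or no) interrupt: redo the
             * selection, which now sees the corrected LSMFB and so
             * cannot loop forever. */
            assert(backlog_scan_result >= lsmfb_min);
            goto again;
        }
        pipr_min = backlog_scan_result;
    }

    /* PIPR should not be set to a value greater than CPPR */
    return pipr_min > cppr ? cppr : pipr_min;
}
```

For example, with IPB priority 6, a stale LSMFB of 2, and a backlog scan that finds nothing (0xFF), the second pass falls back to the VP-direct priority 6.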
* [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (11 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting Nicholas Piggin
` (38 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
If CPPR is lowered so that it precludes the pending interrupt, NSR
should be cleared and the qemu_irq should be lowered. This avoids
some cases of spurious interrupts.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index cebe409a1a..6293ea4361 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -110,6 +110,9 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
regs[TM_IPB], alt_regs[TM_PIPR],
alt_regs[TM_CPPR], alt_regs[TM_NSR]);
qemu_irq_raise(xive_tctx_output(tctx, ring));
+ } else {
+ alt_regs[TM_NSR] = 0;
+ qemu_irq_lower(xive_tctx_output(tctx, ring));
}
}
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
* [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (12 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:34 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions Nicholas Piggin
` (37 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Have xive_tctx_accept clear NSR in one shot rather than masking out
bits as they are tested. This makes it clear that NSR is reset to 0,
and never leaves a partial NSR value in the register.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 6293ea4361..bb40a69c5b 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -68,13 +68,11 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
* If the interrupt was for a specific VP, reset the pending
* buffer bit, otherwise clear the logical server indicator
*/
- if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
- regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
- } else {
+ if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
}
- /* Drop the exception bit and any group/crowd */
+ /* Clear the exception from NSR */
regs[TM_NSR] = 0;
trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
* [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (13 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:35 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts Nicholas Piggin
` (36 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Rather than having functions return masks to test NSR bits, add
functions that test those bits directly. This should be no functional
change; it just makes the code more readable.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 51 +++++++++++++++++++++++++++++++++++--------
include/hw/ppc/xive.h | 4 ++++
2 files changed, 46 insertions(+), 9 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index bb40a69c5b..c2da23f9ea 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -25,6 +25,45 @@
/*
* XIVE Thread Interrupt Management context
*/
+bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
+{
+ switch (ring) {
+ case TM_QW1_OS:
+ return !!(nsr & TM_QW1_NSR_EO);
+ case TM_QW2_HV_POOL:
+ case TM_QW3_HV_PHYS:
+ return !!(nsr & TM_QW3_NSR_HE);
+ default:
+ g_assert_not_reached();
+ }
+}
+
+bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr)
+{
+ if ((nsr & TM_NSR_GRP_LVL) > 0) {
+ g_assert(xive_nsr_indicates_exception(ring, nsr));
+ return true;
+ }
+ return false;
+}
+
+uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr)
+{
+ /* NSR determines if pool/phys ring is for phys or pool interrupt */
+ if ((ring == TM_QW3_HV_PHYS) || (ring == TM_QW2_HV_POOL)) {
+ uint8_t he = (nsr & TM_QW3_NSR_HE) >> 6;
+
+ if (he == TM_QW3_NSR_HE_PHYS) {
+ return TM_QW3_HV_PHYS;
+ } else if (he == TM_QW3_NSR_HE_POOL) {
+ return TM_QW2_HV_POOL;
+ } else {
+ /* Don't support LSI mode */
+ g_assert_not_reached();
+ }
+ }
+ return ring;
+}
static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
{
@@ -48,18 +87,12 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
qemu_irq_lower(xive_tctx_output(tctx, ring));
- if (regs[TM_NSR] != 0) {
+ if (xive_nsr_indicates_exception(ring, nsr)) {
uint8_t cppr = regs[TM_PIPR];
uint8_t alt_ring;
uint8_t *alt_regs;
- /* POOL interrupt uses IPB in QW2, POOL ring */
- if ((ring == TM_QW3_HV_PHYS) &&
- ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
- alt_ring = TM_QW2_HV_POOL;
- } else {
- alt_ring = ring;
- }
+ alt_ring = xive_nsr_exception_ring(ring, nsr);
alt_regs = &tctx->regs[alt_ring];
regs[TM_CPPR] = cppr;
@@ -68,7 +101,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
* If the interrupt was for a specific VP, reset the pending
* buffer bit, otherwise clear the logical server indicator
*/
- if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
+ if (!xive_nsr_indicates_group_exception(ring, nsr)) {
alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
}
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 538f438681..28f0f1b79a 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -365,6 +365,10 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
return *((uint32_t *) &ring[TM_WORD2]);
}
+bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
+bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
+uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
+
/*
* XIVE Router
*/
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
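As a standalone illustration of the helpers introduced in this patch, the sketch below reimplements the NSR decode outside QEMU. The TM_* constants are copied here only for illustration and mirror the TIMA layout used by the XIVE model; the function names are simplified stand-ins:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Ring offsets into the TIMA (illustrative copies) */
#define TM_QW1_OS           0x10
#define TM_QW2_HV_POOL      0x20
#define TM_QW3_HV_PHYS      0x30

#define TM_QW1_NSR_EO       0x80  /* OS event occurred */
#define TM_QW3_NSR_HE       0xC0  /* HV exception type field (bits 0-1) */
#define TM_QW3_NSR_HE_POOL  1
#define TM_QW3_NSR_HE_PHYS  2

static bool nsr_indicates_exception(uint8_t ring, uint8_t nsr)
{
    if (ring == TM_QW1_OS) {
        return !!(nsr & TM_QW1_NSR_EO);
    }
    /* POOL and PHYS rings use the HE field */
    return !!(nsr & TM_QW3_NSR_HE);
}

static uint8_t nsr_exception_ring(uint8_t ring, uint8_t nsr)
{
    /* NSR determines if the pool/phys ring exception is phys or pool */
    if (ring == TM_QW3_HV_PHYS || ring == TM_QW2_HV_POOL) {
        uint8_t he = (nsr & TM_QW3_NSR_HE) >> 6;
        return he == TM_QW3_NSR_HE_PHYS ? TM_QW3_HV_PHYS
                                        : TM_QW2_HV_POOL;
    }
    return ring;
}
```

The point of the refactor is visible here: callers test the condition directly instead of building a mask and comparing it at each call site.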
* [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (14 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 17/50] pnv/xive2: Support ESB Escalation Nicholas Piggin
` (35 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
This improves the implementation of pulling pool and phys contexts in
XIVE1 by following the OS context pulling code more closely.
In particular, the old ring data is returned rather than the modified
data, and the irq signals are reset on pull.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 58 insertions(+), 8 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index c2da23f9ea..1a94642c62 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -241,25 +241,75 @@ static uint64_t xive_tm_ack_hv_reg(XivePresenter *xptr, XiveTCTX *tctx,
return xive_tctx_accept(tctx, TM_QW3_HV_PHYS);
}
+static void xive_pool_cam_decode(uint32_t cam, uint8_t *nvt_blk,
+ uint32_t *nvt_idx, bool *vp)
+{
+ if (nvt_blk) {
+ *nvt_blk = xive_nvt_blk(cam);
+ }
+ if (nvt_idx) {
+ *nvt_idx = xive_nvt_idx(cam);
+ }
+ if (vp) {
+ *vp = !!(cam & TM_QW2W2_VP);
+ }
+}
+
+static uint32_t xive_tctx_get_pool_cam(XiveTCTX *tctx, uint8_t *nvt_blk,
+ uint32_t *nvt_idx, bool *vp)
+{
+ uint32_t qw2w2 = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
+ uint32_t cam = be32_to_cpu(qw2w2);
+
+ xive_pool_cam_decode(cam, nvt_blk, nvt_idx, vp);
+ return qw2w2;
+}
+
+static void xive_tctx_set_pool_cam(XiveTCTX *tctx, uint32_t qw2w2)
+{
+ memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
+}
+
static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size)
{
- uint32_t qw2w2_prev = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
uint32_t qw2w2;
+ uint32_t qw2w2_new;
+ uint8_t nvt_blk;
+ uint32_t nvt_idx;
+ bool vp;
- qw2w2 = xive_set_field32(TM_QW2W2_VP, qw2w2_prev, 0);
- memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
+ qw2w2 = xive_tctx_get_pool_cam(tctx, &nvt_blk, &nvt_idx, &vp);
+
+ if (!vp) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid POOL NVT %x/%x !?\n",
+ nvt_blk, nvt_idx);
+ }
+
+ /* Invalidate CAM line */
+ qw2w2_new = xive_set_field32(TM_QW2W2_VP, qw2w2, 0);
+ xive_tctx_set_pool_cam(tctx, qw2w2_new);
+
+ xive_tctx_reset_signal(tctx, TM_QW1_OS);
+ xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
return qw2w2;
}
static uint64_t xive_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size)
{
- uint8_t qw3b8_prev = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
- uint8_t qw3b8;
+ uint8_t qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
+ uint8_t qw3b8_new;
+
+ qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
+ if (!(qw3b8 & TM_QW3B8_VT)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid PHYS thread!?\n");
+ }
+ qw3b8_new = qw3b8 & ~TM_QW3B8_VT;
+ tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8_new;
- qw3b8 = qw3b8_prev & ~TM_QW3B8_VT;
- tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8;
+ xive_tctx_reset_signal(tctx, TM_QW1_OS);
+ xive_tctx_reset_signal(tctx, TM_QW3_HV_PHYS);
return qw3b8;
}
@@ -489,7 +539,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
qw1w2 = xive_tctx_get_os_cam(tctx, &nvt_blk, &nvt_idx, &vo);
if (!vo) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVT %x/%x !?\n",
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid OS NVT %x/%x !?\n",
nvt_blk, nvt_idx);
}
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
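The pool CAM decode added here can be sketched in isolation as follows. The VP valid bit and the block/index split (a 4-bit block at bit 19) are illustrative assumptions mirroring the existing OS CAM handling; the names are not the QEMU ones:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TM_QW2W2_VP  0x80000000u  /* pool context valid bit */
#define NVT_SHIFT    19           /* assumed block/index split */

static uint8_t nvt_blk(uint32_t cam)
{
    return (cam >> NVT_SHIFT) & 0xf;
}

static uint32_t nvt_idx(uint32_t cam)
{
    return cam & ((1u << NVT_SHIFT) - 1);
}

/* Decode a pool CAM line into NVT block/index plus the valid bit,
 * as xive_pool_cam_decode() does for the pull path. */
static void pool_cam_decode(uint32_t cam, uint8_t *blk, uint32_t *idx,
                            bool *vp)
{
    *blk = nvt_blk(cam);
    *idx = nvt_idx(cam);
    *vp = !!(cam & TM_QW2W2_VP);
}
```

On pull, the model decodes the old word (so a pull of an invalid context can be reported), writes back the word with VP cleared, and returns the old value to the caller.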
* [PATCH 17/50] pnv/xive2: Support ESB Escalation
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (15 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
` (34 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin, Glenn Miles
From: Glenn Miles <milesg@linux.vnet.ibm.com>
Add support for XIVE ESB Interrupt Escalation.
Suggested-by: Michael Kowal <kowal@linux.ibm.com>
[This change was taken from a patch provided by Michael Kowal.]
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
---
hw/intc/xive2.c | 62 ++++++++++++++++++++++++++++++-------
include/hw/ppc/xive2.h | 1 +
include/hw/ppc/xive2_regs.h | 13 +++++---
3 files changed, 59 insertions(+), 17 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index de139dcfbf..0993e792cc 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1552,18 +1552,39 @@ do_escalation:
}
}
- /*
- * The END trigger becomes an Escalation trigger
- */
- xive2_router_end_notify(xrtr,
- xive_get_field32(END2_W4_END_BLOCK, end.w4),
- xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
- xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ if (xive2_end_is_escalate_end(&end)) {
+ /*
+ * Perform END Adaptive escalation processing
+ * The END trigger becomes an Escalation trigger
+ */
+ xive2_router_end_notify(xrtr,
+ xive_get_field32(END2_W4_END_BLOCK, end.w4),
+ xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
+ xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ } /* end END adaptive escalation */
+
+ else {
+ uint32_t lisn; /* Logical Interrupt Source Number */
+
+ /*
+ * Perform ESB escalation processing
+ * E[N] == 1 --> N
+ * Req[Block] <- E[ESB_Block]
+ * Req[Index] <- E[ESB_Index]
+ * Req[Offset] <- 0x000
+ * Execute <ESB Store> Req command
+ */
+ lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
+ xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
+
+ xive2_notify(xrtr, lisn, true /* pq_checked */);
+ }
+
+ return;
}
-void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked)
{
- Xive2Router *xrtr = XIVE2_ROUTER(xn);
uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
Xive2Eas eas;
@@ -1606,13 +1627,30 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
return;
}
+ /* TODO: add support for EAS resume */
+ if (xive2_eas_is_resume(&eas)) {
+ qemu_log_mask(LOG_UNIMP,
+ "XIVE: EAS resume processing unimplemented - LISN %x\n",
+ lisn);
+ return;
+ }
+
/*
* The event trigger becomes an END trigger
*/
xive2_router_end_notify(xrtr,
- xive_get_field64(EAS2_END_BLOCK, eas.w),
- xive_get_field64(EAS2_END_INDEX, eas.w),
- xive_get_field64(EAS2_END_DATA, eas.w));
+ xive_get_field64(EAS2_END_BLOCK, eas.w),
+ xive_get_field64(EAS2_END_INDEX, eas.w),
+ xive_get_field64(EAS2_END_DATA, eas.w));
+ return;
+}
+
+void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xn);
+
+ xive2_notify(xrtr, lisn, pq_checked);
+ return;
}
static const Property xive2_router_properties[] = {
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 8cdf819174..2436ddb5e5 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -80,6 +80,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
uint32_t xive2_router_get_config(Xive2Router *xrtr);
void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
+void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
/*
* XIVE2 Presenter (POWER10)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 3c28de8a30..2c535ec0d0 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -39,15 +39,18 @@
typedef struct Xive2Eas {
uint64_t w;
-#define EAS2_VALID PPC_BIT(0)
-#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
-#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
-#define EAS2_MASKED PPC_BIT(32) /* Masked */
-#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
+#define EAS2_VALID PPC_BIT(0)
+#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
+#define EAS2_RESUME PPC_BIT(3) /* END Resume(unimp) */
+#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
+#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
+#define EAS2_MASKED PPC_BIT(32) /* Masked */
+#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
} Xive2Eas;
#define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
#define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
+#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
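For the ESB escalation path above, the escalation request is formed by packing the END's escalation block/index into a LISN and re-entering the notification path. A minimal sketch of that packing, assuming the 4-bit block / 28-bit index split used by the XIVE_EAS()/XIVE_EAS_BLOCK()/XIVE_EAS_INDEX() helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Compose and decompose a logical interrupt source number (LISN)
 * from an EAS block and index, as the escalation path does before
 * calling xive2_notify(). */
#define XIVE_EAS(blk, idx)    (((uint32_t)(blk) << 28) | \
                               ((idx) & 0x0fffffffu))
#define XIVE_EAS_BLOCK(lisn)  (((lisn) >> 28) & 0xf)
#define XIVE_EAS_INDEX(lisn)  ((lisn) & 0x0fffffffu)
```

The round trip is lossless for any block in 0..15 and index below 2^28, which is why the escalation trigger can reuse the ordinary EAS lookup in xive2_notify() unchanged.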
* [PATCH 18/50] pnv/xive2: Print value in invalid register write logging
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (16 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 17/50] pnv/xive2: Support ESB Escalation Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
` (3 more replies)
2025-05-12 3:10 ` [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Nicholas Piggin
` (33 subsequent siblings)
51 siblings, 4 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Michael Kowal <kowal@linux.ibm.com>
This can make it easier to see what the target system is trying to
do.
[npiggin: split from larger patch]
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/pnv_xive2.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index d7ca97ecbb..fcf5b2e75c 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1197,7 +1197,8 @@ static void pnv_xive2_ic_cq_write(void *opaque, hwaddr offset,
case CQ_FIRMASK_OR: /* FIR error reporting */
break;
default:
- xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx, offset);
+ xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx" value 0x%"PRIx64,
+ offset, val);
return;
}
@@ -1495,7 +1496,8 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
break;
default:
- xive2_error(xive, "VC: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "VC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
+ offset, val);
return;
}
@@ -1703,7 +1705,8 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
break;
default:
- xive2_error(xive, "PC: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "PC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
+ offset, val);
return;
}
@@ -1790,7 +1793,8 @@ static void pnv_xive2_ic_tctxt_write(void *opaque, hwaddr offset,
xive->tctxt_regs[reg] = val;
break;
default:
- xive2_error(xive, "TCTXT: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "TCTXT: invalid write @0x%"HWADDR_PRIx
+ " data 0x%"PRIx64, offset, val);
return;
}
}
@@ -1861,7 +1865,8 @@ static void pnv_xive2_xscom_write(void *opaque, hwaddr offset,
pnv_xive2_ic_tctxt_write(opaque, mmio_offset, val, size);
break;
default:
- xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx
+ " value 0x%"PRIx64, offset, val);
}
}
@@ -1929,7 +1934,8 @@ static void pnv_xive2_ic_notify_write(void *opaque, hwaddr offset,
break;
default:
- xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx
+ " value 0x%"PRIx64, offset, val);
}
}
@@ -1971,7 +1977,8 @@ static void pnv_xive2_ic_lsi_write(void *opaque, hwaddr offset,
{
PnvXive2 *xive = PNV_XIVE2(opaque);
- xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
+ offset, val);
}
static const MemoryRegionOps pnv_xive2_ic_lsi_ops = {
@@ -2074,7 +2081,8 @@ static void pnv_xive2_ic_sync_write(void *opaque, hwaddr offset,
inject_type = PNV_XIVE2_QUEUE_NXC_ST_RMT_CI;
break;
default:
- xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx, offset);
+ xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
+ offset, val);
return;
}
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
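As an illustration of the extended message format: the offset alone says where the guest wrote, while the value shows what it was trying to configure. The sketch below assumes HWADDR_PRIx expands to PRIx64, as on builds with a 64-bit hwaddr; `format_invalid_write` is a hypothetical helper, not a QEMU function:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumption: 64-bit hwaddr, so HWADDR_PRIx is PRIx64 */
#define HWADDR_PRIx PRIx64

/* Build the same kind of message xive2_error() now emits for an
 * invalid register write, including the written value. */
static void format_invalid_write(char *buf, size_t len,
                                 uint64_t offset, uint64_t val)
{
    snprintf(buf, len, "CQ: invalid write 0x%" HWADDR_PRIx
             " value 0x%" PRIx64, offset, val);
}
```

Note that `PRIx64` prints without a leading "0x", which is why the format strings in the patch add it explicitly.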
* [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (17 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
` (2 more replies)
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
` (32 subsequent siblings)
51 siblings, 3 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Firmware expects to read back the WATCH_FULL bit from the VC_ENDC_WATCH_SPEC
register, so don't clear it on read.
Don't bother clearing the reads-as-zero CONFLICT bit because it's masked
at write already.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/pnv_xive2.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index fcf5b2e75c..3c26cd6b77 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1329,7 +1329,6 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
case VC_ENDC_WATCH2_SPEC:
case VC_ENDC_WATCH3_SPEC:
watch_engine = (offset - VC_ENDC_WATCH0_SPEC) >> 6;
- xive->vc_regs[reg] &= ~(VC_ENDC_WATCH_FULL | VC_ENDC_WATCH_CONFLICT);
pnv_xive2_endc_cache_watch_release(xive, watch_engine);
val = xive->vc_regs[reg];
break;
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
* [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (18 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
` (3 more replies)
2025-05-12 3:10 ` [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags Nicholas Piggin
` (31 subsequent siblings)
51 siblings, 4 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Michael Kowal <kowal@linux.ibm.com>
Writes to the Flush Control registers were being logged as invalid
even though they are allowed. Clearing the unsupported
want_cache_disable feature is supported, so don't log an error in
that case.
Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
---
hw/intc/pnv_xive2.c | 36 ++++++++++++++++++++++++++++++++----
1 file changed, 32 insertions(+), 4 deletions(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 3c26cd6b77..c9374f0eee 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1411,7 +1411,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
/*
* ESB cache updates (not modeled)
*/
- /* case VC_ESBC_FLUSH_CTRL: */
+ case VC_ESBC_FLUSH_CTRL:
+ if (val & VC_ESBC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+ xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+ " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+ offset, val);
+ return;
+ }
+ break;
case VC_ESBC_FLUSH_POLL:
xive->vc_regs[VC_ESBC_FLUSH_CTRL >> 3] |= VC_ESBC_FLUSH_CTRL_POLL_VALID;
/* ESB update */
@@ -1427,7 +1434,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
/*
* EAS cache updates (not modeled)
*/
- /* case VC_EASC_FLUSH_CTRL: */
+ case VC_EASC_FLUSH_CTRL:
+ if (val & VC_EASC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+ xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+ " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+ offset, val);
+ return;
+ }
+ break;
case VC_EASC_FLUSH_POLL:
xive->vc_regs[VC_EASC_FLUSH_CTRL >> 3] |= VC_EASC_FLUSH_CTRL_POLL_VALID;
/* EAS update */
@@ -1466,7 +1480,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
break;
- /* case VC_ENDC_FLUSH_CTRL: */
+ case VC_ENDC_FLUSH_CTRL:
+ if (val & VC_ENDC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+ xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+ " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+ offset, val);
+ return;
+ }
+ break;
case VC_ENDC_FLUSH_POLL:
xive->vc_regs[VC_ENDC_FLUSH_CTRL >> 3] |= VC_ENDC_FLUSH_CTRL_POLL_VALID;
break;
@@ -1687,7 +1708,14 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
pnv_xive2_nxc_update(xive, watch_engine);
break;
- /* case PC_NXC_FLUSH_CTRL: */
+ case PC_NXC_FLUSH_CTRL:
+ if (val & PC_NXC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+ xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+ " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+ offset, val);
+ return;
+ }
+ break;
case PC_NXC_FLUSH_POLL:
xive->pc_regs[PC_NXC_FLUSH_CTRL >> 3] |= PC_NXC_FLUSH_CTRL_POLL_VALID;
break;
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
* [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (19 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:41 ` Mike Kowal
2025-05-16 0:18 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts Nicholas Piggin
` (30 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
Add support for extracting additional configuration flags from
the XIVE configuration register that are needed for redistribution
of group interrupts.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/pnv_xive2.c | 16 ++++++++++++----
hw/intc/pnv_xive2_regs.h | 1 +
include/hw/ppc/xive2.h | 8 +++++---
3 files changed, 18 insertions(+), 7 deletions(-)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index c9374f0eee..96b8851b7e 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -605,20 +605,28 @@ static uint32_t pnv_xive2_get_config(Xive2Router *xrtr)
{
PnvXive2 *xive = PNV_XIVE2(xrtr);
uint32_t cfg = 0;
+ uint64_t reg = xive->cq_regs[CQ_XIVE_CFG >> 3];
- if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS) {
+ if (reg & CQ_XIVE_CFG_GEN1_TIMA_OS) {
cfg |= XIVE2_GEN1_TIMA_OS;
}
- if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
+ if (reg & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
cfg |= XIVE2_VP_SAVE_RESTORE;
}
- if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE,
- xive->cq_regs[CQ_XIVE_CFG >> 3]) == CQ_XIVE_CFG_THREADID_8BITS) {
+ if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE, reg) ==
+ CQ_XIVE_CFG_THREADID_8BITS) {
cfg |= XIVE2_THREADID_8BITS;
}
+ if (reg & CQ_XIVE_CFG_EN_VP_GRP_PRIORITY) {
+ cfg |= XIVE2_EN_VP_GRP_PRIORITY;
+ }
+
+ cfg = SETFIELD(XIVE2_VP_INT_PRIO, cfg,
+ GETFIELD(CQ_XIVE_CFG_VP_INT_PRIO, reg));
+
return cfg;
}
diff --git a/hw/intc/pnv_xive2_regs.h b/hw/intc/pnv_xive2_regs.h
index e8b87b3d2c..d53300f709 100644
--- a/hw/intc/pnv_xive2_regs.h
+++ b/hw/intc/pnv_xive2_regs.h
@@ -66,6 +66,7 @@
#define CQ_XIVE_CFG_GEN1_TIMA_HYP_BLK0 PPC_BIT(26) /* 0 if bit[25]=0 */
#define CQ_XIVE_CFG_GEN1_TIMA_CROWD_DIS PPC_BIT(27) /* 0 if bit[25]=0 */
#define CQ_XIVE_CFG_GEN1_END_ESX PPC_BIT(28)
+#define CQ_XIVE_CFG_EN_VP_GRP_PRIORITY PPC_BIT(32) /* 0 if bit[25]=1 */
#define CQ_XIVE_CFG_EN_VP_SAVE_RESTORE PPC_BIT(38) /* 0 if bit[25]=1 */
#define CQ_XIVE_CFG_EN_VP_SAVE_REST_STRICT PPC_BIT(39) /* 0 if bit[25]=1 */
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 2436ddb5e5..760b94a962 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -29,9 +29,11 @@ OBJECT_DECLARE_TYPE(Xive2Router, Xive2RouterClass, XIVE2_ROUTER);
* Configuration flags
*/
-#define XIVE2_GEN1_TIMA_OS 0x00000001
-#define XIVE2_VP_SAVE_RESTORE 0x00000002
-#define XIVE2_THREADID_8BITS 0x00000004
+#define XIVE2_GEN1_TIMA_OS 0x00000001
+#define XIVE2_VP_SAVE_RESTORE 0x00000002
+#define XIVE2_THREADID_8BITS 0x00000004
+#define XIVE2_EN_VP_GRP_PRIORITY 0x00000008
+#define XIVE2_VP_INT_PRIO 0x00000030
typedef struct Xive2RouterClass {
SysBusDeviceClass parent;
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
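The new multi-bit XIVE2_VP_INT_PRIO flag is moved between registers with GETFIELD/SETFIELD. The sketch below uses simplified stand-ins for QEMU's helpers, operating on plain 32-bit masks (the lowest set bit of the mask gives the field shift); it is illustrative only:

```c
#include <assert.h>
#include <stdint.h>

#define XIVE2_EN_VP_GRP_PRIORITY 0x00000008
#define XIVE2_VP_INT_PRIO        0x00000030

/* Extract a field: shift right by the position of the mask's
 * lowest set bit (mask & -mask isolates that bit). */
static uint32_t getfield(uint32_t mask, uint32_t word)
{
    return (word & mask) / (mask & -mask);
}

/* Insert a field value into word under mask, preserving other bits */
static uint32_t setfield(uint32_t mask, uint32_t word, uint32_t val)
{
    return (word & ~mask) | ((val * (mask & -mask)) & mask);
}
```

This is how pnv_xive2_get_config() can carry the 2-bit VP interrupt priority from CQ_XIVE_CFG into the compact config word without the presenter needing to know the register layout.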
* [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (20 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:42 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 23/50] ppc/xive: Add more interrupt notification tracing Nicholas Piggin
` (29 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
When an XIVE context is pulled while it has an active, unacknowledged
group interrupt, XIVE checks whether a context on another thread can
handle the interrupt and, if so, notifies that context. If no context
can handle the interrupt, the interrupt is added to a backlog and XIVE
attempts to escalate it, if configured to do so, allowing a
higher-privileged handler to activate a context that can handle the
original interrupt.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive2.c | 84 +++++++++++++++++++++++++++++++++++--
include/hw/ppc/xive2_regs.h | 3 ++
2 files changed, 83 insertions(+), 4 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 0993e792cc..34fc561c9c 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -19,6 +19,10 @@
#include "hw/ppc/xive2_regs.h"
#include "trace.h"
+static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
+ uint32_t end_idx, uint32_t end_data,
+ bool redistribute);
+
uint32_t xive2_router_get_config(Xive2Router *xrtr)
{
Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
@@ -597,6 +601,68 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
}
+static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
+ uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
+{
+ uint8_t nsr = tctx->regs[ring + TM_NSR];
+ uint8_t crowd = NVx_CROWD_LVL(nsr);
+ uint8_t group = NVx_GROUP_LVL(nsr);
+ uint8_t nvgc_blk;
+ uint8_t nvgc_idx;
+ uint8_t end_blk;
+ uint32_t end_idx;
+ uint8_t pipr = tctx->regs[ring + TM_PIPR];
+ Xive2Nvgc nvgc;
+ uint8_t prio_limit;
+ uint32_t cfg;
+
+ /* convert crowd/group to blk/idx */
+ if (group > 0) {
+ nvgc_idx = (nvp_idx & (0xffffffff << group)) |
+ ((1 << (group - 1)) - 1);
+ } else {
+ nvgc_idx = nvp_idx;
+ }
+
+ if (crowd > 0) {
+ crowd = (crowd == 3) ? 4 : crowd;
+ nvgc_blk = (nvp_blk & (0xffffffff << crowd)) |
+ ((1 << (crowd - 1)) - 1);
+ } else {
+ nvgc_blk = nvp_blk;
+ }
+
+ /* Use blk/idx to retrieve the NVGC */
+ if (xive2_router_get_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, &nvgc)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
+ crowd ? "NVC" : "NVG", nvgc_blk, nvgc_idx);
+ return;
+ }
+
+ /* retrieve the END blk/idx from the NVGC */
+ end_blk = xive_get_field32(NVGC2_W1_END_BLK, nvgc.w1);
+ end_idx = xive_get_field32(NVGC2_W1_END_IDX, nvgc.w1);
+
+ /* determine number of priorities being used */
+ cfg = xive2_router_get_config(xrtr);
+ if (cfg & XIVE2_EN_VP_GRP_PRIORITY) {
+ prio_limit = 1 << GETFIELD(NVGC2_W1_PSIZE, nvgc.w1);
+ } else {
+ prio_limit = 1 << GETFIELD(XIVE2_VP_INT_PRIO, cfg);
+ }
+
+ /* add priority offset to end index */
+ end_idx += pipr % prio_limit;
+
+ /* trigger the group END */
+ xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
+
+ /* clear interrupt indication for the context */
+ tctx->regs[ring + TM_NSR] = 0;
+ tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
+ xive_tctx_reset_signal(tctx, ring);
+}
+
static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size, uint8_t ring)
{
@@ -608,6 +674,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t cur_ring;
bool valid;
bool do_save;
+ uint8_t nsr;
xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
@@ -624,6 +691,12 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
}
+ /* Active group/crowd interrupts need to be redistributed */
+ nsr = tctx->regs[ring + TM_NSR];
+ if (xive_nsr_indicates_group_exception(ring, nsr)) {
+ xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
+ }
+
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
}
@@ -1352,7 +1425,8 @@ static bool xive2_router_end_es_notify(Xive2Router *xrtr, uint8_t end_blk,
* message has the same parameters than in the function below.
*/
static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
- uint32_t end_idx, uint32_t end_data)
+ uint32_t end_idx, uint32_t end_data,
+ bool redistribute)
{
Xive2End end;
uint8_t priority;
@@ -1380,7 +1454,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
- if (xive2_end_is_enqueue(&end)) {
+ if (!redistribute && xive2_end_is_enqueue(&end)) {
xive2_end_enqueue(&end, end_data);
/* Enqueuing event data modifies the EQ toggle and index */
xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
@@ -1560,7 +1634,8 @@ do_escalation:
xive2_router_end_notify(xrtr,
xive_get_field32(END2_W4_END_BLOCK, end.w4),
xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
- xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
+ xive_get_field32(END2_W5_ESC_END_DATA, end.w5),
+ false);
} /* end END adaptive escalation */
else {
@@ -1641,7 +1716,8 @@ void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
xive2_router_end_notify(xrtr,
xive_get_field64(EAS2_END_BLOCK, eas.w),
xive_get_field64(EAS2_END_INDEX, eas.w),
- xive_get_field64(EAS2_END_DATA, eas.w));
+ xive_get_field64(EAS2_END_DATA, eas.w),
+ false);
return;
}
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 2c535ec0d0..e222038143 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -224,6 +224,9 @@ typedef struct Xive2Nvgc {
#define NVGC2_W0_VALID PPC_BIT32(0)
#define NVGC2_W0_PGONEXT PPC_BITMASK32(26, 31)
uint32_t w1;
+#define NVGC2_W1_PSIZE PPC_BITMASK32(0, 1)
+#define NVGC2_W1_END_BLK PPC_BITMASK32(4, 7)
+#define NVGC2_W1_END_IDX PPC_BITMASK32(8, 31)
uint32_t w2;
uint32_t w3;
uint32_t w4;
--
2.47.1
* [PATCH 23/50] ppc/xive: Add more interrupt notification tracing
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (21 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:46 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 24/50] ppc/xive2: Improve pool regs variable name Nicholas Piggin
` (28 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
Add more tracing around notification, redistribution, and escalation.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/trace-events | 6 ++++++
hw/intc/xive.c | 3 +++
hw/intc/xive2.c | 13 ++++++++-----
3 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index f77f9733c9..9eca0925b6 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -279,6 +279,8 @@ xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_
xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
+xive_source_notify(uint32_t srcno) "Processing notification for queued IRQ 0x%x"
+xive_source_blocked(uint32_t srcno) "No action needed for IRQ 0x%x currently"
xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
@@ -289,6 +291,10 @@ xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x
# xive2.c
xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
+xive_redistribute(uint32_t index, uint8_t ring, uint8_t end_blk, uint32_t end_idx) "Redistribute from target=%d ring=0x%x NVP 0x%x/0x%x"
+xive_end_enqueue(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "Queue event for END 0x%x/0x%x data=0x%x"
+xive_escalate_end(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t esc_data) "Escalate from END 0x%x/0x%x to END 0x%x/0x%x data=0x%x"
+xive_escalate_esb(uint8_t end_blk, uint32_t end_idx, uint32_t lisn) "Escalate from END 0x%x/0x%x to LISN=0x%x"
# pnv_xive.c
pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 1a94642c62..7461dbecb8 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1276,6 +1276,7 @@ static uint64_t xive_source_esb_read(void *opaque, hwaddr addr, unsigned size)
/* Forward the source event notification for routing */
if (ret) {
+ trace_xive_source_notify(srcno);
xive_source_notify(xsrc, srcno);
}
break;
@@ -1371,6 +1372,8 @@ out:
/* Forward the source event notification for routing */
if (notify) {
xive_source_notify(xsrc, srcno);
+ } else {
+ trace_xive_source_blocked(srcno);
}
}
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 34fc561c9c..968b698677 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -616,6 +616,7 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t prio_limit;
uint32_t cfg;
+ trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
/* convert crowd/group to blk/idx */
if (group > 0) {
nvgc_idx = (nvp_idx & (0xffffffff << group)) |
@@ -1455,6 +1456,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
}
if (!redistribute && xive2_end_is_enqueue(&end)) {
+ trace_xive_end_enqueue(end_blk, end_idx, end_data);
xive2_end_enqueue(&end, end_data);
/* Enqueuing event data modifies the EQ toggle and index */
xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
@@ -1631,11 +1633,11 @@ do_escalation:
* Perform END Adaptive escalation processing
* The END trigger becomes an Escalation trigger
*/
- xive2_router_end_notify(xrtr,
- xive_get_field32(END2_W4_END_BLOCK, end.w4),
- xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
- xive_get_field32(END2_W5_ESC_END_DATA, end.w5),
- false);
+ uint8_t esc_blk = xive_get_field32(END2_W4_END_BLOCK, end.w4);
+ uint32_t esc_idx = xive_get_field32(END2_W4_ESC_END_INDEX, end.w4);
+ uint32_t esc_data = xive_get_field32(END2_W5_ESC_END_DATA, end.w5);
+ trace_xive_escalate_end(end_blk, end_idx, esc_blk, esc_idx, esc_data);
+ xive2_router_end_notify(xrtr, esc_blk, esc_idx, esc_data, false);
} /* end END adaptive escalation */
else {
@@ -1652,6 +1654,7 @@ do_escalation:
lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
+ trace_xive_escalate_esb(end_blk, end_idx, lisn);
xive2_notify(xrtr, lisn, true /* pq_checked */);
}
--
2.47.1
* [PATCH 24/50] ppc/xive2: Improve pool regs variable name
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (22 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 23/50] ppc/xive: Add more interrupt notification tracing Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:47 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Nicholas Piggin
` (27 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
Change pregs to pool_regs, for clarity.
[npiggin: split from larger patch]
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive2.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 968b698677..ec4b9320b4 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1044,13 +1044,12 @@ again:
/* PHYS updates also depend on POOL values */
if (ring == TM_QW3_HV_PHYS) {
- uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
+ uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
/* POOL values only matter if POOL ctx is valid */
- if (pregs[TM_WORD2] & 0x80) {
-
- uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
- uint8_t pool_lsmfb = pregs[TM_LSMFB];
+ if (pool_regs[TM_WORD2] & 0x80) {
+ uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
+ uint8_t pool_lsmfb = pool_regs[TM_LSMFB];
/*
* Determine highest priority interrupt and
@@ -1064,7 +1063,7 @@ again:
}
/* Values needed for group priority calculation */
- if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
+ if (pool_regs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
group_enabled = true;
lsmfb_min = pool_lsmfb;
if (lsmfb_min < pipr_min) {
--
2.47.1
* [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (23 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 24/50] ppc/xive2: Improve pool regs variable name Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:48 ` Mike Kowal
2025-05-16 0:20 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Nicholas Piggin
` (26 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
Booting AIX in a PowerVM partition requires the use of the "Acknowledge
O/S Interrupt to even O/S reporting line" special operation provided by
the IBM XIVE interrupt controller. This operation is invoked by writing
a byte (data is irrelevant) to offset 0xC10 of the Thread Interrupt
Management Area (TIMA). It can be used by software to notify the XIVE
logic that the interrupt was received.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive.c | 8 ++++---
hw/intc/xive2.c | 50 ++++++++++++++++++++++++++++++++++++++++++
include/hw/ppc/xive.h | 1 +
include/hw/ppc/xive2.h | 3 ++-
4 files changed, 58 insertions(+), 4 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 7461dbecb8..9ec1193dfc 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -80,7 +80,7 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
}
}
-static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
+uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
{
uint8_t *regs = &tctx->regs[ring];
uint8_t nsr = regs[TM_NSR];
@@ -340,14 +340,14 @@ static uint64_t xive_tm_vt_poll(XivePresenter *xptr, XiveTCTX *tctx,
static const uint8_t xive_tm_hw_view[] = {
3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-0 User */
- 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-1 OS */
+ 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 3, /* QW-1 OS */
0, 0, 3, 3, 0, 3, 3, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-2 POOL */
3, 3, 3, 3, 0, 3, 0, 2, 3, 0, 0, 3, 3, 3, 3, 0, /* QW-3 PHYS */
};
static const uint8_t xive_tm_hv_view[] = {
3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-0 User */
- 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-1 OS */
+ 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 3, /* QW-1 OS */
0, 0, 3, 3, 0, 3, 3, 0, 0, 3, 3, 3, 0, 0, 0, 0, /* QW-2 POOL */
3, 3, 3, 3, 0, 3, 0, 2, 3, 0, 0, 3, 0, 0, 0, 0, /* QW-3 PHYS */
};
@@ -718,6 +718,8 @@ static const XiveTmOp xive2_tm_operations[] = {
xive_tm_pull_phys_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
NULL },
+ { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
+ NULL },
};
static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index ec4b9320b4..68be138335 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1009,6 +1009,56 @@ static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
return 0;
}
+static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
+ uint8_t ring, uint8_t cl_ring)
+{
+ uint64_t rd;
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint32_t nvp_blk, nvp_idx, xive2_cfg;
+ Xive2Nvp nvp;
+ uint64_t phys_addr;
+ uint8_t OGen = 0;
+
+ xive2_tctx_get_nvp_indexes(tctx, cl_ring, &nvp_blk, &nvp_idx);
+
+ if (xive2_router_get_nvp(xrtr, (uint8_t)nvp_blk, nvp_idx, &nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+ if (!xive2_nvp_is_valid(&nvp)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+ nvp_blk, nvp_idx);
+ return;
+ }
+
+
+ rd = xive_tctx_accept(tctx, ring);
+
+ if (ring == TM_QW1_OS) {
+ OGen = tctx->regs[ring + TM_OGEN];
+ }
+ xive2_cfg = xive2_router_get_config(xrtr);
+ phys_addr = xive2_nvp_reporting_addr(&nvp);
+ uint8_t report_data[REPORT_LINE_GEN1_SIZE];
+ memset(report_data, 0xff, sizeof(report_data));
+ if ((OGen == 1) || (xive2_cfg & XIVE2_GEN1_TIMA_OS)) {
+ report_data[8] = (rd >> 8) & 0xff;
+ report_data[9] = rd & 0xff;
+ } else {
+ report_data[0] = (rd >> 8) & 0xff;
+ report_data[1] = rd & 0xff;
+ }
+ cpu_physical_memory_write(phys_addr, report_data, REPORT_LINE_GEN1_SIZE);
+}
+
+void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
+}
+
static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
{
uint8_t *regs = &tctx->regs[ring];
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 28f0f1b79a..46d05d74fb 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -561,6 +561,7 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level);
void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
+uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
/*
* KVM XIVE device helpers
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 760b94a962..ff02ce2549 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -142,5 +142,6 @@ void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
-
+void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
#endif /* PPC_XIVE2_H */
--
2.47.1
* [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (24 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:48 ` Mike Kowal
2025-05-16 0:20 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Nicholas Piggin
` (25 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
Add support for redistributing a presented group interrupt if it
is precluded as a result of changing the CPPR value. Without this,
group interrupts can be lost.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive2.c | 82 ++++++++++++++++++++++++++++++++++++-------------
1 file changed, 60 insertions(+), 22 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 68be138335..92dbbad8d4 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -601,20 +601,37 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
}
-static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
- uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
+static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
{
- uint8_t nsr = tctx->regs[ring + TM_NSR];
+ uint8_t *regs = &tctx->regs[ring];
+ uint8_t nsr = regs[TM_NSR];
+ uint8_t pipr = regs[TM_PIPR];
uint8_t crowd = NVx_CROWD_LVL(nsr);
uint8_t group = NVx_GROUP_LVL(nsr);
- uint8_t nvgc_blk;
- uint8_t nvgc_idx;
- uint8_t end_blk;
- uint32_t end_idx;
- uint8_t pipr = tctx->regs[ring + TM_PIPR];
+ uint8_t nvgc_blk, end_blk, nvp_blk;
+ uint32_t nvgc_idx, end_idx, nvp_idx;
Xive2Nvgc nvgc;
uint8_t prio_limit;
uint32_t cfg;
+ uint8_t alt_ring;
+ uint32_t target_ringw2;
+ uint32_t cam;
+ bool valid;
+ bool hw;
+
+ /* redistribution is only for group/crowd interrupts */
+ if (!xive_nsr_indicates_group_exception(ring, nsr)) {
+ return;
+ }
+
+ alt_ring = xive_nsr_exception_ring(ring, nsr);
+ target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
+ cam = be32_to_cpu(target_ringw2);
+
+ /* extract nvp block and index from targeted ring's cam */
+ xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
+
+ trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
- trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
/* convert crowd/group to blk/idx */
@@ -659,8 +676,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
/* clear interrupt indication for the context */
- tctx->regs[ring + TM_NSR] = 0;
- tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
+ regs[TM_NSR] = 0;
+ regs[TM_PIPR] = regs[TM_CPPR];
xive_tctx_reset_signal(tctx, ring);
}
@@ -695,7 +712,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
/* Active group/crowd interrupts need to be redistributed */
nsr = tctx->regs[ring + TM_NSR];
if (xive_nsr_indicates_group_exception(ring, nsr)) {
- xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
+ xive2_redistribute(xrtr, tctx, ring);
}
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
@@ -1059,6 +1076,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
}
+/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
{
uint8_t *regs = &tctx->regs[ring];
@@ -1069,10 +1087,11 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
uint32_t nvp_blk, nvp_idx;
Xive2Nvp nvp;
int rc;
+ uint8_t nsr = regs[TM_NSR];
trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
regs[TM_IPB], regs[TM_PIPR],
- cppr, regs[TM_NSR]);
+ cppr, nsr);
if (cppr > XIVE_PRIORITY_MAX) {
cppr = 0xff;
@@ -1081,6 +1100,35 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
old_cppr = regs[TM_CPPR];
regs[TM_CPPR] = cppr;
+ /* Handle increased CPPR priority (lower value) */
+ if (cppr < old_cppr) {
+ if (cppr <= regs[TM_PIPR]) {
+ /* CPPR lowered below PIPR, must un-present interrupt */
+ if (xive_nsr_indicates_exception(ring, nsr)) {
+ if (xive_nsr_indicates_group_exception(ring, nsr)) {
+ /* redistribute precluded active grp interrupt */
+ xive2_redistribute(xrtr, tctx, ring);
+ return;
+ }
+ }
+
+ /* interrupt is VP directed, pending in IPB */
+ regs[TM_PIPR] = cppr;
+ xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
+ return;
+ } else {
+ /* CPPR was lowered, but still above PIPR. No action needed. */
+ return;
+ }
+ }
+
+ /* CPPR didn't change, nothing needs to be done */
+ if (cppr == old_cppr) {
+ return;
+ }
+
+ /* CPPR priority decreased (higher value) */
+
/*
* Recompute the PIPR based on local pending interrupts. It will
* be adjusted below if needed in case of pending group interrupts.
@@ -1129,16 +1177,6 @@ again:
return;
}
- if (cppr < old_cppr) {
- /*
- * FIXME: check if there's a group interrupt being presented
- * and if the new cppr prevents it. If so, then the group
- * interrupt needs to be re-added to the backlog and
- * re-triggered (see re-trigger END info in the NVGC
- * structure)
- */
- }
-
if (group_enabled &&
lsmfb_min < cppr &&
lsmfb_min < pipr_min) {
--
2.47.1
* [PATCH 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (25 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:51 ` Mike Kowal
2025-05-12 3:10 ` [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present Nicholas Piggin
` (24 subsequent siblings)
51 siblings, 1 reply; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
From: Glenn Miles <milesg@linux.ibm.com>
When disabling (pulling) an XIVE interrupt context, we need to
redistribute any active group interrupts to other threads that
can handle the interrupt, if possible. This support had already
been added for the OS context but not yet for the pool and
physical contexts.
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
---
hw/intc/xive.c | 12 ++---
hw/intc/xive2.c | 94 ++++++++++++++++++++++++++-----------
include/hw/ppc/xive2.h | 4 ++
include/hw/ppc/xive2_regs.h | 4 +-
4 files changed, 79 insertions(+), 35 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 9ec1193dfc..ad30476c17 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -693,7 +693,7 @@ static const XiveTmOp xive2_tm_operations[] = {
/* MMIOs above 2K : special operations with side effects */
{ XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
- xive_tm_ack_os_reg },
+ xive_tm_ack_os_reg },
{ XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
NULL },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, NULL,
@@ -705,17 +705,17 @@ static const XiveTmOp xive2_tm_operations[] = {
{ XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
xive_tm_ack_hv_reg },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, NULL,
- xive_tm_pull_pool_ctx },
+ xive2_tm_pull_pool_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
- xive_tm_pull_pool_ctx },
+ xive2_tm_pull_pool_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
- xive_tm_pull_pool_ctx },
+ xive2_tm_pull_pool_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, xive2_tm_pull_os_ctx_ol,
NULL },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, NULL,
- xive_tm_pull_phys_ctx },
+ xive2_tm_pull_phys_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
- xive_tm_pull_phys_ctx },
+ xive2_tm_pull_phys_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
NULL },
{ XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 92dbbad8d4..ac94193464 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -23,6 +23,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
uint32_t end_idx, uint32_t end_data,
bool redistribute);
+static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
+ uint8_t *nvp_blk, uint32_t *nvp_idx);
+
uint32_t xive2_router_get_config(Xive2Router *xrtr)
{
Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
@@ -604,8 +607,10 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
{
uint8_t *regs = &tctx->regs[ring];
- uint8_t nsr = regs[TM_NSR];
- uint8_t pipr = regs[TM_PIPR];
+ uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
+ regs;
+ uint8_t nsr = alt_regs[TM_NSR];
+ uint8_t pipr = alt_regs[TM_PIPR];
uint8_t crowd = NVx_CROWD_LVL(nsr);
uint8_t group = NVx_GROUP_LVL(nsr);
uint8_t nvgc_blk, end_blk, nvp_blk;
@@ -614,10 +619,6 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
uint8_t prio_limit;
uint32_t cfg;
uint8_t alt_ring;
- uint32_t target_ringw2;
- uint32_t cam;
- bool valid;
- bool hw;
/* redistribution is only for group/crowd interrupts */
if (!xive_nsr_indicates_group_exception(ring, nsr)) {
@@ -625,11 +626,9 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
}
alt_ring = xive_nsr_exception_ring(ring, nsr);
- target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
- cam = be32_to_cpu(target_ringw2);
- /* extract nvp block and index from targeted ring's cam */
- xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
+ /* Don't check return code since ring is expected to be invalidated */
+ xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
@@ -676,11 +675,23 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
/* clear interrupt indication for the context */
- regs[TM_NSR] = 0;
- regs[TM_PIPR] = regs[TM_CPPR];
+ alt_regs[TM_NSR] = 0;
+ alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
xive_tctx_reset_signal(tctx, ring);
}
+static uint8_t xive2_hv_irq_ring(uint8_t nsr)
+{
+ switch (nsr >> 6) {
+ case TM_QW3_NSR_HE_POOL:
+ return TM_QW2_HV_POOL;
+ case TM_QW3_NSR_HE_PHYS:
+ return TM_QW3_HV_PHYS;
+ default:
+ return -1;
+ }
+}
+
static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size, uint8_t ring)
{
@@ -696,7 +707,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
- if (!valid) {
+ if (xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVP %x/%x !?\n",
nvp_blk, nvp_idx);
}
@@ -706,13 +717,25 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
cur_ring += XIVE_TM_RING_SIZE) {
uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
+ bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
+ uint8_t alt_ring;
memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
- }
- /* Active group/crowd interrupts need to be redistributed */
- nsr = tctx->regs[ring + TM_NSR];
- if (xive_nsr_indicates_group_exception(ring, nsr)) {
- xive2_redistribute(xrtr, tctx, ring);
+ /* Skip the rest for USER or invalid contexts */
+ if ((cur_ring == TM_QW0_USER) || !is_valid) {
+ continue;
+ }
+
+ /* Active group/crowd interrupts need to be redistributed */
+ alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
+ nsr = tctx->regs[alt_ring + TM_NSR];
+ if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
+ /* For HV rings, only redistribute if cur_ring matches NSR */
+ if ((cur_ring == TM_QW1_OS) ||
+ (cur_ring == xive2_hv_irq_ring(nsr))) {
+ xive2_redistribute(xrtr, tctx, cur_ring);
+ }
+ }
}
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
@@ -736,6 +759,18 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW1_OS);
}
+uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, unsigned size)
+{
+ return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW2_HV_POOL);
+}
+
+uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, unsigned size)
+{
+ return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW3_HV_PHYS);
+}
+
#define REPORT_LINE_GEN1_SIZE 16
static void xive2_tm_report_line_gen1(XiveTCTX *tctx, uint8_t *data,
@@ -993,37 +1028,40 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
}
+/* returns -1 if ring is invalid, but still populates block and index */
static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
- uint32_t *nvp_blk, uint32_t *nvp_idx)
+ uint8_t *nvp_blk, uint32_t *nvp_idx)
{
- uint32_t w2, cam;
+ uint32_t w2;
+ uint32_t cam = 0;
+ int rc = 0;
w2 = xive_tctx_word2(&tctx->regs[ring]);
switch (ring) {
case TM_QW1_OS:
if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
- return -1;
+ rc = -1;
}
cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
break;
case TM_QW2_HV_POOL:
if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
- return -1;
+ rc = -1;
}
cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
break;
case TM_QW3_HV_PHYS:
if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
- return -1;
+ rc = -1;
}
cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
break;
default:
- return -1;
+ rc = -1;
}
*nvp_blk = xive2_nvp_blk(cam);
*nvp_idx = xive2_nvp_idx(cam);
- return 0;
+ return rc;
}
static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
@@ -1031,7 +1069,8 @@ static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
{
uint64_t rd;
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint32_t nvp_blk, nvp_idx, xive2_cfg;
+ uint32_t nvp_idx, xive2_cfg;
+ uint8_t nvp_blk;
Xive2Nvp nvp;
uint64_t phys_addr;
uint8_t OGen = 0;
@@ -1084,7 +1123,8 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
uint8_t old_cppr, backlog_prio, first_group, group_level;
uint8_t pipr_min, lsmfb_min, ring_min;
bool group_enabled;
- uint32_t nvp_blk, nvp_idx;
+ uint8_t nvp_blk;
+ uint32_t nvp_idx;
Xive2Nvp nvp;
int rc;
uint8_t nsr = regs[TM_NSR];
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index ff02ce2549..a91b99057c 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -140,6 +140,10 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
+uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, unsigned size);
+uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index e222038143..f82054661b 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -209,9 +209,9 @@ static inline uint32_t xive2_nvp_idx(uint32_t cam_line)
return cam_line & ((1 << XIVE2_NVP_SHIFT) - 1);
}
-static inline uint32_t xive2_nvp_blk(uint32_t cam_line)
+static inline uint8_t xive2_nvp_blk(uint32_t cam_line)
{
- return (cam_line >> XIVE2_NVP_SHIFT) & 0xf;
+ return (uint8_t)((cam_line >> XIVE2_NVP_SHIFT) & 0xf);
}
void xive2_nvp_pic_print_info(Xive2Nvp *nvp, uint32_t nvp_idx, GString *buf);
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
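The xive2_hv_irq_ring() helper added in the patch above can be exercised in isolation. A minimal standalone sketch follows; the TIMA ring offsets (0x10 bytes apart) and the NSR history-entry encoding (POOL=1, PHYS=2 in the top two bits of NSR) are assumptions based on QEMU's usual TIMA layout, not taken from this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values mirroring QEMU's TIMA layout: rings are 0x10 bytes
 * apart and the NSR "history entry" field sits in the top two bits. */
#define TM_QW2_HV_POOL      0x20
#define TM_QW3_HV_PHYS      0x30
#define TM_QW3_NSR_HE_POOL  1
#define TM_QW3_NSR_HE_PHYS  2

/* Decode which HV ring an NSR value reports as presenting. Returns
 * (uint8_t)-1 (0xff) for NONE/LSI, which never equals a ring offset,
 * so a "cur_ring == xive2_hv_irq_ring(nsr)" comparison simply fails. */
static uint8_t xive2_hv_irq_ring(uint8_t nsr)
{
    switch (nsr >> 6) {
    case TM_QW3_NSR_HE_POOL:
        return TM_QW2_HV_POOL;
    case TM_QW3_NSR_HE_PHYS:
        return TM_QW3_HV_PHYS;
    default:
        return (uint8_t)-1;
    }
}
```

This is why the pull path in the patch can safely compare the current ring against the decoded ring without a separate validity check.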
* [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (26 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:54 ` Mike Kowal
2025-05-15 15:53 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Nicholas Piggin
` (23 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Have the match_nvt method only perform a TCTX match and not present
the interrupt; presentation is now done by the caller. This has no
functional change, but allows for more complicated presentation logic
after matching.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/pnv_xive.c | 16 +++++++-------
hw/intc/pnv_xive2.c | 16 +++++++-------
hw/intc/spapr_xive.c | 18 +++++++--------
hw/intc/xive.c | 51 +++++++++++++++----------------------------
hw/intc/xive2.c | 31 +++++++++++++-------------
hw/ppc/pnv.c | 48 ++++++++++++++--------------------------
hw/ppc/spapr.c | 21 +++++++-----------
include/hw/ppc/xive.h | 27 +++++++++++++----------
8 files changed, 97 insertions(+), 131 deletions(-)
diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
index ccbe95a58e..cdde8d0814 100644
--- a/hw/intc/pnv_xive.c
+++ b/hw/intc/pnv_xive.c
@@ -470,14 +470,13 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
return xive->regs[reg >> 3] & PPC_BIT(bit);
}
-static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, XiveTCTXMatch *match)
+static bool pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive *xive = PNV_XIVE(xptr);
PnvChip *chip = xive->chip;
- int count = 0;
int i, j;
for (i = 0; i < chip->nr_cores; i++) {
@@ -510,17 +509,18 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
"thread context NVT %x/%x\n",
nvt_blk, nvt_idx);
- return -1;
+ match->count++;
+ continue;
}
match->ring = ring;
match->tctx = tctx;
- count++;
+ match->count++;
}
}
}
- return count;
+ return !!match->count;
}
static uint32_t pnv_xive_presenter_get_config(XivePresenter *xptr)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 96b8851b7e..59b95e5219 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -640,14 +640,13 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
return xive->tctxt_regs[reg >> 3] & PPC_BIT(bit);
}
-static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, XiveTCTXMatch *match)
+static bool pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match)
{
PnvXive2 *xive = PNV_XIVE2(xptr);
PnvChip *chip = xive->chip;
- int count = 0;
int i, j;
bool gen1_tima_os =
xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
@@ -692,7 +691,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
"thread context NVT %x/%x\n",
nvt_blk, nvt_idx);
/* Should set a FIR if we ever model it */
- return -1;
+ match->count++;
+ continue;
}
/*
* For a group notification, we need to know if the
@@ -717,13 +717,13 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
}
}
}
- count++;
+ match->count++;
}
}
}
}
- return count;
+ return !!match->count;
}
static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
index ce734b03ab..a7475d2f21 100644
--- a/hw/intc/spapr_xive.c
+++ b/hw/intc/spapr_xive.c
@@ -428,14 +428,13 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
g_assert_not_reached();
}
-static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore,
- uint8_t priority,
- uint32_t logic_serv, XiveTCTXMatch *match)
+static bool spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore,
+ uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match)
{
CPUState *cs;
- int count = 0;
CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
@@ -463,16 +462,17 @@ static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
if (match->tctx) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a thread "
"context NVT %x/%x\n", nvt_blk, nvt_idx);
- return -1;
+ match->count++;
+ continue;
}
match->ring = ring;
match->tctx = tctx;
- count++;
+ match->count++;
}
}
- return count;
+ return !!match->count;
}
static uint32_t spapr_xive_presenter_get_config(XivePresenter *xptr)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index ad30476c17..27b5a21371 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1762,8 +1762,8 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
return 1U << (first_zero + 1);
}
-static uint8_t xive_get_group_level(bool crowd, bool ignore,
- uint32_t nvp_blk, uint32_t nvp_index)
+uint8_t xive_get_group_level(bool crowd, bool ignore,
+ uint32_t nvp_blk, uint32_t nvp_index)
{
int first_zero;
uint8_t level;
@@ -1881,15 +1881,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
* This is our simple Xive Presenter Engine model. It is merged in the
* Router as it does not require an extra object.
*/
-bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
+bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, bool *precluded)
+ uint32_t logic_serv, XiveTCTXMatch *match)
{
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
- XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
- uint8_t group_level;
- int count;
+
+ memset(match, 0, sizeof(*match));
/*
* Ask the machine to scan the interrupt controllers for a match.
@@ -1914,22 +1913,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
* a new command to the presenters (the equivalent of the "assign"
* power bus command in the documented full notify sequence.
*/
- count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
- priority, logic_serv, &match);
- if (count < 0) {
- return false;
- }
-
- /* handle CPU exception delivery */
- if (count) {
- group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
- trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
- xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
- } else {
- *precluded = match.precluded;
- }
-
- return !!count;
+ return xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
+ priority, logic_serv, match);
}
/*
@@ -1966,7 +1951,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
uint8_t nvt_blk;
uint32_t nvt_idx;
XiveNVT nvt;
- bool found, precluded;
+ XiveTCTXMatch match;
uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
@@ -2046,16 +2031,16 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
return;
}
- found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
- false /* crowd */,
- xive_get_field32(END_W7_F0_IGNORE, end.w7),
- priority,
- xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
- &precluded);
- /* we don't support VP-group notification on P9, so precluded is not used */
/* TODO: Auto EOI. */
-
- if (found) {
+ /* we don't support VP-group notification on P9, so precluded is not used */
+ if (xive_presenter_match(xrtr->xfb, format, nvt_blk, nvt_idx,
+ false /* crowd */,
+ xive_get_field32(END_W7_F0_IGNORE, end.w7),
+ priority,
+ xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
+ &match)) {
+ trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
+ xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
return;
}
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index ac94193464..6e136ad2e2 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1559,7 +1559,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
Xive2End end;
uint8_t priority;
uint8_t format;
- bool found, precluded;
+ XiveTCTXMatch match;
+ bool crowd, cam_ignore;
uint8_t nvx_blk;
uint32_t nvx_idx;
@@ -1629,16 +1630,19 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
*/
nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
-
- found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx,
- xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
- priority,
- xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
- &precluded);
+ crowd = xive2_end_is_crowd(&end);
+ cam_ignore = xive2_end_is_ignore(&end);
/* TODO: Auto EOI. */
-
- if (found) {
+ if (xive_presenter_match(xrtr->xfb, format, nvx_blk, nvx_idx,
+ crowd, cam_ignore, priority,
+ xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
+ &match)) {
+ uint8_t group_level;
+
+ group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
+ trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
+ xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
return;
}
@@ -1656,7 +1660,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
return;
}
- if (!xive2_end_is_ignore(&end)) {
+ if (!cam_ignore) {
uint8_t ipb;
Xive2Nvp nvp;
@@ -1685,9 +1689,6 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
} else {
Xive2Nvgc nvgc;
uint32_t backlog;
- bool crowd;
-
- crowd = xive2_end_is_crowd(&end);
/*
* For groups and crowds, the per-priority backlog
@@ -1719,9 +1720,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
if (backlog == 1) {
XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx,
- xive2_end_is_crowd(&end),
- xive2_end_is_ignore(&end),
- priority);
+ crowd, cam_ignore, priority);
if (!xive2_end_is_precluded_escalation(&end)) {
/*
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index deb29a6389..0c17846b38 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2619,62 +2619,46 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
}
}
-static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv,
- XiveTCTXMatch *match)
+static bool pnv_match_nvt(XiveFabric *xfb, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv,
+ XiveTCTXMatch *match)
{
PnvMachineState *pnv = PNV_MACHINE(xfb);
- int total_count = 0;
int i;
for (i = 0; i < pnv->num_chips; i++) {
Pnv9Chip *chip9 = PNV9_CHIP(pnv->chips[i]);
XivePresenter *xptr = XIVE_PRESENTER(&chip9->xive);
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
- int count;
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
- cam_ignore, priority, logic_serv, match);
-
- if (count < 0) {
- return count;
- }
-
- total_count += count;
+ xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
}
- return total_count;
+ return !!match->count;
}
-static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv,
- XiveTCTXMatch *match)
+static bool pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv,
+ XiveTCTXMatch *match)
{
PnvMachineState *pnv = PNV_MACHINE(xfb);
- int total_count = 0;
int i;
for (i = 0; i < pnv->num_chips; i++) {
Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
- int count;
-
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
- cam_ignore, priority, logic_serv, match);
-
- if (count < 0) {
- return count;
- }
- total_count += count;
+ xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+ cam_ignore, priority, logic_serv, match);
}
- return total_count;
+ return !!match->count;
}
static int pnv10_xive_broadcast(XiveFabric *xfb,
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index b0a0f8c689..93574d2a63 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4468,21 +4468,14 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
/*
* This is a XIVE only operation
*/
-static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, XiveTCTXMatch *match)
+static bool spapr_match_nvt(XiveFabric *xfb, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match)
{
SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
XivePresenter *xptr = XIVE_PRESENTER(spapr->active_intc);
XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
- int count;
-
- count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
- priority, logic_serv, match);
- if (count < 0) {
- return count;
- }
/*
* When we implement the save and restore of the thread interrupt
@@ -4493,12 +4486,14 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
* Until this is done, the sPAPR machine should find at least one
* matching context always.
*/
- if (count == 0) {
+ if (!xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
+ priority, logic_serv, match)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVT %x/%x is not dispatched\n",
nvt_blk, nvt_idx);
+ return false;
}
- return count;
+ return true;
}
int spapr_get_vcpu_id(PowerPCCPU *cpu)
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 46d05d74fb..8152a9df3d 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -425,6 +425,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
typedef struct XiveTCTXMatch {
XiveTCTX *tctx;
+ int count;
uint8_t ring;
bool precluded;
} XiveTCTXMatch;
@@ -440,10 +441,10 @@ DECLARE_CLASS_CHECKERS(XivePresenterClass, XIVE_PRESENTER,
struct XivePresenterClass {
InterfaceClass parent;
- int (*match_nvt)(XivePresenter *xptr, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, XiveTCTXMatch *match);
+ bool (*match_nvt)(XivePresenter *xptr, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match);
bool (*in_kernel)(const XivePresenter *xptr);
uint32_t (*get_config)(XivePresenter *xptr);
int (*broadcast)(XivePresenter *xptr,
@@ -455,12 +456,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
uint8_t format,
uint8_t nvt_blk, uint32_t nvt_idx,
bool cam_ignore, uint32_t logic_serv);
-bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, bool *precluded);
+bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match);
uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
+uint8_t xive_get_group_level(bool crowd, bool ignore,
+ uint32_t nvp_blk, uint32_t nvp_index);
/*
* XIVE Fabric (Interface between Interrupt Controller and Machine)
@@ -475,10 +478,10 @@ DECLARE_CLASS_CHECKERS(XiveFabricClass, XIVE_FABRIC,
struct XiveFabricClass {
InterfaceClass parent;
- int (*match_nvt)(XiveFabric *xfb, uint8_t format,
- uint8_t nvt_blk, uint32_t nvt_idx,
- bool crowd, bool cam_ignore, uint8_t priority,
- uint32_t logic_serv, XiveTCTXMatch *match);
+ bool (*match_nvt)(XiveFabric *xfb, uint8_t format,
+ uint8_t nvt_blk, uint32_t nvt_idx,
+ bool crowd, bool cam_ignore, uint8_t priority,
+ uint32_t logic_serv, XiveTCTXMatch *match);
int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
bool crowd, bool cam_ignore, uint8_t priority);
};
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
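The shape of the refactor — match_nvt fills an XiveTCTXMatch and returns whether anything matched, while presentation moves to the caller — can be sketched with toy types. Everything below apart from the XiveTCTXMatch field names is illustrative, not QEMU's actual API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for a thread context */
typedef struct XiveTCTX { int id; } XiveTCTX;

/* Match result: the winning context plus a running count, as in the patch */
typedef struct XiveTCTXMatch {
    XiveTCTX *tctx;
    int count;
    uint8_t ring;
} XiveTCTXMatch;

/* Toy match: record the first winner, count duplicates, return whether
 * anything matched. The caller presents (or logs) based on the result. */
static bool match_nvt(XiveTCTX *tctxs, int n, int target_id,
                      uint8_t ring, XiveTCTXMatch *match)
{
    for (int i = 0; i < n; i++) {
        if (tctxs[i].id == target_id) {
            if (match->tctx) {
                /* duplicate match: count it, but keep the first winner */
                match->count++;
                continue;
            }
            match->tctx = &tctxs[i];
            match->ring = ring;
            match->count++;
        }
    }
    return match->count != 0;
}

/* Helper so the behavior is easy to assert on */
static int demo_match_count(int target_id)
{
    XiveTCTX t[3] = { {1}, {2}, {2} };
    XiveTCTXMatch m = {0};
    match_nvt(t, 3, target_id, 0x10, &m);
    return m.count;
}
```

Note how a duplicate match no longer aborts with -1 as the old code did; it is counted and the scan continues, which matches the patch's "count it and continue" handling.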
* [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (27 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 19:55 ` Mike Kowal
2025-05-15 15:54 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Nicholas Piggin
` (22 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
A group interrupt that gets preempted by the delivery of a higher
priority interrupt must be redistributed, otherwise it would be lost.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 6e136ad2e2..cae4092198 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1638,11 +1638,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
crowd, cam_ignore, priority,
xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
&match)) {
+ XiveTCTX *tctx = match.tctx;
+ uint8_t ring = match.ring;
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *aregs = &tctx->regs[alt_ring];
+ uint8_t nsr = aregs[TM_NSR];
uint8_t group_level;
+ if (priority < aregs[TM_PIPR] &&
+ xive_nsr_indicates_group_exception(alt_ring, nsr)) {
+ xive2_redistribute(xrtr, tctx, alt_ring);
+ }
+
group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
- trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
- xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
+ trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
+ xive_tctx_pipr_update(tctx, ring, priority, group_level);
return;
}
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
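The rule this patch adds can be reduced to a small predicate: before presenting a newcomer, an already-signalled group interrupt must be redistributed when the newcomer has a better (numerically lower) priority than the current PIPR. A minimal sketch, where nsr_is_group stands in for xive_nsr_indicates_group_exception():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* If a group interrupt is currently signalled in NSR and the new
 * interrupt would win (numerically lower priority than PIPR), the
 * group interrupt must first be redistributed or it would be
 * overwritten in NSR/PIPR and lost. */
static bool must_redistribute(uint8_t new_priority, uint8_t pipr,
                              bool nsr_is_group)
{
    return new_priority < pipr && nsr_is_group;
}
```

An equal-priority newcomer does not preempt, so no redistribution is needed in that case.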
* [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (28 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 20:10 ` Mike Kowal
2025-05-15 15:55 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Nicholas Piggin
` (21 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
xive_tctx_pipr_update() is used for multiple things. In an effort
to make things simpler and less overloaded, split out the function
that is used to present a new interrupt to the tctx.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 8 +++++++-
hw/intc/xive2.c | 2 +-
include/hw/ppc/xive.h | 2 ++
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 27b5a21371..bf4c0634ca 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -225,6 +225,12 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
xive_tctx_notify(tctx, ring, group_level);
}
+void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level)
+{
+ xive_tctx_pipr_update(tctx, ring, priority, group_level);
+}
+
/*
* XIVE Thread Interrupt Management Area (TIMA)
*/
@@ -2040,7 +2046,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
&match)) {
trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
- xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
+ xive_tctx_pipr_present(match.tctx, match.ring, priority, 0);
return;
}
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index cae4092198..f91109b84a 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1652,7 +1652,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
- xive_tctx_pipr_update(tctx, ring, priority, group_level);
+ xive_tctx_pipr_present(tctx, ring, priority, group_level);
return;
}
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 8152a9df3d..0d6b11e818 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -562,6 +562,8 @@ void xive_tctx_reset(XiveTCTX *tctx);
void xive_tctx_destroy(XiveTCTX *tctx);
void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level);
+void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level);
void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
* [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (29 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:21 ` Mike Kowal
2025-05-15 15:55 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 32/50] ppc/xive: Split xive recompute from IPB function Nicholas Piggin
` (20 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
xive_tctx_pipr_present(), as implemented with xive_tctx_pipr_update(),
causes a VP-directed (group==0) interrupt to be presented in PIPR and
NSR even when it has a lower priority than the currently presented
group interrupt.
This must not happen. The IPB bit should record the low priority VP
interrupt, but PIPR and NSR must not present it.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index bf4c0634ca..25f6c69c44 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -228,7 +228,23 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level)
{
- xive_tctx_pipr_update(tctx, ring, priority, group_level);
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *aregs = &tctx->regs[alt_ring];
+ uint8_t *regs = &tctx->regs[ring];
+ uint8_t pipr = xive_priority_to_pipr(priority);
+
+ if (group_level == 0) {
+ regs[TM_IPB] |= xive_priority_to_ipb(priority);
+ if (pipr >= aregs[TM_PIPR]) {
+ /* VP interrupts can come here with lower priority than PIPR */
+ return;
+ }
+ }
+ g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
+ g_assert(pipr < aregs[TM_PIPR]);
+ aregs[TM_PIPR] = pipr;
+ xive_tctx_notify(tctx, ring, group_level);
}
/*
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
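The fixed presentation rule can be sketched standalone. The encoding assumptions below (IPB bit for priority P is 0x80 >> P, PIPR carries the priority value directly, lower is more favoured) follow XIVE convention but are assumptions of this sketch, as are the toy type and helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Toy signaling registers; 'signalled' stands in for raising NSR */
typedef struct {
    uint8_t ipb;
    uint8_t pipr;
    int signalled;
} ToyRegs;

/* A VP-directed (group_level == 0) interrupt is always recorded in
 * IPB, but PIPR/NSR are only updated if it beats the currently
 * presented priority. */
static void pipr_present(ToyRegs *r, uint8_t priority, uint8_t group_level)
{
    if (group_level == 0) {
        r->ipb |= 0x80 >> priority;   /* always remembered in IPB */
        if (priority >= r->pipr) {
            return;                   /* lower priority: don't present */
        }
    }
    r->pipr = priority;
    r->signalled = 1;
}

/* Present two VP interrupts in order and report the resulting PIPR */
static uint8_t demo_pipr_after(uint8_t first, uint8_t second)
{
    ToyRegs r = { .ipb = 0, .pipr = 0xff, .signalled = 0 };
    pipr_present(&r, first, 0);
    pipr_present(&r, second, 0);
    return r.pipr;
}
```

Either presentation order leaves the more favoured priority in PIPR, which is the property the buggy code violated.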
* [PATCH 32/50] ppc/xive: Split xive recompute from IPB function
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (30 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 20:42 ` Mike Kowal
2025-05-15 15:56 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 33/50] ppc/xive: tctx signaling registers rework Nicholas Piggin
` (19 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Split xive_tctx_pipr_update() further by extracting a new function
that re-computes the PIPR from the IPB. This is generally only
used with XIVE1, because group interrupts require more logic.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 25f6c69c44..5ff1b8f024 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -225,6 +225,20 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
xive_tctx_notify(tctx, ring, group_level);
}
+static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
+{
+ /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+ uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+ uint8_t *aregs = &tctx->regs[alt_ring];
+ uint8_t *regs = &tctx->regs[ring];
+
+ /* Does not support a presented group interrupt */
+ g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
+
+ aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+ xive_tctx_notify(tctx, ring, 0);
+}
+
void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level)
{
@@ -517,7 +531,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size)
{
- xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
+ uint8_t ring = TM_QW1_OS;
+ uint8_t *regs = &tctx->regs[ring];
+
+ /* XXX: how should this work exactly? */
+ regs[TM_IPB] |= xive_priority_to_ipb(value & 0xff);
+ xive_tctx_pipr_recompute_from_ipb(tctx, ring);
}
static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
@@ -601,14 +620,14 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
}
/*
- * Always call xive_tctx_pipr_update(). Even if there were no
+ * Always call xive_tctx_recompute_from_ipb(). Even if there were no
* escalation triggered, there could be a pending interrupt which
* was saved when the context was pulled and that we need to take
* into account by recalculating the PIPR (which is not
* saved/restored).
* It will also raise the External interrupt signal if needed.
*/
- xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
+ xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
}
/*
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
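Recomputing PIPR from IPB amounts to finding the most favoured (numerically lowest) priority whose IPB bit is set, i.e. the index of the most significant set bit, with 0xFF meaning nothing pending. A sketch in the spirit of QEMU's xive_ipb_to_pipr(), assuming the usual "priority P maps to IPB bit 0x80 >> P" encoding and using the GCC/Clang __builtin_clz builtin:

```c
#include <assert.h>
#include <stdint.h>

/* PIPR = index of the most significant set IPB bit; 0xFF = no
 * interrupt pending. Shifting into the top byte of a uint32_t lets
 * count-leading-zeros return the bit index directly. */
static uint8_t ipb_to_pipr(uint8_t ipb)
{
    if (!ipb) {
        return 0xff;
    }
    return __builtin_clz((uint32_t)ipb << 24);
}
```

Note this recomputation cannot be used while a group interrupt is presented, since a group priority is not recorded in IPB; that is exactly why the new function asserts no group exception is pending.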
* [PATCH 33/50] ppc/xive: tctx signaling registers rework
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (31 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 32/50] ppc/xive: Split xive recompute from IPB function Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-14 20:49 ` Mike Kowal
2025-05-15 15:58 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Nicholas Piggin
` (18 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
The tctx "signaling" registers (PIPR, CPPR, NSR) raise an interrupt on
the target CPU thread. The POOL and PHYS rings both raise hypervisor
interrupts, so they both share one set of signaling registers in the
PHYS ring. The PHYS NSR register contains a field that indicates which
ring has presented the interrupt being signaled to the CPU.
This sharing results in all the "alt_regs" scattered throughout the code.
alt_regs is not very descriptive, and worse, the name is used for
conversions in both directions, i.e., to find the presenting ring from
the signaling ring, and the signaling ring from the presenting ring.
Instead of alt_regs, use the names sig_regs and sig_ring for the
signaling registers, and regs and ring for the presenting ring being
worked on. Add a helper function to get the sig_regs, and add some
asserts to ensure the POOL regs are never used to signal interrupts.
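The presenting-to-signaling ring mapping that the new helper encodes can
be illustrated standalone (the TM_QW* byte offsets follow the TIMA layout;
this is a sketch for illustration, not the QEMU code itself):

```c
#include <stdint.h>

/* TIMA ring byte offsets (per the XIVE TIMA layout) */
#define TM_QW1_OS       0x10
#define TM_QW2_HV_POOL  0x20
#define TM_QW3_HV_PHYS  0x30

/*
 * POOL interrupts signal the CPU through the PHYS ring's NSR/CPPR/PIPR;
 * the OS and PHYS rings signal through their own registers.
 */
static uint8_t sig_ring_for(uint8_t ring)
{
    return (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
}
```

The real helper in the patch additionally asserts that the POOL ring's
signaling registers stay zero, which this sketch omits.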
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 112 ++++++++++++++++++++++--------------------
hw/intc/xive2.c | 94 ++++++++++++++++-------------------
include/hw/ppc/xive.h | 26 +++++++++-
3 files changed, 126 insertions(+), 106 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 5ff1b8f024..4e0c71d684 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -80,69 +80,77 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
}
}
-uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
+/*
+ * An interrupt is accepted on the signaling ring; for the PHYS ring, the
+ * NSR directs it to the PHYS or POOL presenting ring.
+ */
+uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
{
- uint8_t *regs = &tctx->regs[ring];
- uint8_t nsr = regs[TM_NSR];
+ uint8_t *sig_regs = &tctx->regs[sig_ring];
+ uint8_t nsr = sig_regs[TM_NSR];
- qemu_irq_lower(xive_tctx_output(tctx, ring));
+ g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
+
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+ qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
- if (xive_nsr_indicates_exception(ring, nsr)) {
- uint8_t cppr = regs[TM_PIPR];
- uint8_t alt_ring;
- uint8_t *alt_regs;
+ if (xive_nsr_indicates_exception(sig_ring, nsr)) {
+ uint8_t cppr = sig_regs[TM_PIPR];
+ uint8_t ring;
+ uint8_t *regs;
- alt_ring = xive_nsr_exception_ring(ring, nsr);
- alt_regs = &tctx->regs[alt_ring];
+ ring = xive_nsr_exception_ring(sig_ring, nsr);
+ regs = &tctx->regs[ring];
- regs[TM_CPPR] = cppr;
+ sig_regs[TM_CPPR] = cppr;
/*
* If the interrupt was for a specific VP, reset the pending
* buffer bit, otherwise clear the logical server indicator
*/
- if (!xive_nsr_indicates_group_exception(ring, nsr)) {
- alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+ if (!xive_nsr_indicates_group_exception(sig_ring, nsr)) {
+ regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
}
/* Clear the exception from NSR */
- regs[TM_NSR] = 0;
+ sig_regs[TM_NSR] = 0;
- trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
- alt_regs[TM_IPB], regs[TM_PIPR],
- regs[TM_CPPR], regs[TM_NSR]);
+ trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
+ regs[TM_IPB], sig_regs[TM_PIPR],
+ sig_regs[TM_CPPR], sig_regs[TM_NSR]);
}
- return ((uint64_t)nsr << 8) | regs[TM_CPPR];
+ return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
}
void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
{
- /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
- uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
- uint8_t *alt_regs = &tctx->regs[alt_ring];
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
- if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
+ if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
switch (ring) {
case TM_QW1_OS:
- regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
+ sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
break;
case TM_QW2_HV_POOL:
- alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
+ sig_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
break;
case TM_QW3_HV_PHYS:
- regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
+ sig_regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
break;
default:
g_assert_not_reached();
}
trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
- regs[TM_IPB], alt_regs[TM_PIPR],
- alt_regs[TM_CPPR], alt_regs[TM_NSR]);
+ regs[TM_IPB], sig_regs[TM_PIPR],
+ sig_regs[TM_CPPR], sig_regs[TM_NSR]);
qemu_irq_raise(xive_tctx_output(tctx, ring));
} else {
- alt_regs[TM_NSR] = 0;
+ sig_regs[TM_NSR] = 0;
qemu_irq_lower(xive_tctx_output(tctx, ring));
}
}
@@ -159,25 +167,32 @@ void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring)
static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
{
- uint8_t *regs = &tctx->regs[ring];
+ uint8_t *sig_regs = &tctx->regs[ring];
uint8_t pipr_min;
uint8_t ring_min;
+ g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
+
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+ /* XXX: should show pool IPB for PHYS ring */
trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
- regs[TM_IPB], regs[TM_PIPR],
- cppr, regs[TM_NSR]);
+ sig_regs[TM_IPB], sig_regs[TM_PIPR],
+ cppr, sig_regs[TM_NSR]);
if (cppr > XIVE_PRIORITY_MAX) {
cppr = 0xff;
}
- tctx->regs[ring + TM_CPPR] = cppr;
+ sig_regs[TM_CPPR] = cppr;
/*
* Recompute the PIPR based on local pending interrupts. The PHYS
* ring must take the minimum of both the PHYS and POOL PIPR values.
*/
- pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
+ pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
ring_min = ring;
/* PHYS updates also depend on POOL values */
@@ -186,7 +201,6 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
/* POOL values only matter if POOL ctx is valid */
if (pool_regs[TM_WORD2] & 0x80) {
-
uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
/*
@@ -200,7 +214,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
}
}
- regs[TM_PIPR] = pipr_min;
+ sig_regs[TM_PIPR] = pipr_min;
/* CPPR has changed, check if we need to raise a pending exception */
xive_tctx_notify(tctx, ring_min, 0);
@@ -208,56 +222,50 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level)
- {
- /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
- uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
- uint8_t *alt_regs = &tctx->regs[alt_ring];
+{
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
if (group_level == 0) {
/* VP-specific */
regs[TM_IPB] |= xive_priority_to_ipb(priority);
- alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+ sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
} else {
/* VP-group */
- alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
+ sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
}
xive_tctx_notify(tctx, ring, group_level);
}
static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
{
- /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
- uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
- uint8_t *aregs = &tctx->regs[alt_ring];
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
/* Does not support a presented group interrupt */
- g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
+ g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
- aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+ sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
xive_tctx_notify(tctx, ring, 0);
}
void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level)
{
- /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
- uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
- uint8_t *aregs = &tctx->regs[alt_ring];
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
uint8_t pipr = xive_priority_to_pipr(priority);
if (group_level == 0) {
regs[TM_IPB] |= xive_priority_to_ipb(priority);
- if (pipr >= aregs[TM_PIPR]) {
+ if (pipr >= sig_regs[TM_PIPR]) {
/* VP interrupts can come here with lower priority than PIPR */
return;
}
}
g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
- g_assert(pipr < aregs[TM_PIPR]);
- aregs[TM_PIPR] = pipr;
+ g_assert(pipr < sig_regs[TM_PIPR]);
+ sig_regs[TM_PIPR] = pipr;
xive_tctx_notify(tctx, ring, group_level);
}
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index f91109b84a..b9ee8c9e9f 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -606,11 +606,9 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
{
- uint8_t *regs = &tctx->regs[ring];
- uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
- regs;
- uint8_t nsr = alt_regs[TM_NSR];
- uint8_t pipr = alt_regs[TM_PIPR];
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+ uint8_t nsr = sig_regs[TM_NSR];
+ uint8_t pipr = sig_regs[TM_PIPR];
uint8_t crowd = NVx_CROWD_LVL(nsr);
uint8_t group = NVx_GROUP_LVL(nsr);
uint8_t nvgc_blk, end_blk, nvp_blk;
@@ -618,19 +616,16 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
Xive2Nvgc nvgc;
uint8_t prio_limit;
uint32_t cfg;
- uint8_t alt_ring;
/* redistribution is only for group/crowd interrupts */
if (!xive_nsr_indicates_group_exception(ring, nsr)) {
return;
}
- alt_ring = xive_nsr_exception_ring(ring, nsr);
-
/* Don't check return code since ring is expected to be invalidated */
- xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
+ xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx);
- trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
+ trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
/* convert crowd/group to blk/idx */
@@ -675,23 +670,11 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
/* clear interrupt indication for the context */
- alt_regs[TM_NSR] = 0;
- alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
+ sig_regs[TM_NSR] = 0;
+ sig_regs[TM_PIPR] = sig_regs[TM_CPPR];
xive_tctx_reset_signal(tctx, ring);
}
-static uint8_t xive2_hv_irq_ring(uint8_t nsr)
-{
- switch (nsr >> 6) {
- case TM_QW3_NSR_HE_POOL:
- return TM_QW2_HV_POOL;
- case TM_QW3_NSR_HE_PHYS:
- return TM_QW3_HV_PHYS;
- default:
- return -1;
- }
-}
-
static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size, uint8_t ring)
{
@@ -718,7 +701,8 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
- uint8_t alt_ring;
+ uint8_t *sig_regs;
+
memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
/* Skip the rest for USER or invalid contexts */
@@ -727,12 +711,11 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
/* Active group/crowd interrupts need to be redistributed */
- alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
- nsr = tctx->regs[alt_ring + TM_NSR];
- if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
- /* For HV rings, only redistribute if cur_ring matches NSR */
- if ((cur_ring == TM_QW1_OS) ||
- (cur_ring == xive2_hv_irq_ring(nsr))) {
+ sig_regs = xive_tctx_signal_regs(tctx, cur_ring);
+ nsr = sig_regs[TM_NSR];
+ if (xive_nsr_indicates_group_exception(cur_ring, nsr)) {
+ /* Ensure ring matches NSR (for HV NSR POOL vs PHYS rings) */
+ if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
xive2_redistribute(xrtr, tctx, cur_ring);
}
}
@@ -1118,7 +1101,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
{
- uint8_t *regs = &tctx->regs[ring];
+ uint8_t *sig_regs = &tctx->regs[ring];
Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
uint8_t old_cppr, backlog_prio, first_group, group_level;
uint8_t pipr_min, lsmfb_min, ring_min;
@@ -1127,33 +1110,41 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
uint32_t nvp_idx;
Xive2Nvp nvp;
int rc;
- uint8_t nsr = regs[TM_NSR];
+ uint8_t nsr = sig_regs[TM_NSR];
+
+ g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
+
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+ /* XXX: should show pool IPB for PHYS ring */
trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
- regs[TM_IPB], regs[TM_PIPR],
+ sig_regs[TM_IPB], sig_regs[TM_PIPR],
cppr, nsr);
if (cppr > XIVE_PRIORITY_MAX) {
cppr = 0xff;
}
- old_cppr = regs[TM_CPPR];
- regs[TM_CPPR] = cppr;
+ old_cppr = sig_regs[TM_CPPR];
+ sig_regs[TM_CPPR] = cppr;
/* Handle increased CPPR priority (lower value) */
if (cppr < old_cppr) {
- if (cppr <= regs[TM_PIPR]) {
+ if (cppr <= sig_regs[TM_PIPR]) {
/* CPPR lowered below PIPR, must un-present interrupt */
if (xive_nsr_indicates_exception(ring, nsr)) {
if (xive_nsr_indicates_group_exception(ring, nsr)) {
/* redistribute precluded active grp interrupt */
- xive2_redistribute(xrtr, tctx, ring);
+ xive2_redistribute(xrtr, tctx,
+ xive_nsr_exception_ring(ring, nsr));
return;
}
}
/* interrupt is VP directed, pending in IPB */
- regs[TM_PIPR] = cppr;
+ sig_regs[TM_PIPR] = cppr;
xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
return;
} else {
@@ -1174,9 +1165,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
* be adjusted below if needed in case of pending group interrupts.
*/
again:
- pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
- group_enabled = !!regs[TM_LGS];
- lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
+ pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
+ group_enabled = !!sig_regs[TM_LGS];
+ lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
ring_min = ring;
group_level = 0;
@@ -1265,7 +1256,7 @@ again:
}
/* PIPR should not be set to a value greater than CPPR */
- regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
+ sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
/* CPPR has changed, check if we need to raise a pending exception */
xive_tctx_notify(tctx, ring_min, group_level);
@@ -1490,9 +1481,7 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
{
- /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
- uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
- uint8_t *alt_regs = &tctx->regs[alt_ring];
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
/*
* The xive2_presenter_tctx_match() above tells if there's a match
@@ -1500,7 +1489,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
* priority to know if the thread can take the interrupt now or if
* it is precluded.
*/
- if (priority < alt_regs[TM_PIPR]) {
+ if (priority < sig_regs[TM_PIPR]) {
return false;
}
return true;
@@ -1640,14 +1629,13 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
&match)) {
XiveTCTX *tctx = match.tctx;
uint8_t ring = match.ring;
- uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
- uint8_t *aregs = &tctx->regs[alt_ring];
- uint8_t nsr = aregs[TM_NSR];
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+ uint8_t nsr = sig_regs[TM_NSR];
uint8_t group_level;
- if (priority < aregs[TM_PIPR] &&
- xive_nsr_indicates_group_exception(alt_ring, nsr)) {
- xive2_redistribute(xrtr, tctx, alt_ring);
+ if (priority < sig_regs[TM_PIPR] &&
+ xive_nsr_indicates_group_exception(ring, nsr)) {
+ xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
}
group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 0d6b11e818..a3c2f50ece 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -539,7 +539,7 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
}
/*
- * XIVE Thread Interrupt Management Aera (TIMA)
+ * XIVE Thread Interrupt Management Area (TIMA)
*
* This region gives access to the registers of the thread interrupt
* management context. It is four page wide, each page providing a
@@ -551,6 +551,30 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
#define XIVE_TM_OS_PAGE 0x2
#define XIVE_TM_USER_PAGE 0x3
+/*
+ * The TCTX (TIMA) has 4 rings (phys, pool, os, user), but only signals
+ * (raises an interrupt on) the CPU from 3 of them. Phys and pool both
+ * cause a hypervisor privileged interrupt so interrupts presented on
+ * those rings signal using the phys ring. This helper returns the signal
+ * regs from the given ring.
+ */
+static inline uint8_t *xive_tctx_signal_regs(XiveTCTX *tctx, uint8_t ring)
+{
+ /*
+ * This is a good point to add invariants to ensure nothing has tried to
+ * signal using the POOL ring.
+ */
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+ if (ring == TM_QW2_HV_POOL) {
+ /* POOL and PHYS rings share the signal regs (PIPR, NSR, CPPR) */
+ ring = TM_QW3_HV_PHYS;
+ }
+ return &tctx->regs[ring];
+}
+
void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
uint64_t value, unsigned size);
uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
--
2.47.1
* [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (32 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 33/50] ppc/xive: tctx signaling registers rework Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:16 ` Mike Kowal
2025-05-15 16:04 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Nicholas Piggin
` (17 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
The relationship between an interrupt signaled in the TIMA and the QEMU
irq line to the processor should be 1:1, so they should be raised and
lowered together, and "just in case" lowering should be avoided because
it could mask bugs where the NSR and the irq line get out of sync.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 4e0c71d684..d5dbeab6bd 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -95,8 +95,6 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
- qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
-
if (xive_nsr_indicates_exception(sig_ring, nsr)) {
uint8_t cppr = sig_regs[TM_PIPR];
uint8_t ring;
@@ -117,6 +115,7 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
/* Clear the exception from NSR */
sig_regs[TM_NSR] = 0;
+ qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
regs[TM_IPB], sig_regs[TM_PIPR],
--
2.47.1
* [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (33 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:18 ` Mike Kowal
2025-05-15 16:05 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR Nicholas Piggin
` (16 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Have xive_tctx_notify() also set the new PIPR value and rename it to
xive_tctx_pipr_set(). This can replace the last xive_tctx_pipr_update()
caller because it does not need to update IPB (it already sets it).
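The raise-or-clear decision the renamed function makes after storing the
new PIPR can be sketched in isolation for the OS ring (the NSR_EO
constant follows the TIMA definitions; a simplified illustration, not the
QEMU function itself):

```c
#include <stdint.h>

#define TM_QW1_NSR_EO  0x80  /* OS ring: exception outstanding */

/*
 * After the new PIPR is stored, NSR is raised only if PIPR beats CPPR
 * (lower value = higher priority); otherwise any pending signal clears.
 * Returns the resulting OS-ring NSR byte.
 */
static uint8_t os_nsr_after_pipr_set(uint8_t pipr, uint8_t cppr,
                                     uint8_t group_level)
{
    if (pipr < cppr) {
        return TM_QW1_NSR_EO | (group_level & 0x3F);
    }
    return 0;
}
```

In the patch, the same comparison also drives raising or lowering the
qemu_irq line to the CPU thread.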
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 39 +++++++++++----------------------------
hw/intc/xive2.c | 16 +++++++---------
include/hw/ppc/xive.h | 5 ++---
3 files changed, 20 insertions(+), 40 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index d5dbeab6bd..4659821d4a 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -125,12 +125,16 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
}
-void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
+/* Change PIPR and calculate NSR and irq based on PIPR, CPPR, group */
+void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
+ uint8_t group_level)
{
uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
- if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
+ sig_regs[TM_PIPR] = pipr;
+
+ if (pipr < sig_regs[TM_CPPR]) {
switch (ring) {
case TM_QW1_OS:
sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
@@ -145,7 +149,7 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
g_assert_not_reached();
}
trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
- regs[TM_IPB], sig_regs[TM_PIPR],
+ regs[TM_IPB], pipr,
sig_regs[TM_CPPR], sig_regs[TM_NSR]);
qemu_irq_raise(xive_tctx_output(tctx, ring));
} else {
@@ -213,29 +217,10 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
}
}
- sig_regs[TM_PIPR] = pipr_min;
-
- /* CPPR has changed, check if we need to raise a pending exception */
- xive_tctx_notify(tctx, ring_min, 0);
+ /* CPPR has changed, this may present or preclude a pending exception */
+ xive_tctx_pipr_set(tctx, ring_min, pipr_min, 0);
}
-void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
- uint8_t group_level)
-{
- uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
- uint8_t *regs = &tctx->regs[ring];
-
- if (group_level == 0) {
- /* VP-specific */
- regs[TM_IPB] |= xive_priority_to_ipb(priority);
- sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
- } else {
- /* VP-group */
- sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
- }
- xive_tctx_notify(tctx, ring, group_level);
- }
-
static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
{
uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
@@ -244,8 +229,7 @@ static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
/* Does not support a presented group interrupt */
g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
- sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
- xive_tctx_notify(tctx, ring, 0);
+ xive_tctx_pipr_set(tctx, ring, xive_ipb_to_pipr(regs[TM_IPB]), 0);
}
void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
@@ -264,8 +248,7 @@ void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
}
g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
g_assert(pipr < sig_regs[TM_PIPR]);
- sig_regs[TM_PIPR] = pipr;
- xive_tctx_notify(tctx, ring, group_level);
+ xive_tctx_pipr_set(tctx, ring, pipr, group_level);
}
/*
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index b9ee8c9e9f..8c8dab3aa2 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -966,10 +966,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
}
/*
- * Compute the PIPR based on the restored state.
+ * Set the PIPR/NSR based on the restored state.
* It will raise the External interrupt signal if needed.
*/
- xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
+ xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
}
/*
@@ -1144,8 +1144,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
}
/* interrupt is VP directed, pending in IPB */
- sig_regs[TM_PIPR] = cppr;
- xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
+ xive_tctx_pipr_set(tctx, ring, cppr, 0);
return;
} else {
/* CPPR was lowered, but still above PIPR. No action needed. */
@@ -1255,11 +1254,10 @@ again:
pipr_min = backlog_prio;
}
- /* PIPR should not be set to a value greater than CPPR */
- sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
-
- /* CPPR has changed, check if we need to raise a pending exception */
- xive_tctx_notify(tctx, ring_min, group_level);
+ if (pipr_min > cppr) {
+ pipr_min = cppr;
+ }
+ xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
}
void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index a3c2f50ece..2372d1014b 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -584,12 +584,11 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
void xive_tctx_reset(XiveTCTX *tctx);
void xive_tctx_destroy(XiveTCTX *tctx);
-void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
- uint8_t group_level);
+void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+ uint8_t group_level);
void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
uint8_t group_level);
void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
-void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
/*
--
2.47.1
* [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (34 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:24 ` Mike Kowal
2025-05-15 16:06 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push Nicholas Piggin
` (15 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
The second part of the set CPPR operation is to process (or re-present)
any pending interrupts after CPPR is adjusted.
Split this presentation processing out into a standalone function that
can be used in other places.
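The resulting division of labour in the CPPR write path can be modelled
as a small decision function (a hypothetical simplification of the flow
described above, not the actual xive2_tctx_set_cppr()):

```c
#include <stdint.h>
#include <stdbool.h>

enum cppr_action {
    ACT_NONE,            /* nothing to do */
    ACT_REDISTRIBUTE,    /* precluded group interrupt must be redistributed */
    ACT_SET_PIPR,        /* VP-directed interrupt un-presented, PIPR = CPPR */
    ACT_PROCESS_PENDING, /* delegate to the new standalone helper */
};

/* Which follow-up a CPPR write triggers; 'group' = group exception in NSR */
static enum cppr_action cppr_update_action(uint8_t old_cppr, uint8_t cppr,
                                           uint8_t pipr, bool group)
{
    if (cppr < old_cppr) {              /* CPPR priority increased */
        if (cppr <= pipr) {
            return group ? ACT_REDISTRIBUTE : ACT_SET_PIPR;
        }
        return ACT_NONE;                /* still above PIPR */
    }
    if (cppr == old_cppr) {
        return ACT_NONE;
    }
    return ACT_PROCESS_PENDING;         /* priority decreased: re-present */
}
```

Only the last branch reaches the split-out xive2_tctx_process_pending(),
which is what lets later patches reuse it outside the CPPR path.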
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 137 +++++++++++++++++++++++++++---------------------
1 file changed, 76 insertions(+), 61 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 8c8dab3aa2..aa06bfda77 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1098,66 +1098,19 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
}
-/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
-static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
+/* Re-calculate and present pending interrupts */
+static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
{
- uint8_t *sig_regs = &tctx->regs[ring];
+ uint8_t *sig_regs = &tctx->regs[sig_ring];
Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
- uint8_t old_cppr, backlog_prio, first_group, group_level;
+ uint8_t backlog_prio, first_group, group_level;
uint8_t pipr_min, lsmfb_min, ring_min;
+ uint8_t cppr = sig_regs[TM_CPPR];
bool group_enabled;
- uint8_t nvp_blk;
- uint32_t nvp_idx;
Xive2Nvp nvp;
int rc;
- uint8_t nsr = sig_regs[TM_NSR];
-
- g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
-
- g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
- g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
- g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
-
- /* XXX: should show pool IPB for PHYS ring */
- trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
- sig_regs[TM_IPB], sig_regs[TM_PIPR],
- cppr, nsr);
-
- if (cppr > XIVE_PRIORITY_MAX) {
- cppr = 0xff;
- }
-
- old_cppr = sig_regs[TM_CPPR];
- sig_regs[TM_CPPR] = cppr;
-
- /* Handle increased CPPR priority (lower value) */
- if (cppr < old_cppr) {
- if (cppr <= sig_regs[TM_PIPR]) {
- /* CPPR lowered below PIPR, must un-present interrupt */
- if (xive_nsr_indicates_exception(ring, nsr)) {
- if (xive_nsr_indicates_group_exception(ring, nsr)) {
- /* redistribute precluded active grp interrupt */
- xive2_redistribute(xrtr, tctx,
- xive_nsr_exception_ring(ring, nsr));
- return;
- }
- }
- /* interrupt is VP directed, pending in IPB */
- xive_tctx_pipr_set(tctx, ring, cppr, 0);
- return;
- } else {
- /* CPPR was lowered, but still above PIPR. No action needed. */
- return;
- }
- }
-
- /* CPPR didn't change, nothing needs to be done */
- if (cppr == old_cppr) {
- return;
- }
-
- /* CPPR priority decreased (higher value) */
+ g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
/*
* Recompute the PIPR based on local pending interrupts. It will
@@ -1167,11 +1120,11 @@ again:
pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
group_enabled = !!sig_regs[TM_LGS];
lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
- ring_min = ring;
+ ring_min = sig_ring;
group_level = 0;
/* PHYS updates also depend on POOL values */
- if (ring == TM_QW3_HV_PHYS) {
+ if (sig_ring == TM_QW3_HV_PHYS) {
uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
/* POOL values only matter if POOL ctx is valid */
@@ -1201,20 +1154,25 @@ again:
}
}
- rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
- if (rc) {
- qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
- return;
- }
-
if (group_enabled &&
lsmfb_min < cppr &&
lsmfb_min < pipr_min) {
+
+ uint8_t nvp_blk;
+ uint32_t nvp_idx;
+
/*
* Thread has seen a group interrupt with a higher priority
* than the new cppr or pending local interrupt. Check the
* backlog
*/
+ rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
+ if (rc) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid "
+ "context\n");
+ return;
+ }
+
if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
nvp_blk, nvp_idx);
@@ -1260,6 +1218,63 @@ again:
xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
}
+/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
+static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
+{
+ uint8_t *sig_regs = &tctx->regs[sig_ring];
+ Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
+ uint8_t old_cppr;
+ uint8_t nsr = sig_regs[TM_NSR];
+
+ g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
+
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+ g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+ /* XXX: should show pool IPB for PHYS ring */
+ trace_xive_tctx_set_cppr(tctx->cs->cpu_index, sig_ring,
+ sig_regs[TM_IPB], sig_regs[TM_PIPR],
+ cppr, nsr);
+
+ if (cppr > XIVE_PRIORITY_MAX) {
+ cppr = 0xff;
+ }
+
+ old_cppr = sig_regs[TM_CPPR];
+ sig_regs[TM_CPPR] = cppr;
+
+ /* Handle increased CPPR priority (lower value) */
+ if (cppr < old_cppr) {
+ if (cppr <= sig_regs[TM_PIPR]) {
+ /* CPPR lowered below PIPR, must un-present interrupt */
+ if (xive_nsr_indicates_exception(sig_ring, nsr)) {
+ if (xive_nsr_indicates_group_exception(sig_ring, nsr)) {
+ /* redistribute precluded active grp interrupt */
+ xive2_redistribute(xrtr, tctx,
+ xive_nsr_exception_ring(sig_ring, nsr));
+ return;
+ }
+ }
+
+ /* interrupt is VP directed, pending in IPB */
+ xive_tctx_pipr_set(tctx, sig_ring, cppr, 0);
+ return;
+ } else {
+ /* CPPR was lowered, but still above PIPR. No action needed. */
+ return;
+ }
+ }
+
+ /* CPPR didn't change, nothing needs to be done */
+ if (cppr == old_cppr) {
+ return;
+ }
+
+ /* CPPR priority decreased (higher value) */
+ xive2_tctx_process_pending(tctx, sig_ring);
+}
+
void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size)
{
--
2.47.1
^ permalink raw reply related [flat|nested] 192+ messages in thread
* [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (35 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:25 ` Mike Kowal
2025-05-15 16:06 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Nicholas Piggin
` (14 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
The OS-push operation must re-present pending interrupts. Use the
newly created xive2_tctx_process_pending() function instead of
duplicating the logic.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 42 ++++++++++--------------------------------
1 file changed, 10 insertions(+), 32 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index aa06bfda77..0fdf6a4f20 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -903,18 +903,14 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
return cppr;
}
+static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
+
static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
- XivePresenter *xptr = XIVE_PRESENTER(xrtr);
- uint8_t ipb;
- uint8_t backlog_level;
- uint8_t group_level;
- uint8_t first_group;
- uint8_t backlog_prio;
- uint8_t group_prio;
uint8_t *regs = &tctx->regs[TM_QW1_OS];
+ uint8_t ipb;
Xive2Nvp nvp;
/*
@@ -946,30 +942,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
}
/* IPB bits in the backlog are merged with the TIMA IPB bits */
regs[TM_IPB] |= ipb;
- backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
- backlog_level = 0;
-
- first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
- if (first_group && regs[TM_LSMFB] < backlog_prio) {
- group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
- first_group, &group_level);
- regs[TM_LSMFB] = group_prio;
- if (regs[TM_LGS] && group_prio < backlog_prio &&
- group_prio < regs[TM_CPPR]) {
-
- /* VP can take a group interrupt */
- xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
- group_prio, group_level);
- backlog_prio = group_prio;
- backlog_level = group_level;
- }
- }
- /*
- * Set the PIPR/NSR based on the restored state.
- * It will raise the External interrupt signal if needed.
- */
- xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
+ xive2_tctx_process_pending(tctx, TM_QW1_OS);
}
/*
@@ -1103,8 +1077,12 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
{
uint8_t *sig_regs = &tctx->regs[sig_ring];
Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
- uint8_t backlog_prio, first_group, group_level;
- uint8_t pipr_min, lsmfb_min, ring_min;
+ uint8_t backlog_prio;
+ uint8_t first_group;
+ uint8_t group_level;
+ uint8_t pipr_min;
+ uint8_t lsmfb_min;
+ uint8_t ring_min;
uint8_t cppr = sig_regs[TM_CPPR];
bool group_enabled;
Xive2Nvp nvp;
--
2.47.1
* [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (36 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:26 ` Mike Kowal
2025-05-15 16:07 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed Nicholas Piggin
` (13 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
When the CPPR priority is decreased, pending interrupts do not need to
be re-checked if one is already presented, because by definition that
will be the highest priority one.
This prevents a presented group interrupt from being lost.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 0fdf6a4f20..ace5871706 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1250,7 +1250,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
}
/* CPPR priority decreased (higher value) */
- xive2_tctx_process_pending(tctx, sig_ring);
+ if (!xive_nsr_indicates_exception(sig_ring, nsr)) {
+ xive2_tctx_process_pending(tctx, sig_ring);
+ }
}
void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
--
2.47.1
* [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (37 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:28 ` Mike Kowal
2025-05-15 16:08 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring Nicholas Piggin
` (12 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Add assertions to help ensure presented group interrupts do not get
lost without being redistributed when they become precluded by CPPR
or preempted by a higher priority interrupt.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 2 ++
hw/intc/xive2.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 4659821d4a..81af59f0ec 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -132,6 +132,8 @@ void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
+ g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
+
sig_regs[TM_PIPR] = pipr;
if (pipr < sig_regs[TM_CPPR]) {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index ace5871706..e3060810d3 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1089,6 +1089,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
int rc;
g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
+ g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
/*
* Recompute the PIPR based on local pending interrupts. It will
--
2.47.1
* [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (38 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:36 ` Mike Kowal
2025-05-15 16:09 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Nicholas Piggin
` (11 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
In preparation for implementing POOL context push, add support for
POOL NVP context save/restore.
The NVP p bit is defined in the spec as follows:
If TRUE, the CPPR of a Pool VP in the NVP is updated during store of
the context with the CPPR of the Hard context it was running under.
It's not clear whether non-pool VPs always or never get CPPR updated.
Before this patch, OS contexts always save CPPR, so we will assume that
is the behaviour.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 51 +++++++++++++++++++++++++------------
include/hw/ppc/xive2_regs.h | 1 +
2 files changed, 36 insertions(+), 16 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index e3060810d3..d899c1fb14 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -512,12 +512,13 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
*/
static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
- uint8_t nvp_blk, uint32_t nvp_idx,
- uint8_t ring)
+ uint8_t ring,
+ uint8_t nvp_blk, uint32_t nvp_idx)
{
CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
uint32_t pir = env->spr_cb[SPR_PIR].default_value;
Xive2Nvp nvp;
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
@@ -553,7 +554,14 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
}
nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
- nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
+
+ if ((nvp.w0 & NVP2_W0_P) || ring != TM_QW2_HV_POOL) {
+ /*
+ * Non-pool contexts always save CPPR (ignore p bit). XXX: Clarify
+ * whether that is the correct behaviour.
+ */
+ nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, sig_regs[TM_CPPR]);
+ }
if (nvp.w0 & NVP2_W0_L) {
/*
* Typically not used. If LSMFB is restored with 0, it will
@@ -722,7 +730,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
- xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
+ xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
}
/*
@@ -863,12 +871,15 @@ void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
xive2_tm_pull_ctx_ol(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
}
-static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
- uint8_t nvp_blk, uint32_t nvp_idx,
- Xive2Nvp *nvp)
+static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
+ uint8_t ring,
+ uint8_t nvp_blk, uint32_t nvp_idx,
+ Xive2Nvp *nvp)
{
CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
uint32_t pir = env->spr_cb[SPR_PIR].default_value;
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+ uint8_t *regs = &tctx->regs[ring];
uint8_t cppr;
if (!xive2_nvp_is_hw(nvp)) {
@@ -881,10 +892,10 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
nvp->w2 = xive_set_field32(NVP2_W2_CPPR, nvp->w2, 0);
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
- tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
- tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
- tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
- tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
+ sig_regs[TM_CPPR] = cppr;
+ regs[TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
+ regs[TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
+ regs[TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
@@ -893,9 +904,18 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
/*
* Checkout privilege: 0:OS, 1:Pool, 2:Hard
*
- * TODO: we only support OS push/pull
+ * TODO: we don't support hard push/pull
*/
- nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
+ switch (ring) {
+ case TM_QW1_OS:
+ nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
+ break;
+ case TM_QW2_HV_POOL:
+ nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 1);
+ break;
+ default:
+ g_assert_not_reached();
+ }
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 1);
@@ -930,9 +950,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
}
/* Automatically restore thread context registers */
- if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE &&
- do_restore) {
- xive2_tctx_restore_os_ctx(xrtr, tctx, nvp_blk, nvp_idx, &nvp);
+ if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
+ xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
}
ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index f82054661b..2a3e60abad 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -158,6 +158,7 @@ typedef struct Xive2Nvp {
#define NVP2_W0_L PPC_BIT32(8)
#define NVP2_W0_G PPC_BIT32(9)
#define NVP2_W0_T PPC_BIT32(10)
+#define NVP2_W0_P PPC_BIT32(11)
#define NVP2_W0_ESC_END PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
#define NVP2_W0_PGOFIRST PPC_BITMASK32(26, 31)
uint32_t w1;
--
2.47.1
* [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (39 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:43 ` Mike Kowal
2025-05-15 16:10 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context Nicholas Piggin
` (10 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
When the pool context is pulled, the shared pool/phys signal is
reset, which loses the QEMU irq if a phys interrupt was presented.
Only reset the signal if a pool irq was presented.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index d899c1fb14..aeeb901b6a 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -727,20 +727,22 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
xive2_redistribute(xrtr, tctx, cur_ring);
}
}
+
+ /*
+ * Lower external interrupt line of requested ring and below except for
+ * USER, which doesn't exist.
+ */
+ if (xive_nsr_indicates_exception(cur_ring, nsr)) {
+ if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
+ xive_tctx_reset_signal(tctx, cur_ring);
+ }
+ }
}
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
}
- /*
- * Lower external interrupt line of requested ring and below except for
- * USER, which doesn't exist.
- */
- for (cur_ring = TM_QW1_OS; cur_ring <= ring;
- cur_ring += XIVE_TM_RING_SIZE) {
- xive_tctx_reset_signal(tctx, cur_ring);
- }
return target_ringw2;
}
--
2.47.1
* [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (40 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:46 ` Mike Kowal
2025-05-15 16:11 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 43/50] ppc/xive: Check TIMA operations validity Nicholas Piggin
` (9 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
After pulling the pool context, if a pool irq had been presented and
was cleared in the process, there could be a pending irq in phys that
should be presented. Process the phys irq ring after pulling the pool
ring to catch this case and avoid losing irqs.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 3 +++
hw/intc/xive2.c | 16 ++++++++++++++--
2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 81af59f0ec..aeca66e56e 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -320,6 +320,9 @@ static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
xive_tctx_reset_signal(tctx, TM_QW1_OS);
xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
+ /* Re-check phys for interrupts if pool was disabled */
+ xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW3_HV_PHYS);
+
return qw2w2;
}
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index aeeb901b6a..917ecbaae4 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -683,6 +683,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
xive_tctx_reset_signal(tctx, ring);
}
+static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
+
static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size, uint8_t ring)
{
@@ -739,6 +741,18 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
}
}
+ if (ring == TM_QW2_HV_POOL) {
+ /* Re-check phys for interrupts if pool was disabled */
+ nsr = tctx->regs[TM_QW3_HV_PHYS + TM_NSR];
+ if (xive_nsr_indicates_exception(TM_QW3_HV_PHYS, nsr)) {
+ /* Ring must be PHYS because POOL would have been redistributed */
+ g_assert(xive_nsr_exception_ring(TM_QW3_HV_PHYS, nsr) ==
+ TM_QW3_HV_PHYS);
+ } else {
+ xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
+ }
+ }
+
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
}
@@ -925,8 +939,6 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
return cppr;
}
-static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
-
static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
--
2.47.1
* [PATCH 43/50] ppc/xive: Check TIMA operations validity
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (41 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:47 ` Mike Kowal
2025-05-15 16:12 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op Nicholas Piggin
` (8 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Certain TIMA operations should only be performed when a ring is valid,
others only when the ring is invalid; they are considered undefined if
used incorrectly. Add checks for these conditions.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 196 +++++++++++++++++++++++++-----------------
include/hw/ppc/xive.h | 1 +
2 files changed, 116 insertions(+), 81 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index aeca66e56e..d5bbd8f4c6 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -25,6 +25,19 @@
/*
* XIVE Thread Interrupt Management context
*/
+bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring)
+{
+ uint8_t cur_ring;
+
+ for (cur_ring = ring; cur_ring <= TM_QW3_HV_PHYS;
+ cur_ring += XIVE_TM_RING_SIZE) {
+ if (!(tctx->regs[cur_ring + TM_WORD2] & 0x80)) {
+ return false;
+ }
+ }
+ return true;
+}
+
bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
{
switch (ring) {
@@ -663,6 +676,8 @@ typedef struct XiveTmOp {
uint8_t page_offset;
uint32_t op_offset;
unsigned size;
+ bool hw_ok;
+ bool sw_ok;
void (*write_handler)(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset,
uint64_t value, unsigned size);
@@ -675,34 +690,34 @@ static const XiveTmOp xive_tm_operations[] = {
* MMIOs below 2K : raw values and special operations without side
* effects
*/
- { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive_tm_push_os_ctx,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
- xive_tm_vt_poll },
+ { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, true, true,
+ xive_tm_set_os_cppr, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, true, true,
+ xive_tm_push_os_ctx, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
+ xive_tm_set_hv_cppr, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
+ xive_tm_vt_push, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
+ NULL, xive_tm_vt_poll },
/* MMIOs above 2K : special operations with side effects */
- { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
- xive_tm_ack_os_reg },
- { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
- NULL },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, NULL,
- xive_tm_pull_os_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, NULL,
- xive_tm_pull_os_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
- xive_tm_ack_hv_reg },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
- xive_tm_pull_pool_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
- xive_tm_pull_pool_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
- xive_tm_pull_phys_ctx },
+ { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
+ NULL, xive_tm_ack_os_reg },
+ { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, true, false,
+ xive_tm_set_os_pending, NULL },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
+ NULL, xive_tm_pull_os_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, true, false,
+ NULL, xive_tm_pull_os_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, true, false,
+ NULL, xive_tm_ack_hv_reg },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, true, false,
+ NULL, xive_tm_pull_pool_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, true, false,
+ NULL, xive_tm_pull_pool_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, true, false,
+ NULL, xive_tm_pull_phys_ctx },
};
static const XiveTmOp xive2_tm_operations[] = {
@@ -710,52 +725,48 @@ static const XiveTmOp xive2_tm_operations[] = {
* MMIOs below 2K : raw values and special operations without side
* effects
*/
- { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 8, xive2_tm_push_os_ctx,
- NULL },
- { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
- NULL },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
- xive_tm_vt_poll },
- { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, xive2_tm_set_hv_target,
- NULL },
+ { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, true, true,
+ xive2_tm_set_os_cppr, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, true, true,
+ xive2_tm_push_os_ctx, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 8, true, true,
+ xive2_tm_push_os_ctx, NULL },
+ { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, true, true,
+ xive_tm_set_os_lgs, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
+ xive2_tm_set_hv_cppr, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
+ NULL, xive_tm_vt_poll },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, true, true,
+ xive2_tm_set_hv_target, NULL },
/* MMIOs above 2K : special operations with side effects */
- { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
- xive_tm_ack_os_reg },
- { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
- NULL },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, NULL,
- xive2_tm_pull_os_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, NULL,
- xive2_tm_pull_os_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, NULL,
- xive2_tm_pull_os_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
- xive_tm_ack_hv_reg },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, NULL,
- xive2_tm_pull_pool_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
- xive2_tm_pull_pool_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
- xive2_tm_pull_pool_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, xive2_tm_pull_os_ctx_ol,
- NULL },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, NULL,
- xive2_tm_pull_phys_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
- xive2_tm_pull_phys_ctx },
- { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
- NULL },
- { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
- NULL },
+ { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
+ NULL, xive_tm_ack_os_reg },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, true, false,
+ NULL, xive2_tm_pull_os_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
+ NULL, xive2_tm_pull_os_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, true, false,
+ NULL, xive2_tm_pull_os_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, true, false,
+ NULL, xive_tm_ack_hv_reg },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, true, false,
+ NULL, xive2_tm_pull_pool_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, true, false,
+ NULL, xive2_tm_pull_pool_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, true, false,
+ NULL, xive2_tm_pull_pool_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, true, false,
+ xive2_tm_pull_os_ctx_ol, NULL },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, true, false,
+ NULL, xive2_tm_pull_phys_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, true, false,
+ NULL, xive2_tm_pull_phys_ctx },
+ { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, true, false,
+ xive2_tm_pull_phys_ctx_ol, NULL },
+ { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, true, false,
+ xive2_tm_ack_os_el, NULL },
};
static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
@@ -797,18 +808,28 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
uint64_t value, unsigned size)
{
const XiveTmOp *xto;
+ uint8_t ring = offset & TM_RING_OFFSET;
+ bool is_valid = xive_ring_valid(tctx, ring);
+ bool hw_owned = is_valid;
trace_xive_tctx_tm_write(tctx->cs->cpu_index, offset, size, value);
- /*
- * TODO: check V bit in Q[0-3]W2
- */
-
/*
* First, check for special operations in the 2K region
*/
+ xto = xive_tm_find_op(tctx->xptr, offset, size, true);
+ if (xto) {
+ if (hw_owned && !xto->hw_ok) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
+ }
+ if (!hw_owned && !xto->sw_ok) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to SW TIMA "
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
+ }
+ }
+
if (offset & TM_SPECIAL_OP) {
- xto = xive_tm_find_op(tctx->xptr, offset, size, true);
if (!xto) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
"@%"HWADDR_PRIx" size %d\n", offset, size);
@@ -821,7 +842,6 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
/*
* Then, for special operations in the region below 2K.
*/
- xto = xive_tm_find_op(tctx->xptr, offset, size, true);
if (xto) {
xto->write_handler(xptr, tctx, offset, value, size);
return;
@@ -830,6 +850,11 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
/*
* Finish with raw access to the register values
*/
+ if (hw_owned) {
+ /* Store context operations are dangerous when context is valid */
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
+ }
xive_tm_raw_write(tctx, offset, value, size);
}
@@ -837,17 +862,27 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
unsigned size)
{
const XiveTmOp *xto;
+ uint8_t ring = offset & TM_RING_OFFSET;
+ bool is_valid = xive_ring_valid(tctx, ring);
+ bool hw_owned = is_valid;
uint64_t ret;
- /*
- * TODO: check V bit in Q[0-3]W2
- */
+ xto = xive_tm_find_op(tctx->xptr, offset, size, false);
+ if (xto) {
+ if (hw_owned && !xto->hw_ok) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to HW TIMA "
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
+ }
+ if (!hw_owned && !xto->sw_ok) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to SW TIMA "
+ "@%"HWADDR_PRIx" size %d\n", offset, size);
+ }
+ }
/*
* First, check for special operations in the 2K region
*/
if (offset & TM_SPECIAL_OP) {
- xto = xive_tm_find_op(tctx->xptr, offset, size, false);
if (!xto) {
qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
"@%"HWADDR_PRIx" size %d\n", offset, size);
@@ -860,7 +895,6 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
/*
* Then, for special operations in the region below 2K.
*/
- xto = xive_tm_find_op(tctx->xptr, offset, size, false);
if (xto) {
ret = xto->read_handler(xptr, tctx, offset, size);
goto out;
diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 2372d1014b..b7ca8544e4 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -365,6 +365,7 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
return *((uint32_t *) &ring[TM_WORD2]);
}
+bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring);
bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
--
2.47.1
* [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (42 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 43/50] ppc/xive: Check TIMA operations validity Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:48 ` Mike Kowal
2025-05-15 16:13 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push Nicholas Piggin
` (7 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Implement pool context push TIMA op.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 4 ++++
hw/intc/xive2.c | 50 ++++++++++++++++++++++++++++--------------
include/hw/ppc/xive2.h | 2 ++
3 files changed, 39 insertions(+), 17 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index d5bbd8f4c6..979031a587 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -733,6 +733,10 @@ static const XiveTmOp xive2_tm_operations[] = {
xive2_tm_push_os_ctx, NULL },
{ XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, true, true,
xive_tm_set_os_lgs, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 4, true, true,
+ xive2_tm_push_pool_ctx, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
+ xive2_tm_push_pool_ctx, NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
xive2_tm_set_hv_cppr, NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 917ecbaae4..21cd07df68 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -583,6 +583,7 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 1);
}
+/* POOL cam is the same as OS cam encoding */
static void xive2_cam_decode(uint32_t cam, uint8_t *nvp_blk,
uint32_t *nvp_idx, bool *valid, bool *hw)
{
@@ -940,10 +941,11 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
}
static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
+ uint8_t ring,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
- uint8_t *regs = &tctx->regs[TM_QW1_OS];
+ uint8_t *regs = &tctx->regs[ring];
uint8_t ipb;
Xive2Nvp nvp;
@@ -965,7 +967,7 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
/* Automatically restore thread context registers */
if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
- xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
+ xive2_tctx_restore_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx, &nvp);
}
ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
@@ -976,48 +978,62 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
/* IPB bits in the backlog are merged with the TIMA IPB bits */
regs[TM_IPB] |= ipb;
- xive2_tctx_process_pending(tctx, TM_QW1_OS);
+ xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
+ TM_QW3_HV_PHYS : ring);
}
/*
- * Updating the OS CAM line can trigger a resend of interrupt
+ * Updating the ring CAM line can trigger a resend of interrupt
*/
-void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
- hwaddr offset, uint64_t value, unsigned size)
+static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size,
+ uint8_t ring)
{
uint32_t cam;
- uint32_t qw1w2;
- uint64_t qw1dw1;
+ uint32_t w2;
+ uint64_t dw1;
uint8_t nvp_blk;
uint32_t nvp_idx;
- bool vo;
+ bool v;
bool do_restore;
/* First update the thead context */
switch (size) {
case 4:
cam = value;
- qw1w2 = cpu_to_be32(cam);
- memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1w2, 4);
+ w2 = cpu_to_be32(cam);
+ memcpy(&tctx->regs[ring + TM_WORD2], &w2, 4);
break;
case 8:
cam = value >> 32;
- qw1dw1 = cpu_to_be64(value);
- memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1dw1, 8);
+ dw1 = cpu_to_be64(value);
+ memcpy(&tctx->regs[ring + TM_WORD2], &dw1, 8);
break;
default:
g_assert_not_reached();
}
- xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &vo, &do_restore);
+ xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &v, &do_restore);
/* Check the interrupt pending bits */
- if (vo) {
- xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, nvp_blk, nvp_idx,
- do_restore);
+ if (v) {
+ xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
+ nvp_blk, nvp_idx, do_restore);
}
}
+void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW1_OS);
+}
+
+void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
+}
+
/* returns -1 if ring is invalid, but still populates block and index */
static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
uint8_t *nvp_blk, uint32_t *nvp_idx)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index a91b99057c..c1ab06a55a 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -140,6 +140,8 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size);
uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
--
2.47.1
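The size-4 and size-8 cases in xive2_tm_push_ctx() above differ only in how much of the CAM line is stored into the ring's WORD2, always big-endian, with the CAM word taken from the upper half of a doubleword store. A minimal endian-explicit sketch (the function name and bare-pointer interface are illustrative, not the real TIMA accessors):

```c
#include <assert.h>
#include <stdint.h>

/* Store 'value' big-endian into the ring's WORD2 area, mirroring the
 * size-4/size-8 cases of the push op. 'w2' points at ring + TM_WORD2.
 * Returns the CAM word, which for a size-8 store is the upper 32 bits
 * of the doubleword. */
static uint32_t push_ctx_w2(uint8_t *w2, uint64_t value, unsigned size)
{
    uint32_t cam;
    unsigned i;

    if (size == 4) {
        cam = (uint32_t)value;
        for (i = 0; i < 4; i++) {
            w2[i] = (uint8_t)(cam >> (24 - 8 * i));   /* MSB first */
        }
    } else { /* size == 8 */
        cam = (uint32_t)(value >> 32);
        for (i = 0; i < 8; i++) {
            w2[i] = (uint8_t)(value >> (56 - 8 * i)); /* MSB first */
        }
    }
    return cam;
}
```

Either way the CAM word is then decoded for the valid bit before deciding whether a resend is needed.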
* [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (43 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:44 ` Mike Kowal
2025-05-15 16:13 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op Nicholas Piggin
` (6 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
When pushing a context, any group interrupt that is currently presented
should be redistributed before processing pending interrupts, so that
the highest priority interrupt is presented.
This can occur when pushing the POOL ring while the valid PHYS ring has
a group interrupt presented, because the two rings share signal
registers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 21cd07df68..392ac6077e 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -945,8 +945,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
- uint8_t ipb;
+ uint8_t ipb, nsr = sig_regs[TM_NSR];
Xive2Nvp nvp;
/*
@@ -978,6 +979,11 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
/* IPB bits in the backlog are merged with the TIMA IPB bits */
regs[TM_IPB] |= ipb;
+ if (xive_nsr_indicates_group_exception(ring, nsr)) {
+ /* redistribute precluded active grp interrupt */
+ g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
+ xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
+ }
xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
TM_QW3_HV_PHYS : ring);
}
--
2.47.1
* [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (44 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:49 ` Mike Kowal
2025-05-15 16:14 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 47/50] ppc/xive2: Implement POOL LGS push " Nicholas Piggin
` (5 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
xive2 must take into account redistribution of group interrupts if the
VP-directed priority exceeds the group interrupt priority after this
operation. The xive1 code is not group-aware, so implement this
separately for xive2.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 2 ++
hw/intc/xive2.c | 28 ++++++++++++++++++++++++++++
include/hw/ppc/xive2.h | 2 ++
3 files changed, 32 insertions(+)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 979031a587..dc64edf13d 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
/* MMIOs above 2K : special operations with side effects */
{ XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
NULL, xive_tm_ack_os_reg },
+ { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, true, false,
+ xive2_tm_set_os_pending, NULL },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, true, false,
NULL, xive2_tm_pull_os_ctx },
{ XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 392ac6077e..de1ccad685 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1323,6 +1323,34 @@ void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
}
+/*
+ * Adjust the IPB to allow a CPU to process event queues of other
+ * priorities during one physical interrupt cycle.
+ */
+void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t ring = TM_QW1_OS;
+ uint8_t *regs = &tctx->regs[ring];
+ uint8_t priority = value & 0xff;
+
+ /*
+ * XXX: should this simply set a bit in IPB and wait for it to be picked
+ * up next cycle, or is it supposed to present it now? We implement the
+ * latter here.
+ */
+ regs[TM_IPB] |= xive_priority_to_ipb(priority);
+ if (xive_ipb_to_pipr(regs[TM_IPB]) >= regs[TM_PIPR]) {
+ return;
+ }
+ if (xive_nsr_indicates_group_exception(ring, regs[TM_NSR])) {
+ xive2_redistribute(xrtr, tctx, ring);
+ }
+
+ xive_tctx_pipr_present(tctx, ring, priority, 0);
+}
+
static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
{
uint8_t *regs = &tctx->regs[ring];
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index c1ab06a55a..45266c2a8b 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -130,6 +130,8 @@ void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
uint64_t value, unsigned size);
uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
--
2.47.1
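The IPB manipulation in xive2_tm_set_os_pending() above leans on two small helpers; their behaviour can be sketched as below (a simplified rendering of the helpers' semantics, not QEMU's exact code — priority 0 is the most favoured and maps to IPB bit 7):

```c
#include <assert.h>
#include <stdint.h>

/* Priority p (0..7) sets IPB bit (7 - p), so priority 0 -> 0x80. */
static uint8_t priority_to_ipb(uint8_t priority)
{
    return priority > 7 ? 0 : (uint8_t)(1u << (7 - priority));
}

/* PIPR is the priority of the most favoured pending bit in the IPB,
 * i.e. the most significant bit set; 0xFF means nothing pending. */
static uint8_t ipb_to_pipr(uint8_t ipb)
{
    uint8_t prio;

    if (!ipb) {
        return 0xFF;
    }
    for (prio = 0; !(ipb & (1u << (7 - prio))); prio++) {
        ;
    }
    return prio;
}
```

With these, the early return in the op reads naturally: if the recomputed pending priority is not more favoured (numerically lower) than the current PIPR, there is nothing new to present or redistribute.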
* [PATCH 47/50] ppc/xive2: Implement POOL LGS push TIMA op
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (45 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:50 ` Mike Kowal
2025-05-15 16:15 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 48/50] ppc/xive2: Implement PHYS ring VP " Nicholas Piggin
` (4 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Implement set LGS for the POOL ring.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index dc64edf13d..807a1c1c34 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -532,6 +532,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
xive_tctx_set_lgs(tctx, TM_QW1_OS, value & 0xff);
}
+static void xive_tm_set_pool_lgs(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive_tctx_set_lgs(tctx, TM_QW2_HV_POOL, value & 0xff);
+}
+
/*
* Adjust the PIPR to allow a CPU to process event queues of other
* priorities during one physical interrupt cycle.
@@ -737,6 +743,8 @@ static const XiveTmOp xive2_tm_operations[] = {
xive2_tm_push_pool_ctx, NULL },
{ XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
xive2_tm_push_pool_ctx, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_LGS, 1, true, true,
+ xive_tm_set_pool_lgs, NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
xive2_tm_set_hv_cppr, NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
--
2.47.1
* [PATCH 48/50] ppc/xive2: Implement PHYS ring VP push TIMA op
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (46 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 47/50] ppc/xive2: Implement POOL LGS push " Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:50 ` Mike Kowal
2025-05-15 16:16 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp Nicholas Piggin
` (3 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
Implement the phys (aka hard) VP push. PowerVM uses this operation.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 2 ++
hw/intc/xive2.c | 11 +++++++++++
include/hw/ppc/xive2.h | 2 ++
3 files changed, 15 insertions(+)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 807a1c1c34..69118999e6 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
xive_tm_set_pool_lgs, NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
xive2_tm_set_hv_cppr, NULL },
+ { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
+ xive2_tm_push_phys_ctx, NULL },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
NULL, xive_tm_vt_poll },
{ XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, true, true,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index de1ccad685..a9b188b909 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1005,6 +1005,11 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
/* First update the thead context */
switch (size) {
+ case 1:
+ tctx->regs[ring + TM_WORD2] = value & 0xff;
+ cam = xive2_tctx_hw_cam_line(xptr, tctx);
+ cam |= ((value & 0xc0) << 24); /* V and H bits */
+ break;
case 4:
cam = value;
w2 = cpu_to_be32(cam);
@@ -1040,6 +1045,12 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
}
+void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size)
+{
+ xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
+}
+
/* returns -1 if ring is invalid, but still populates block and index */
static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
uint8_t *nvp_blk, uint32_t *nvp_idx)
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 45266c2a8b..f4437e2c79 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -146,6 +146,8 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, uint64_t value, unsigned size);
uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size);
+void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+ hwaddr offset, uint64_t value, unsigned size);
uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
hwaddr offset, unsigned size);
void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
--
2.47.1
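The size-1 case added above only writes the byte containing the V and H bits; shifting the 0xc0 portion of that byte up by 24 places V at bit 31 and H at bit 30 of the CAM word, while the block/index portion comes from the thread's hardware CAM line. A sketch (the hw_cam argument is a stand-in for xive2_tctx_hw_cam_line(), whose encoding is not reproduced here):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror the size-1 phys push: only V (0x80) and H (0x40) come from the
 * stored byte; the rest of the CAM word is the hardware CAM line, here
 * supplied by the caller. */
static uint32_t phys_push_cam(uint8_t byte, uint32_t hw_cam)
{
    return hw_cam | ((uint32_t)(byte & 0xc0) << 24);
}
```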
* [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (47 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 48/50] ppc/xive2: Implement PHYS ring VP " Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:57 ` Mike Kowal
2025-05-15 16:16 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push Nicholas Piggin
` (2 subsequent siblings)
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
This is needed by the next patch, which will re-send interrupts on all
lower rings when pushing a context.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive.c | 24 ++++++++++++------------
hw/intc/xive2.c | 28 ++++++++++++++++------------
2 files changed, 28 insertions(+), 24 deletions(-)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 69118999e6..9ade9ec6c1 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -606,7 +606,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
return qw1w2;
}
-static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
+static void xive_tctx_restore_nvp(XiveRouter *xrtr, XiveTCTX *tctx,
uint8_t nvt_blk, uint32_t nvt_idx)
{
XiveNVT nvt;
@@ -632,16 +632,6 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
uint8_t *regs = &tctx->regs[TM_QW1_OS];
regs[TM_IPB] |= ipb;
}
-
- /*
- * Always call xive_tctx_recompute_from_ipb(). Even if there were no
- * escalation triggered, there could be a pending interrupt which
- * was saved when the context was pulled and that we need to take
- * into account by recalculating the PIPR (which is not
- * saved/restored).
- * It will also raise the External interrupt signal if needed.
- */
- xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
}
/*
@@ -663,7 +653,17 @@ static void xive_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
/* Check the interrupt pending bits */
if (vo) {
- xive_tctx_need_resend(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
+ xive_tctx_restore_nvp(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
+
+ /*
+ * Always call xive_tctx_recompute_from_ipb(). Even if there were no
+ * escalation triggered, there could be a pending interrupt which
+ * was saved when the context was pulled and that we need to take
+ * into account by recalculating the PIPR (which is not
+ * saved/restored).
+ * It will also raise the External interrupt signal if needed.
+ */
+ xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
}
}
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index a9b188b909..53e90b8178 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -940,14 +940,14 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
return cppr;
}
-static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
+/* Restore TIMA VP context from NVP backlog */
+static void xive2_tctx_restore_nvp(Xive2Router *xrtr, XiveTCTX *tctx,
uint8_t ring,
uint8_t nvp_blk, uint32_t nvp_idx,
bool do_restore)
{
- uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
uint8_t *regs = &tctx->regs[ring];
- uint8_t ipb, nsr = sig_regs[TM_NSR];
+ uint8_t ipb;
Xive2Nvp nvp;
/*
@@ -978,14 +978,6 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
}
/* IPB bits in the backlog are merged with the TIMA IPB bits */
regs[TM_IPB] |= ipb;
-
- if (xive_nsr_indicates_group_exception(ring, nsr)) {
- /* redistribute precluded active grp interrupt */
- g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
- xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
- }
- xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
- TM_QW3_HV_PHYS : ring);
}
/*
@@ -1028,8 +1020,20 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
/* Check the interrupt pending bits */
if (v) {
- xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
+ Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+ uint8_t nsr = sig_regs[TM_NSR];
+
+ xive2_tctx_restore_nvp(xrtr, tctx, ring,
nvp_blk, nvp_idx, do_restore);
+
+ if (xive_nsr_indicates_group_exception(ring, nsr)) {
+ /* redistribute precluded active grp interrupt */
+ g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
+ xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
+ }
+ xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
+ TM_QW3_HV_PHYS : ring);
}
}
--
2.47.1
* [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (48 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp Nicholas Piggin
@ 2025-05-12 3:10 ` Nicholas Piggin
2025-05-15 15:54 ` Mike Kowal
2025-05-15 16:17 ` Miles Glenn
2025-05-15 15:36 ` [PATCH 00/50] ppc/xive: updates for PowerVM Cédric Le Goater
2025-07-03 9:37 ` Gautam Menghani
51 siblings, 2 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-12 3:10 UTC (permalink / raw)
To: qemu-ppc
Cc: Nicholas Piggin, qemu-devel, Frédéric Barrat,
Glenn Miles, Michael Kowal, Caleb Schlossin
When pushing a context, a lower-level context becomes valid if it had
V=1, and so on down the rings. Iterate over the lower-level contexts
and re-send them pending interrupts if they become enabled.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
hw/intc/xive2.c | 36 ++++++++++++++++++++++++++++--------
1 file changed, 28 insertions(+), 8 deletions(-)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 53e90b8178..ded003fa87 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -995,6 +995,12 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
bool v;
bool do_restore;
+ if (xive_ring_valid(tctx, ring)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Attempt to push VP to enabled"
+ " ring 0x%02x\n", ring);
+ return;
+ }
+
/* First update the thead context */
switch (size) {
case 1:
@@ -1021,19 +1027,32 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
/* Check the interrupt pending bits */
if (v) {
Xive2Router *xrtr = XIVE2_ROUTER(xptr);
- uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
- uint8_t nsr = sig_regs[TM_NSR];
+ uint8_t cur_ring;
xive2_tctx_restore_nvp(xrtr, tctx, ring,
nvp_blk, nvp_idx, do_restore);
- if (xive_nsr_indicates_group_exception(ring, nsr)) {
- /* redistribute precluded active grp interrupt */
- g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
- xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
+ for (cur_ring = TM_QW1_OS; cur_ring <= ring;
+ cur_ring += XIVE_TM_RING_SIZE) {
+ uint8_t *sig_regs = xive_tctx_signal_regs(tctx, cur_ring);
+ uint8_t nsr = sig_regs[TM_NSR];
+
+ if (!xive_ring_valid(tctx, cur_ring)) {
+ continue;
+ }
+
+ if (cur_ring == TM_QW2_HV_POOL) {
+ if (xive_nsr_indicates_exception(cur_ring, nsr)) {
+ g_assert(xive_nsr_exception_ring(cur_ring, nsr) ==
+ TM_QW3_HV_PHYS);
+ xive2_redistribute(xrtr, tctx,
+ xive_nsr_exception_ring(ring, nsr));
+ }
+ xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
+ break;
+ }
+ xive2_tctx_process_pending(tctx, cur_ring);
}
- xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
- TM_QW3_HV_PHYS : ring);
}
}
@@ -1159,6 +1178,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
int rc;
g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
+ g_assert(sig_regs[TM_WORD2] & 0x80);
g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
/*
--
2.47.1
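The loop added above walks the rings from OS up to the pushed one, stepping by XIVE_TM_RING_SIZE (0x10), and terminates at the POOL ring because POOL pending processing is signalled on the PHYS ring. A sketch of just the iteration order, using the TIMA ring offsets (OS=0x10, POOL=0x20, PHYS=0x30); ring-validity checks are omitted for brevity — in the real loop an invalid POOL ring is skipped rather than terminating the walk:

```c
#include <assert.h>
#include <stdint.h>

#define TM_QW1_OS         0x10
#define TM_QW2_HV_POOL    0x20
#define TM_QW3_HV_PHYS    0x30
#define XIVE_TM_RING_SIZE 0x10

/* Collect the ring offsets visited when pushing 'ring', in visit order.
 * The POOL ring ends the walk: its pending interrupts are processed on
 * the PHYS signal registers. Returns the number of rings visited. */
static int rings_visited(uint8_t ring, uint8_t *out)
{
    int n = 0;
    uint8_t cur;

    for (cur = TM_QW1_OS; cur <= ring; cur += XIVE_TM_RING_SIZE) {
        out[n++] = cur;
        if (cur == TM_QW2_HV_POOL) {
            break; /* processed via TM_QW3_HV_PHYS in the real code */
        }
    }
    return n;
}
```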
* Re: [PATCH 01/50] ppc/xive: Fix xive trace event output
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
@ 2025-05-14 14:26 ` Caleb Schlossin
2025-05-14 18:41 ` Mike Kowal
2025-05-15 15:30 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:26 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Typo, IBP should be IPB.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/trace-events | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index 0ba9a02e73..f77f9733c9 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -274,9 +274,9 @@ kvm_xive_cpu_connect(uint32_t id) "connect CPU%d to KVM device"
> kvm_xive_source_reset(uint32_t srcno) "IRQ 0x%x"
>
> # xive.c
> -xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
> -xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
> -xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> +xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
> +xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
> +xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
* Re: [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs
2025-05-12 3:10 ` [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs Nicholas Piggin
@ 2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:42 ` Mike Kowal
2025-05-15 15:31 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:27 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Report access size in XIVE TM operation error logs.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 3eb28c2265..80b07a0afe 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -326,7 +326,7 @@ static void xive_tm_raw_write(XiveTCTX *tctx, hwaddr offset, uint64_t value,
> */
> if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA @%"
> - HWADDR_PRIx"\n", offset);
> + HWADDR_PRIx" size %d\n", offset, size);
> return;
> }
>
> @@ -357,7 +357,7 @@ static uint64_t xive_tm_raw_read(XiveTCTX *tctx, hwaddr offset, unsigned size)
> */
> if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access at TIMA @%"
> - HWADDR_PRIx"\n", offset);
> + HWADDR_PRIx" size %d\n", offset, size);
> return -1;
> }
>
> @@ -688,7 +688,7 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
> - "@%"HWADDR_PRIx"\n", offset);
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> } else {
> xto->write_handler(xptr, tctx, offset, value, size);
> }
> @@ -727,7 +727,7 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
> - "@%"HWADDR_PRIx"\n", offset);
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> return -1;
> }
> ret = xto->read_handler(xptr, tctx, offset, size);
* Re: [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes
2025-05-12 3:10 ` [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes Nicholas Piggin
@ 2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:45 ` Mike Kowal
2025-05-16 0:06 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:27 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The queue size of an Event Notification Descriptor (END)
> is determined by the 'cl' and QsZ fields of the END.
> If the cl field is 1, then the queue size (in bytes) will
> be the size of a cache line 128B * 2^QsZ and QsZ is limited
> to 4. Otherwise, it will be 4096B * 2^QsZ with QsZ limited
> to 12.
>
> Fixes: f8a233dedf2 ("ppc/xive2: Introduce a XIVE2 core framework")
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 25 +++++++++++++++++++------
> include/hw/ppc/xive2_regs.h | 1 +
> 2 files changed, 20 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 7d584dfafa..790152a2a6 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -188,12 +188,27 @@ void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
> (uint32_t) xive_get_field64(EAS2_END_DATA, eas->w));
> }
>
> +#define XIVE2_QSIZE_CHUNK_CL 128
> +#define XIVE2_QSIZE_CHUNK_4k 4096
> +/* Calculate max number of queue entries for an END */
> +static uint32_t xive2_end_get_qentries(Xive2End *end)
> +{
> + uint32_t w3 = end->w3;
> + uint32_t qsize = xive_get_field32(END2_W3_QSIZE, w3);
> + if (xive_get_field32(END2_W3_CL, w3)) {
> + g_assert(qsize <= 4);
> + return (XIVE2_QSIZE_CHUNK_CL << qsize) / sizeof(uint32_t);
> + } else {
> + g_assert(qsize <= 12);
> + return (XIVE2_QSIZE_CHUNK_4k << qsize) / sizeof(uint32_t);
> + }
> +}
> +
> void xive2_end_queue_pic_print_info(Xive2End *end, uint32_t width, GString *buf)
> {
> uint64_t qaddr_base = xive2_end_qaddr(end);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
> int i;
>
> /*
> @@ -223,8 +238,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
> uint64_t qaddr_base = xive2_end_qaddr(end);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
>
> uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
> uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
> @@ -341,13 +355,12 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, GString *buf)
> static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> {
> uint64_t qaddr_base = xive2_end_qaddr(end);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
>
> uint64_t qaddr = qaddr_base + (qindex << 2);
> uint32_t qdata = cpu_to_be32((qgen << 31) | (data & 0x7fffffff));
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
>
> if (dma_memory_write(&address_space_memory, qaddr, &qdata, sizeof(qdata),
> MEMTXATTRS_UNSPECIFIED)) {
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index b11395c563..3c28de8a30 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -87,6 +87,7 @@ typedef struct Xive2End {
> #define END2_W2_EQ_ADDR_HI PPC_BITMASK32(8, 31)
> uint32_t w3;
> #define END2_W3_EQ_ADDR_LO PPC_BITMASK32(0, 24)
> +#define END2_W3_CL PPC_BIT32(27)
> #define END2_W3_QSIZE PPC_BITMASK32(28, 31)
> uint32_t w4;
> #define END2_W4_END_BLOCK PPC_BITMASK32(4, 7)
^ permalink raw reply [flat|nested] 192+ messages in thread
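As a side note for readers of the hunk above: the new xive2_end_get_qentries() helper replaces the old flat `1 << (qsize + 10)` entry count with a chunk-based one that honours the END2_W3_CL (cache-line) bit. Here is a minimal standalone sketch of the mapping, assuming 128-byte cache-line chunks and 4 KB page chunks for XIVE2_QSIZE_CHUNK_CL and XIVE2_QSIZE_CHUNK_4k (an illustration, not part of the patch):

```python
# Illustration only: mirrors xive2_end_get_qentries() from the patch.
# Chunk sizes are assumptions: 128B per cache line, 4KB per page.
QSIZE_CHUNK_CL = 128
QSIZE_CHUNK_4K = 4096
ENTRY_BYTES = 4  # END queue entries are 32-bit words

def end_get_qentries(qsize: int, cache_line_mode: bool) -> int:
    """Number of 32-bit queue entries for a given END2_W3_QSIZE value."""
    if cache_line_mode:               # END2_W3_CL set
        assert qsize <= 4
        return (QSIZE_CHUNK_CL << qsize) // ENTRY_BYTES
    assert qsize <= 12
    return (QSIZE_CHUNK_4K << qsize) // ENTRY_BYTES

# With CL clear, qsize=0 gives 1024 entries, matching the old
# "1 << (qsize + 10)" formula the patch replaces.
print(end_get_qentries(0, False))  # 1024
print(end_get_qentries(0, True))   # 32
```

Under these assumed chunk sizes the non-cache-line path reproduces the old formula exactly, so the change only affects ENDs with END2_W3_CL set.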
* Re: [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
@ 2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:46 ` Mike Kowal
` (2 subsequent siblings)
3 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:27 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> In a multi-chip environment there will be remote/forwarded VSDs. The check
> to find the matching remote INT controller (XIVE) was comparing the remote
> block number against each INT controller's chip number, but block numbers
> are not tied to chip numbers. The matching remote INT controller is the
> one whose MMIO BAR for the VSD type matches the forwarded VSD address.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d1713b406c..30b4ab2efe 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -102,12 +102,10 @@ static uint32_t pnv_xive2_block_id(PnvXive2 *xive)
> }
>
> /*
> - * Remote access to controllers. HW uses MMIOs. For now, a simple scan
> - * of the chips is good enough.
> - *
> - * TODO: Block scope support
> + * Remote access to INT controllers. HW uses MMIOs(?). For now, a simple
> + * scan of all the chips' INT controllers is good enough.
> */
> -static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> +static PnvXive2 *pnv_xive2_get_remote(uint32_t vsd_type, hwaddr fwd_addr)
> {
> PnvMachineState *pnv = PNV_MACHINE(qdev_get_machine());
> int i;
> @@ -116,10 +114,22 @@ static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> PnvXive2 *xive = &chip10->xive;
>
> - if (pnv_xive2_block_id(xive) == blk) {
> + /*
> + * Is this the XIVE whose MMIO BAR for this VSD type matches
> + * the forwarded VSD address?
> + */
> + if ((vsd_type == VST_ESB && fwd_addr == xive->esb_base) ||
> + (vsd_type == VST_END && fwd_addr == xive->end_base) ||
> + ((vsd_type == VST_NVP ||
> + vsd_type == VST_NVG) && fwd_addr == xive->nvpg_base) ||
> + (vsd_type == VST_NVC && fwd_addr == xive->nvc_base)) {
> return xive;
> }
> }
> +
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "XIVE: >>>>> pnv_xive2_get_remote() vsd_type %u fwd_addr 0x%lX NOT FOUND\n",
> + vsd_type, fwd_addr);
> return NULL;
> }
>
> @@ -252,8 +262,7 @@ static uint64_t pnv_xive2_vst_addr(PnvXive2 *xive, uint32_t type, uint8_t blk,
>
> /* Remote VST access */
> if (GETFIELD(VSD_MODE, vsd) == VSD_MODE_FORWARD) {
> - xive = pnv_xive2_get_remote(blk);
> -
> + xive = pnv_xive2_get_remote(type, (vsd & VSD_ADDRESS_MASK));
> return xive ? pnv_xive2_vst_addr(xive, type, blk, idx) : 0;
> }
>
* Re: [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority
2025-05-12 3:10 ` [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority Nicholas Piggin
@ 2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:48 ` Mike Kowal
2025-05-15 15:36 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:30 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Pushing a context and loading IPB from NVP is defined to merge ('or')
> that IPB into the TIMA IPB register. PIPR should therefore be calculated
> based on the final IPB value, not just the NVP value.
>
> Fixes: 9d2b6058c5b ("ppc/xive2: Add grouping level to notification")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 790152a2a6..4dd04a0398 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -835,8 +835,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> }
> + /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> - backlog_prio = xive_ipb_to_pipr(ipb);
> + backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
> backlog_level = 0;
>
> first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
* Re: [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching
2025-05-12 3:10 ` [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching Nicholas Piggin
@ 2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:49 ` Mike Kowal
2025-05-15 15:39 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:30 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Test that the NSR exception bit field is equal to the pool ring value,
> rather than any common bits set, which is more correct (although there
> is no practical bug because the LSI NSR type is not implemented and
> POOL/PHYS NSR are encoded with exclusive bits).
>
> Fixes: 4c3ccac636 ("pnv/xive: Add special handling for pool targets")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 80b07a0afe..cebe409a1a 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -54,7 +54,8 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> uint8_t *alt_regs;
>
> /* POOL interrupt uses IPB in QW2, POOL ring */
> - if ((ring == TM_QW3_HV_PHYS) && (nsr & (TM_QW3_NSR_HE_POOL << 6))) {
> + if ((ring == TM_QW3_HV_PHYS) &&
> + ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
> alt_ring = TM_QW2_HV_POOL;
> } else {
> alt_ring = ring;
* Re: [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
@ 2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:50 ` Mike Kowal
` (2 subsequent siblings)
3 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:30 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> When the END Event Queue wraps the END EQ Generation bit is flipped and the
> Generation Flipped bit is set to one. On an END cache watch read operation,
> the Generation Flipped bit needs to be reset.
>
> While debugging an error, the "END not valid" error messages were also
> modified to include the method name, since they were all the same.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 3 ++-
> hw/intc/xive2.c | 4 ++--
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 30b4ab2efe..72cdf0f20c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1325,10 +1325,11 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH3_DATA0:
> /*
> * Load DATA registers from cache with data requested by the
> - * SPEC register
> + * SPEC register. Clear gen_flipped bit in word 1.
> */
> watch_engine = (offset - VC_ENDC_WATCH0_DATA0) >> 6;
> pnv_xive2_end_cache_load(xive, watch_engine);
> + xive->vc_regs[reg] &= ~(uint64_t)END2_W1_GEN_FLIPPED;
> val = xive->vc_regs[reg];
> break;
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 4dd04a0398..453fe37f18 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -374,8 +374,8 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> qgen ^= 1;
> end->w1 = xive_set_field32(END2_W1_GENERATION, end->w1, qgen);
>
> - /* TODO(PowerNV): reset GF bit on a cache watch operation */
> - end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, qgen);
> + /* Set gen flipped to 1, it gets reset on a cache watch operation */
> + end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, 1);
> }
> end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
> }
* Re: [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
@ 2025-05-14 14:31 ` Caleb Schlossin
2025-05-14 18:51 ` Mike Kowal
` (2 subsequent siblings)
3 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:31 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The current xive algorithm for finding a matching group vCPU
> target always uses the first vCPU found. And, since it always
> starts the search with thread 0 of a core, thread 0 is almost
> always used to handle group interrupts. This can lead to additional
> interrupt latency and poor performance for interrupt-intensive
> workloads.
>
> Change this to use a simple round-robin algorithm for deciding which
> thread number to use when starting a search. This leads to a more
> distributed use of threads for handling group interrupts.
>
> [npiggin: Also round-robin among threads, not just cores]
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 72cdf0f20c..d7ca97ecbb 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> int i, j;
> bool gen1_tima_os =
> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> + static int next_start_core;
> + static int next_start_thread;
> + int start_core = next_start_core;
> + int start_thread = next_start_thread;
>
> for (i = 0; i < chip->nr_cores; i++) {
> - PnvCore *pc = chip->cores[i];
> + PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
> CPUCore *cc = CPU_CORE(pc);
>
> for (j = 0; j < cc->nr_threads; j++) {
> - PowerPCCPU *cpu = pc->threads[j];
> + /* Start search for match with different thread each call */
> + PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
> XiveTCTX *tctx;
> int ring;
>
> @@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> if (!match->tctx) {
> match->ring = ring;
> match->tctx = tctx;
> +
> + next_start_thread = j + start_thread + 1;
> + if (next_start_thread >= cc->nr_threads) {
> + next_start_thread = 0;
> + next_start_core = i + start_core + 1;
> + if (next_start_core >= chip->nr_cores) {
> + next_start_core = 0;
> + }
> + }
> }
> count++;
> }
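To make the round-robin bookkeeping in the hunk above easier to follow, the start-point update performed after a successful match can be sketched as below (a paraphrase using the same variable roles as the patch, not the patch code itself):

```python
# Illustration only: after matching at loop indices (i, j), compute
# where the next call to the matcher should start its search. The
# thread index advances first; when it wraps, the core advances too.
def next_start(i, j, start_core, start_thread, nr_cores, nr_threads):
    """Return (next_start_core, next_start_thread) after a match."""
    nt = j + start_thread + 1        # thread after the one that matched
    nc = start_core                  # core offset unchanged unless we wrap
    if nt >= nr_threads:
        nt = 0
        nc = i + start_core + 1      # move on to the next core
        if nc >= nr_cores:
            nc = 0
    return nc, nt

# 2 cores x 4 threads: matching core 0, thread 3 moves the next
# search start to core 1, thread 0.
print(next_start(0, 3, 0, 0, 2, 4))  # (1, 0)
```

Each call thus begins scanning one thread past the previous winner, which is what spreads group interrupts across the chip instead of always landing on thread 0.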
* Re: [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq
2025-05-12 3:10 ` [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Nicholas Piggin
@ 2025-05-14 14:31 ` Caleb Schlossin
2025-05-14 18:52 ` Mike Kowal
2025-05-16 0:12 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:31 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> A problem was seen where uart interrupts would be lost resulting in the
> console hanging. Traces showed that a lower priority interrupt was
> preempting a higher priority interrupt, which would result in the higher
> priority interrupt never being handled.
>
> The new interrupt's priority was being compared against the CPPR
> (Current Processor Priority Register) instead of the PIPR (Post
> Interrupt Priority Register), as was required by the XIVE spec.
> This allowed for a window between raising an interrupt and ACK'ing
> the interrupt where a lower priority interrupt could slip in.
>
> Fixes: 26c55b99418 ("ppc/xive2: Process group backlog when updating the CPPR")
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 453fe37f18..2b4d0f51be 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1283,7 +1283,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> * priority to know if the thread can take the interrupt now or if
> * it is precluded.
> */
> - if (priority < alt_regs[TM_CPPR]) {
> + if (priority < alt_regs[TM_PIPR]) {
> return false;
> }
> return true;
* Re: [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update
2025-05-12 3:10 ` [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Nicholas Piggin
@ 2025-05-14 14:32 ` Caleb Schlossin
2025-05-14 18:53 ` Mike Kowal
2025-05-16 0:15 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:32 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> According to the XIVE spec, updating the CPPR should also update the
> PIPR. The final value of the PIPR depends on other factors, but it
> should never be set to a value that is above the CPPR.
>
> Also added support for redistributing an active group interrupt when it
> is precluded as a result of changing the CPPR value.
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 2b4d0f51be..1971c05fa1 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -995,7 +995,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
> }
> - regs[TM_PIPR] = pipr_min;
> +
> + /* PIPR should not be set to a value greater than CPPR */
> + regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
>
> rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> if (rc) {
* Re: [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
2025-05-12 3:10 ` [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Nicholas Piggin
@ 2025-05-14 14:32 ` Caleb Schlossin
2025-05-14 18:54 ` Mike Kowal
2025-05-15 15:43 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:32 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Group interrupts should not be taken from the backlog and presented
> if they are precluded by CPPR.
>
> Fixes: 855434b3b8 ("ppc/xive2: Process group backlog when pushing an OS context")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 1971c05fa1..8ede95b671 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -845,7 +845,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
> first_group, &group_level);
> regs[TM_LSMFB] = group_prio;
> - if (regs[TM_LGS] && group_prio < backlog_prio) {
> + if (regs[TM_LGS] && group_prio < backlog_prio &&
> + group_prio < regs[TM_CPPR]) {
> +
> /* VP can take a group interrupt */
> xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
> group_prio, group_level);
* Re: [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority
2025-05-12 3:10 ` [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority Nicholas Piggin
@ 2025-05-14 14:33 ` Caleb Schlossin
2025-05-14 18:57 ` Mike Kowal
2025-05-15 15:45 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:33 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> The group interrupt delivery flow selects the group backlog scan if
> LSMFB < IPB, but that scan may find an interrupt with a priority >=
> IPB. In that case, the VP-direct interrupt should be chosen. This
> extends to selecting the lowest prio between POOL and PHYS rings.
>
> Implement this just by re-starting the selection logic if the
> backlog irq was not found or priority did not match LSMFB (LSMFB
> is updated so next time around it would see the right value and
> not loop infinitely).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 32 ++++++++++++++++++++++----------
> 1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 8ede95b671..de139dcfbf 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -939,7 +939,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
> + uint8_t old_cppr, backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> bool group_enabled;
> uint32_t nvp_blk, nvp_idx;
> @@ -961,10 +961,12 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * Recompute the PIPR based on local pending interrupts. It will
> * be adjusted below if needed in case of pending group interrupts.
> */
> +again:
> pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> group_enabled = !!regs[TM_LGS];
> - lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
> + lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
> ring_min = ring;
> + group_level = 0;
>
> /* PHYS updates also depend on POOL values */
> if (ring == TM_QW3_HV_PHYS) {
> @@ -998,9 +1000,6 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - /* PIPR should not be set to a value greater than CPPR */
> - regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> -
> rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> if (rc) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
> @@ -1019,7 +1018,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> if (group_enabled &&
> lsmfb_min < cppr &&
> - lsmfb_min < regs[TM_PIPR]) {
> + lsmfb_min < pipr_min) {
> /*
> * Thread has seen a group interrupt with a higher priority
> * than the new cppr or pending local interrupt. Check the
> @@ -1048,12 +1047,25 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> nvp_blk, nvp_idx,
> first_group, &group_level);
> tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
> - if (backlog_prio != 0xFF) {
> - xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> - backlog_prio, group_level);
> - regs[TM_PIPR] = backlog_prio;
> + if (backlog_prio != lsmfb_min) {
> + /*
> + * If the group backlog scan finds a less favored or no interrupt,
> + * then re-do the processing which may turn up a more favored
> + * interrupt from IPB or the other pool. Backlog should not
> + * find a priority < LSMFB.
> + */
> + g_assert(backlog_prio >= lsmfb_min);
> + goto again;
> }
> +
> + xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> + backlog_prio, group_level);
> + pipr_min = backlog_prio;
> }
> +
> + /* PIPR should not be set to a value greater than CPPR */
> + regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> +
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, group_level);
> }
* Re: [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt
2025-05-12 3:10 ` [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Nicholas Piggin
@ 2025-05-14 14:33 ` Caleb Schlossin
2025-05-14 18:58 ` Mike Kowal
2025-05-15 15:46 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:33 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> If CPPR is lowered to preclude the pending interrupt, NSR should be
> cleared and the qemu_irq should be lowered. This avoids some cases
> of spurious interrupts.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index cebe409a1a..6293ea4361 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -110,6 +110,9 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> regs[TM_IPB], alt_regs[TM_PIPR],
> alt_regs[TM_CPPR], alt_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> + } else {
> + alt_regs[TM_NSR] = 0;
> + qemu_irq_lower(xive_tctx_output(tctx, ring));
> }
> }
>
* Re: [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting
2025-05-12 3:10 ` [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting Nicholas Piggin
@ 2025-05-14 14:34 ` Caleb Schlossin
2025-05-14 19:07 ` Mike Kowal
2025-05-15 15:47 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:34 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Looks good.
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Have xive_tctx_accept clear NSR in one shot rather than masking out bits
> as they are tested, which makes it clear it's reset to 0, and does not
> have a partial NSR value in the register.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 6293ea4361..bb40a69c5b 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -68,13 +68,11 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
> - regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
> - } else {
> + if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
> alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> - /* Drop the exception bit and any group/crowd */
> + /* Clear the exception from NSR */
> regs[TM_NSR] = 0;
>
> trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
* Re: [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions
2025-05-12 3:10 ` [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions Nicholas Piggin
@ 2025-05-14 14:35 ` Caleb Schlossin
2025-05-14 19:04 ` Mike Kowal
2025-05-15 15:48 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:35 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Rather than functions to return masks to test NSR bits, have functions
> to test those bits directly. This should be no functional change; it
> just makes the code more readable.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 51 +++++++++++++++++++++++++++++++++++--------
> include/hw/ppc/xive.h | 4 ++++
> 2 files changed, 46 insertions(+), 9 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index bb40a69c5b..c2da23f9ea 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -25,6 +25,45 @@
> /*
> * XIVE Thread Interrupt Management context
> */
> +bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
> +{
> + switch (ring) {
> + case TM_QW1_OS:
> + return !!(nsr & TM_QW1_NSR_EO);
> + case TM_QW2_HV_POOL:
> + case TM_QW3_HV_PHYS:
> + return !!(nsr & TM_QW3_NSR_HE);
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr)
> +{
> + if ((nsr & TM_NSR_GRP_LVL) > 0) {
> + g_assert(xive_nsr_indicates_exception(ring, nsr));
> + return true;
> + }
> + return false;
> +}
> +
> +uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr)
> +{
> + /* NSR determines if pool/phys ring is for phys or pool interrupt */
> + if ((ring == TM_QW3_HV_PHYS) || (ring == TM_QW2_HV_POOL)) {
> + uint8_t he = (nsr & TM_QW3_NSR_HE) >> 6;
> +
> + if (he == TM_QW3_NSR_HE_PHYS) {
> + return TM_QW3_HV_PHYS;
> + } else if (he == TM_QW3_NSR_HE_POOL) {
> + return TM_QW2_HV_POOL;
> + } else {
> + /* Don't support LSI mode */
> + g_assert_not_reached();
> + }
> + }
> + return ring;
> +}
>
> static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> {
> @@ -48,18 +87,12 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>
> qemu_irq_lower(xive_tctx_output(tctx, ring));
>
> - if (regs[TM_NSR] != 0) {
> + if (xive_nsr_indicates_exception(ring, nsr)) {
> uint8_t cppr = regs[TM_PIPR];
> uint8_t alt_ring;
> uint8_t *alt_regs;
>
> - /* POOL interrupt uses IPB in QW2, POOL ring */
> - if ((ring == TM_QW3_HV_PHYS) &&
> - ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
> - alt_ring = TM_QW2_HV_POOL;
> - } else {
> - alt_ring = ring;
> - }
> + alt_ring = xive_nsr_exception_ring(ring, nsr);
> alt_regs = &tctx->regs[alt_ring];
>
> regs[TM_CPPR] = cppr;
> @@ -68,7 +101,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
> + if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 538f438681..28f0f1b79a 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -365,6 +365,10 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
> return *((uint32_t *) &ring[TM_WORD2]);
> }
>
> +bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
> +bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
> +uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
> +
> /*
> * XIVE Router
> */
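For readers unfamiliar with the TIMA layout, the HE decoding that this patch centralises in xive_nsr_exception_ring() can be sketched as follows (the ring offsets and bit positions are assumptions for illustration; the authoritative values are the TM_* definitions in xive_regs.h):

```python
# Illustration of xive_nsr_exception_ring(): for the HV rings, the NSR
# HE field (assumed here to occupy bits 6-7, POOL=0b01, PHYS=0b10)
# encodes whether the pending exception targets the PHYS or POOL ring.
TM_QW1_OS, TM_QW2_HV_POOL, TM_QW3_HV_PHYS = 0x10, 0x20, 0x30  # assumed offsets
NSR_HE_MASK = 0xC0
NSR_HE_POOL, NSR_HE_PHYS = 1, 2

def nsr_exception_ring(ring: int, nsr: int) -> int:
    """Which ring's registers the exception latched in 'nsr' belongs to."""
    if ring in (TM_QW3_HV_PHYS, TM_QW2_HV_POOL):
        he = (nsr & NSR_HE_MASK) >> 6
        if he == NSR_HE_PHYS:
            return TM_QW3_HV_PHYS
        if he == NSR_HE_POOL:
            return TM_QW2_HV_POOL
        raise AssertionError("LSI mode not supported")
    return ring  # OS ring: only the EO bit, ring is unambiguous
```

This is also why the earlier patch 06 fix matters: POOL (0b01) and PHYS (0b10) happen to use exclusive bits, so a bitwise-AND test worked by accident, but comparing the whole HE field is the correct check.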
* Re: [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts
2025-05-12 3:10 ` [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts Nicholas Piggin
@ 2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:01 ` Mike Kowal
2025-05-15 15:49 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:36 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> This improves the implementation of pulling pool and phys contexts in
> XIVE1, by following the OS pulling code more closely.
>
> In particular, the old ring data is returned rather than the modified
> data, and irq signals are reset on pull.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 58 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index c2da23f9ea..1a94642c62 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -241,25 +241,75 @@ static uint64_t xive_tm_ack_hv_reg(XivePresenter *xptr, XiveTCTX *tctx,
> return xive_tctx_accept(tctx, TM_QW3_HV_PHYS);
> }
>
> +static void xive_pool_cam_decode(uint32_t cam, uint8_t *nvt_blk,
> + uint32_t *nvt_idx, bool *vp)
> +{
> + if (nvt_blk) {
> + *nvt_blk = xive_nvt_blk(cam);
> + }
> + if (nvt_idx) {
> + *nvt_idx = xive_nvt_idx(cam);
> + }
> + if (vp) {
> + *vp = !!(cam & TM_QW2W2_VP);
> + }
> +}
> +
> +static uint32_t xive_tctx_get_pool_cam(XiveTCTX *tctx, uint8_t *nvt_blk,
> + uint32_t *nvt_idx, bool *vp)
> +{
> + uint32_t qw2w2 = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
> + uint32_t cam = be32_to_cpu(qw2w2);
> +
> + xive_pool_cam_decode(cam, nvt_blk, nvt_idx, vp);
> + return qw2w2;
> +}
> +
> +static void xive_tctx_set_pool_cam(XiveTCTX *tctx, uint32_t qw2w2)
> +{
> + memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
> +}
> +
> static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size)
> {
> - uint32_t qw2w2_prev = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
> uint32_t qw2w2;
> + uint32_t qw2w2_new;
> + uint8_t nvt_blk;
> + uint32_t nvt_idx;
> + bool vp;
>
> - qw2w2 = xive_set_field32(TM_QW2W2_VP, qw2w2_prev, 0);
> - memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
> + qw2w2 = xive_tctx_get_pool_cam(tctx, &nvt_blk, &nvt_idx, &vp);
> +
> + if (!vp) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid POOL NVT %x/%x !?\n",
> + nvt_blk, nvt_idx);
> + }
> +
> + /* Invalidate CAM line */
> + qw2w2_new = xive_set_field32(TM_QW2W2_VP, qw2w2, 0);
> + xive_tctx_set_pool_cam(tctx, qw2w2_new);
> +
> + xive_tctx_reset_signal(tctx, TM_QW1_OS);
> + xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
> return qw2w2;
> }
>
> static uint64_t xive_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size)
> {
> - uint8_t qw3b8_prev = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> - uint8_t qw3b8;
> + uint8_t qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> + uint8_t qw3b8_new;
> +
> + qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> + if (!(qw3b8 & TM_QW3B8_VT)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid PHYS thread!?\n");
> + }
> + qw3b8_new = qw3b8 & ~TM_QW3B8_VT;
> + tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8_new;
>
> - qw3b8 = qw3b8_prev & ~TM_QW3B8_VT;
> - tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8;
> + xive_tctx_reset_signal(tctx, TM_QW1_OS);
> + xive_tctx_reset_signal(tctx, TM_QW3_HV_PHYS);
> return qw3b8;
> }
>
> @@ -489,7 +539,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> qw1w2 = xive_tctx_get_os_cam(tctx, &nvt_blk, &nvt_idx, &vo);
>
> if (!vo) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVT %x/%x !?\n",
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid OS NVT %x/%x !?\n",
> nvt_blk, nvt_idx);
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 17/50] pnv/xive2: Support ESB Escalation
2025-05-12 3:10 ` [PATCH 17/50] pnv/xive2: Support ESB Escalation Nicholas Piggin
@ 2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:00 ` Mike Kowal
2025-05-16 0:05 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:36 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin, Glenn Miles
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> Add support for XIVE ESB Interrupt Escalation.
>
> Suggested-by: Michael Kowal <kowal@linux.ibm.com>
> [This change was taken from a patch provided by Michael Kowal.]
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> ---
> hw/intc/xive2.c | 62 ++++++++++++++++++++++++++++++-------
> include/hw/ppc/xive2.h | 1 +
> include/hw/ppc/xive2_regs.h | 13 +++++---
> 3 files changed, 59 insertions(+), 17 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index de139dcfbf..0993e792cc 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1552,18 +1552,39 @@ do_escalation:
> }
> }
>
> - /*
> - * The END trigger becomes an Escalation trigger
> - */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + if (xive2_end_is_escalate_end(&end)) {
> + /*
> + * Perform END Adaptive escalation processing
> + * The END trigger becomes an Escalation trigger
> + */
> + xive2_router_end_notify(xrtr,
> + xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + } /* end END adaptive escalation */
> +
> + else {
> + uint32_t lisn; /* Logical Interrupt Source Number */
> +
> + /*
> + * Perform ESB escalation processing
> + * E[N] == 1 --> N
> + * Req[Block] <- E[ESB_Block]
> + * Req[Index] <- E[ESB_Index]
> + * Req[Offset] <- 0x000
> + * Execute <ESB Store> Req command
> + */
> + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
> +
> + xive2_notify(xrtr, lisn, true /* pq_checked */);
> + }
> +
> + return;
> }
>
> -void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked)
> {
> - Xive2Router *xrtr = XIVE2_ROUTER(xn);
> uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
> uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
> Xive2Eas eas;
> @@ -1606,13 +1627,30 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> return;
> }
>
> + /* TODO: add support for EAS resume */
> + if (xive2_eas_is_resume(&eas)) {
> + qemu_log_mask(LOG_UNIMP,
> + "XIVE: EAS resume processing unimplemented - LISN %x\n",
> + lisn);
> + return;
> + }
> +
> /*
> * The event trigger becomes an END trigger
> */
> xive2_router_end_notify(xrtr,
> - xive_get_field64(EAS2_END_BLOCK, eas.w),
> - xive_get_field64(EAS2_END_INDEX, eas.w),
> - xive_get_field64(EAS2_END_DATA, eas.w));
> + xive_get_field64(EAS2_END_BLOCK, eas.w),
> + xive_get_field64(EAS2_END_INDEX, eas.w),
> + xive_get_field64(EAS2_END_DATA, eas.w));
> + return;
> +}
> +
> +void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xn);
> +
> + xive2_notify(xrtr, lisn, pq_checked);
> + return;
> }
>
> static const Property xive2_router_properties[] = {
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 8cdf819174..2436ddb5e5 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -80,6 +80,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
> uint32_t xive2_router_get_config(Xive2Router *xrtr);
>
> void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
>
> /*
> * XIVE2 Presenter (POWER10)
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 3c28de8a30..2c535ec0d0 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -39,15 +39,18 @@
>
> typedef struct Xive2Eas {
> uint64_t w;
> -#define EAS2_VALID PPC_BIT(0)
> -#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> -#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> -#define EAS2_MASKED PPC_BIT(32) /* Masked */
> -#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> +#define EAS2_VALID PPC_BIT(0)
> +#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
> +#define EAS2_RESUME PPC_BIT(3) /* END Resume (unimp) */
> +#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> +#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> +#define EAS2_MASKED PPC_BIT(32) /* Masked */
> +#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> } Xive2Eas;
>
> #define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
> #define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
> +#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
>
> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
>
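The EAS2_* field definitions above use IBM (MSB-0) bit numbering. A minimal sketch of how PPC_BIT/PPC_BITMASK expand, and why EAS2_RESUME lands where it does, assuming QEMU's 64-bit macro definitions:

```c
#include <assert.h>
#include <stdint.h>

/* MSB-0 numbering: bit 0 is the most significant bit of the 64-bit word.
 * These mirror QEMU's PPC_BIT()/PPC_BITMASK() helpers. */
#define PPC_BIT(bit)        (0x8000000000000000ULL >> (bit))
#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))

#define EAS2_VALID     PPC_BIT(0)
#define EAS2_RESUME    PPC_BIT(3)
#define EAS2_END_BLOCK PPC_BITMASK(4, 7)
#define EAS2_MASKED    PPC_BIT(32)

/* Host-endian test of the resume flag (the real xive2_eas_is_resume()
 * byte-swaps the big-endian EAS word first) */
static int eas_is_resume(uint64_t w)
{
    return !!(w & EAS2_RESUME);
}
```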
* Re: [PATCH 18/50] pnv/xive2: Print value in invalid register write logging
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
@ 2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:09 ` Mike Kowal
` (2 subsequent siblings)
3 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:36 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> This can make it easier to see what the target system is trying to
> do.
>
> [npiggin: split from larger patch]
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 24 ++++++++++++++++--------
> 1 file changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d7ca97ecbb..fcf5b2e75c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1197,7 +1197,8 @@ static void pnv_xive2_ic_cq_write(void *opaque, hwaddr offset,
> case CQ_FIRMASK_OR: /* FIR error reporting */
> break;
> default:
> - xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx, offset);
> + xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1495,7 +1496,8 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "VC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "VC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1703,7 +1705,8 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "PC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "PC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1790,7 +1793,8 @@ static void pnv_xive2_ic_tctxt_write(void *opaque, hwaddr offset,
> xive->tctxt_regs[reg] = val;
> break;
> default:
> - xive2_error(xive, "TCTXT: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "TCTXT: invalid write @0x%"HWADDR_PRIx
> + " data 0x%"PRIx64, offset, val);
> return;
> }
> }
> @@ -1861,7 +1865,8 @@ static void pnv_xive2_xscom_write(void *opaque, hwaddr offset,
> pnv_xive2_ic_tctxt_write(opaque, mmio_offset, val, size);
> break;
> default:
> - xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1929,7 +1934,8 @@ static void pnv_xive2_ic_notify_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1971,7 +1977,8 @@ static void pnv_xive2_ic_lsi_write(void *opaque, hwaddr offset,
> {
> PnvXive2 *xive = PNV_XIVE2(opaque);
>
> - xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> }
>
> static const MemoryRegionOps pnv_xive2_ic_lsi_ops = {
> @@ -2074,7 +2081,8 @@ static void pnv_xive2_ic_sync_write(void *opaque, hwaddr offset,
> inject_type = PNV_XIVE2_QUEUE_NXC_ST_RMT_CI;
> break;
> default:
> - xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
* Re: [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
2025-05-12 3:10 ` [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Nicholas Piggin
@ 2025-05-14 14:37 ` Caleb Schlossin
2025-05-14 19:10 ` Mike Kowal
2025-05-15 15:51 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:37 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> Firmware expects to read back the WATCH_FULL bit from the VC_ENDC_WATCH_SPEC
> register, so don't clear it on read.
>
> Don't bother clearing the reads-as-zero CONFLICT bit because it's masked
> at write already.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/pnv_xive2.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index fcf5b2e75c..3c26cd6b77 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1329,7 +1329,6 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH2_SPEC:
> case VC_ENDC_WATCH3_SPEC:
> watch_engine = (offset - VC_ENDC_WATCH0_SPEC) >> 6;
> - xive->vc_regs[reg] &= ~(VC_ENDC_WATCH_FULL | VC_ENDC_WATCH_CONFLICT);
> pnv_xive2_endc_cache_watch_release(xive, watch_engine);
> val = xive->vc_regs[reg];
> break;
* Re: [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
@ 2025-05-14 14:37 ` Caleb Schlossin
2025-05-14 19:11 ` Mike Kowal
` (2 subsequent siblings)
3 siblings, 0 replies; 192+ messages in thread
From: Caleb Schlossin @ 2025-05-14 14:37 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
On 5/11/25 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> Writes to the Flush Control registers were logged as invalid even
> though they are allowed. Clearing the unsupported want_cache_disable
> feature is permitted, so don't log an error in that case.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 36 ++++++++++++++++++++++++++++++++----
> 1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 3c26cd6b77..c9374f0eee 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1411,7 +1411,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * ESB cache updates (not modeled)
> */
> - /* case VC_ESBC_FLUSH_CTRL: */
> + case VC_ESBC_FLUSH_CTRL:
> + if (val & VC_ESBC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ESBC_FLUSH_POLL:
> xive->vc_regs[VC_ESBC_FLUSH_CTRL >> 3] |= VC_ESBC_FLUSH_CTRL_POLL_VALID;
> /* ESB update */
> @@ -1427,7 +1434,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * EAS cache updates (not modeled)
> */
> - /* case VC_EASC_FLUSH_CTRL: */
> + case VC_EASC_FLUSH_CTRL:
> + if (val & VC_EASC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_EASC_FLUSH_POLL:
> xive->vc_regs[VC_EASC_FLUSH_CTRL >> 3] |= VC_EASC_FLUSH_CTRL_POLL_VALID;
> /* EAS update */
> @@ -1466,7 +1480,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
>
> - /* case VC_ENDC_FLUSH_CTRL: */
> + case VC_ENDC_FLUSH_CTRL:
> + if (val & VC_ENDC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ENDC_FLUSH_POLL:
> xive->vc_regs[VC_ENDC_FLUSH_CTRL >> 3] |= VC_ENDC_FLUSH_CTRL_POLL_VALID;
> break;
> @@ -1687,7 +1708,14 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> pnv_xive2_nxc_update(xive, watch_engine);
> break;
>
> - /* case PC_NXC_FLUSH_CTRL: */
> + case PC_NXC_FLUSH_CTRL:
> + if (val & PC_NXC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case PC_NXC_FLUSH_POLL:
> xive->pc_regs[PC_NXC_FLUSH_CTRL >> 3] |= PC_NXC_FLUSH_CTRL_POLL_VALID;
> break;
* Re: [PATCH 01/50] ppc/xive: Fix xive trace event output
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
2025-05-14 14:26 ` Caleb Schlossin
@ 2025-05-14 18:41 ` Mike Kowal
2025-05-15 15:30 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:41 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Typo, IBP should be IPB.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/trace-events | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index 0ba9a02e73..f77f9733c9 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -274,9 +274,9 @@ kvm_xive_cpu_connect(uint32_t id) "connect CPU%d to KVM device"
> kvm_xive_source_reset(uint32_t srcno) "IRQ 0x%x"
>
> # xive.c
> -xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
> -xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
> -xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> +xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
> +xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
> +xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
* Re: [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs
2025-05-12 3:10 ` [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
@ 2025-05-14 18:42 ` Mike Kowal
2025-05-15 15:31 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:42 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Report access size in XIVE TM operation error logs.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 3eb28c2265..80b07a0afe 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -326,7 +326,7 @@ static void xive_tm_raw_write(XiveTCTX *tctx, hwaddr offset, uint64_t value,
> */
> if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA @%"
> - HWADDR_PRIx"\n", offset);
> + HWADDR_PRIx" size %d\n", offset, size);
> return;
> }
>
> @@ -357,7 +357,7 @@ static uint64_t xive_tm_raw_read(XiveTCTX *tctx, hwaddr offset, unsigned size)
> */
> if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access at TIMA @%"
> - HWADDR_PRIx"\n", offset);
> + HWADDR_PRIx" size %d\n", offset, size);
> return -1;
> }
>
> @@ -688,7 +688,7 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
> - "@%"HWADDR_PRIx"\n", offset);
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> } else {
> xto->write_handler(xptr, tctx, offset, value, size);
> }
> @@ -727,7 +727,7 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
> - "@%"HWADDR_PRIx"\n", offset);
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> return -1;
> }
> ret = xto->read_handler(xptr, tctx, offset, size);
* Re: [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes
2025-05-12 3:10 ` [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
@ 2025-05-14 18:45 ` Mike Kowal
2025-05-16 0:06 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:45 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The queue size of an Event Notification Descriptor (END)
> is determined by its 'cl' and QsZ fields. If the cl field
> is 1, then the queue size (in bytes) will be a 128B cache
> line * 2^QsZ, with QsZ limited to 4. Otherwise, it will be
> 4096B * 2^QsZ, with QsZ limited to 12.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Fixes: f8a233dedf2 ("ppc/xive2: Introduce a XIVE2 core framework")
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 25 +++++++++++++++++++------
> include/hw/ppc/xive2_regs.h | 1 +
> 2 files changed, 20 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 7d584dfafa..790152a2a6 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -188,12 +188,27 @@ void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
> (uint32_t) xive_get_field64(EAS2_END_DATA, eas->w));
> }
>
> +#define XIVE2_QSIZE_CHUNK_CL 128
> +#define XIVE2_QSIZE_CHUNK_4k 4096
> +/* Calculate max number of queue entries for an END */
> +static uint32_t xive2_end_get_qentries(Xive2End *end)
> +{
> + uint32_t w3 = end->w3;
> + uint32_t qsize = xive_get_field32(END2_W3_QSIZE, w3);
> + if (xive_get_field32(END2_W3_CL, w3)) {
> + g_assert(qsize <= 4);
> + return (XIVE2_QSIZE_CHUNK_CL << qsize) / sizeof(uint32_t);
> + } else {
> + g_assert(qsize <= 12);
> + return (XIVE2_QSIZE_CHUNK_4k << qsize) / sizeof(uint32_t);
> + }
> +}
> +
> void xive2_end_queue_pic_print_info(Xive2End *end, uint32_t width, GString *buf)
> {
> uint64_t qaddr_base = xive2_end_qaddr(end);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
> int i;
>
> /*
> @@ -223,8 +238,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
> uint64_t qaddr_base = xive2_end_qaddr(end);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
>
> uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
> uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
> @@ -341,13 +355,12 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, GString *buf)
> static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> {
> uint64_t qaddr_base = xive2_end_qaddr(end);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
>
> uint64_t qaddr = qaddr_base + (qindex << 2);
> uint32_t qdata = cpu_to_be32((qgen << 31) | (data & 0x7fffffff));
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
>
> if (dma_memory_write(&address_space_memory, qaddr, &qdata, sizeof(qdata),
> MEMTXATTRS_UNSPECIFIED)) {
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index b11395c563..3c28de8a30 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -87,6 +87,7 @@ typedef struct Xive2End {
> #define END2_W2_EQ_ADDR_HI PPC_BITMASK32(8, 31)
> uint32_t w3;
> #define END2_W3_EQ_ADDR_LO PPC_BITMASK32(0, 24)
> +#define END2_W3_CL PPC_BIT32(27)
> #define END2_W3_QSIZE PPC_BITMASK32(28, 31)
> uint32_t w4;
> #define END2_W4_END_BLOCK PPC_BITMASK32(4, 7)
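The size rule in the commit message can be captured in a few lines; this is a sketch of the calculation only (the in-tree helper additionally asserts the QsZ bounds):

```c
#include <assert.h>
#include <stdint.h>

/* Max number of 4-byte queue entries for an END:
 * cache-line queues: 128B << QsZ, QsZ <= 4
 * page-based queues: 4096B << QsZ, QsZ <= 12 */
static uint32_t end_qentries(int cl, uint32_t qsize)
{
    uint32_t chunk = cl ? 128u : 4096u;
    return (chunk << qsize) / sizeof(uint32_t);
}
```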
* Re: [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
@ 2025-05-14 18:46 ` Mike Kowal
2025-05-15 15:34 ` Miles Glenn
2025-05-16 0:08 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:46 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> In a multi-chip environment there will be remote/forwarded VSDs. The
> check to find the INT controller (XIVE) matching the remote block
> number was comparing against the INT's chip number, but block numbers
> are not tied to a chip number. The matching remote INT is the one
> whose MMIO BAR for the VSD type matches the forwarded VSD address.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d1713b406c..30b4ab2efe 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -102,12 +102,10 @@ static uint32_t pnv_xive2_block_id(PnvXive2 *xive)
> }
>
> /*
> - * Remote access to controllers. HW uses MMIOs. For now, a simple scan
> - * of the chips is good enough.
> - *
> - * TODO: Block scope support
> + * Remote access to INT controllers. HW uses MMIOs(?). For now, a simple
> + * scan of all the chips INT controller is good enough.
> */
> -static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> +static PnvXive2 *pnv_xive2_get_remote(uint32_t vsd_type, hwaddr fwd_addr)
> {
> PnvMachineState *pnv = PNV_MACHINE(qdev_get_machine());
> int i;
> @@ -116,10 +114,22 @@ static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> PnvXive2 *xive = &chip10->xive;
>
> - if (pnv_xive2_block_id(xive) == blk) {
> + /*
> + * Is this the XIVE whose MMIO BAR for this VSD type matches the
> + * forwarded VSD address?
> + */
> + if ((vsd_type == VST_ESB && fwd_addr == xive->esb_base) ||
> + (vsd_type == VST_END && fwd_addr == xive->end_base) ||
> + ((vsd_type == VST_NVP ||
> + vsd_type == VST_NVG) && fwd_addr == xive->nvpg_base) ||
> + (vsd_type == VST_NVC && fwd_addr == xive->nvc_base)) {
> return xive;
> }
> }
> +
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "XIVE: >>>>> pnv_xive2_get_remote() vsd_type %u fwd_addr 0x%lX NOT FOUND\n",
> + vsd_type, fwd_addr);
> return NULL;
> }
>
> @@ -252,8 +262,7 @@ static uint64_t pnv_xive2_vst_addr(PnvXive2 *xive, uint32_t type, uint8_t blk,
>
> /* Remote VST access */
> if (GETFIELD(VSD_MODE, vsd) == VSD_MODE_FORWARD) {
> - xive = pnv_xive2_get_remote(blk);
> -
> + xive = pnv_xive2_get_remote(type, (vsd & VSD_ADDRESS_MASK));
> return xive ? pnv_xive2_vst_addr(xive, type, blk, idx) : 0;
> }
>
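The BAR-matching rule the patch introduces can be sketched independently of the QEMU object model; the struct and base addresses below are illustrative stand-ins for the PnvXive2 fields:

```c
#include <assert.h>
#include <stdint.h>

enum vst_type { VST_ESB, VST_END, VST_NVP, VST_NVG, VST_NVC };

/* Illustrative stand-in for the per-controller MMIO BARs */
struct xive_ctrl {
    uint64_t esb_base, end_base, nvpg_base, nvc_base;
};

/* A forwarded VSD matches the controller whose BAR for that VSD
 * type equals the forwarded address (NVP and NVG share one BAR) */
static int vsd_matches(const struct xive_ctrl *x, enum vst_type type,
                       uint64_t fwd_addr)
{
    switch (type) {
    case VST_ESB: return fwd_addr == x->esb_base;
    case VST_END: return fwd_addr == x->end_base;
    case VST_NVP:
    case VST_NVG: return fwd_addr == x->nvpg_base;
    case VST_NVC: return fwd_addr == x->nvc_base;
    default:      return 0;
    }
}
```

A scan over all chips then returns the first controller for which `vsd_matches()` holds, which is the structure of the loop in the patch.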
* Re: [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority
2025-05-12 3:10 ` [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
@ 2025-05-14 18:48 ` Mike Kowal
2025-05-15 15:36 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:48 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Pushing a context and loading IPB from NVP is defined to merge ('or')
> that IPB into the TIMA IPB register. PIPR should therefore be calculated
> based on the final IPB value, not just the NVP value.
>
> Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
>
> Thanks MAK
>
> Fixes: 9d2b6058c5b ("ppc/xive2: Add grouping level to notification")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 790152a2a6..4dd04a0398 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -835,8 +835,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> }
> + /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> - backlog_prio = xive_ipb_to_pipr(ipb);
> + backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
> backlog_level = 0;
>
> first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
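The fix is about computing PIPR from the merged IPB rather than from the NVP's contribution alone. A sketch, assuming the usual encoding where IPB bit (0x80 >> p) marks pending priority p and PIPR is the most favoured pending priority:

```c
#include <assert.h>
#include <stdint.h>

/* Most favoured (lowest-numbered) pending priority in an IPB byte;
 * 0xFF means nothing is pending.  Mirrors the shape of
 * xive_ipb_to_pipr(). */
static uint8_t ipb_to_pipr(uint8_t ipb)
{
    uint8_t prio;

    if (!ipb) {
        return 0xFF;
    }
    for (prio = 0; !(ipb & (0x80 >> prio)); prio++) {
        ;
    }
    return prio;
}
```

With a TIMA IPB of 0x40 (priority 1 pending) and an NVP backlog of 0x10 (priority 3), the merged value must yield PIPR 1; computing from the NVP value alone would wrongly give 3, which is the bug the one-line change fixes.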
* Re: [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching
2025-05-12 3:10 ` [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
@ 2025-05-14 18:49 ` Mike Kowal
2025-05-15 15:39 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:49 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Test that the NSR exception bit field is equal to the pool ring value,
> rather than any common bits set, which is more correct (although there
> is no practical bug because the LSI NSR type is not implemented and
> POOL/PHYS NSR are encoded with exclusive bits).
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Fixes: 4c3ccac636 ("pnv/xive: Add special handling for pool targets")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 80b07a0afe..cebe409a1a 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -54,7 +54,8 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> uint8_t *alt_regs;
>
> /* POOL interrupt uses IPB in QW2, POOL ring */
> - if ((ring == TM_QW3_HV_PHYS) && (nsr & (TM_QW3_NSR_HE_POOL << 6))) {
> + if ((ring == TM_QW3_HV_PHYS) &&
> + ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
> alt_ring = TM_QW2_HV_POOL;
> } else {
> alt_ring = ring;
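The distinction between testing common bits and testing field equality can be seen with the (assumed) NSR encodings, where the HE field occupies the top two bits and POOL/PHYS are encodings 2 and 3:

```c
#include <assert.h>
#include <stdint.h>

#define TM_QW3_NSR_HE      0xC0 /* 2-bit exception field, top of NSR */
#define TM_QW3_NSR_HE_POOL 2
#define TM_QW3_NSR_HE_PHYS 3

/* Fixed check: the whole HE field must equal the POOL encoding */
static int nsr_is_pool(uint8_t nsr)
{
    return (nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6);
}

/* Old-style check: any common bit set.  PHYS (0b11) shares a bit
 * with POOL (0b10), so a PHYS NSR would also satisfy it. */
static int nsr_is_pool_loose(uint8_t nsr)
{
    return !!(nsr & (TM_QW3_NSR_HE_POOL << 6));
}
```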
* Re: [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
@ 2025-05-14 18:50 ` Mike Kowal
2025-05-15 15:41 ` Miles Glenn
2025-05-16 0:09 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:50 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> When the END Event Queue wraps, the END EQ Generation bit is flipped and
> the Generation Flipped bit is set to one. On an END cache watch read
> operation, the Generation Flipped bit needs to be reset.
>
> While debugging an error, the "END not valid" error messages were
> modified to include the method name, since they were all the same.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 3 ++-
> hw/intc/xive2.c | 4 ++--
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 30b4ab2efe..72cdf0f20c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1325,10 +1325,11 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH3_DATA0:
> /*
> * Load DATA registers from cache with data requested by the
> - * SPEC register
> + * SPEC register. Clear gen_flipped bit in word 1.
> */
> watch_engine = (offset - VC_ENDC_WATCH0_DATA0) >> 6;
> pnv_xive2_end_cache_load(xive, watch_engine);
> + xive->vc_regs[reg] &= ~(uint64_t)END2_W1_GEN_FLIPPED;
> val = xive->vc_regs[reg];
> break;
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 4dd04a0398..453fe37f18 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -374,8 +374,8 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> qgen ^= 1;
> end->w1 = xive_set_field32(END2_W1_GENERATION, end->w1, qgen);
>
> - /* TODO(PowerNV): reset GF bit on a cache watch operation */
> - end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, qgen);
> + /* Set gen flipped to 1, it gets reset on a cache watch operation */
> + end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, 1);
> }
> end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
> }
* Re: [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
@ 2025-05-14 18:51 ` Mike Kowal
2025-05-15 15:42 ` Miles Glenn
2025-05-16 0:12 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:51 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The current xive algorithm for finding a matching group vCPU
> target always uses the first vCPU found. And, since it always
> starts the search with thread 0 of a core, thread 0 is almost
> always used to handle group interrupts. This can lead to additional
> interrupt latency and poor performance for interrupt intensive
> work loads.
>
> Change this to use a simple round-robin algorithm for deciding which
> thread number to use when starting a search, leading to a more
> distributed use of threads for handling group interrupts.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> [npiggin: Also round-robin among threads, not just cores]
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 72cdf0f20c..d7ca97ecbb 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> int i, j;
> bool gen1_tima_os =
> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> + static int next_start_core;
> + static int next_start_thread;
> + int start_core = next_start_core;
> + int start_thread = next_start_thread;
>
> for (i = 0; i < chip->nr_cores; i++) {
> - PnvCore *pc = chip->cores[i];
> + PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
> CPUCore *cc = CPU_CORE(pc);
>
> for (j = 0; j < cc->nr_threads; j++) {
> - PowerPCCPU *cpu = pc->threads[j];
> + /* Start search for match with different thread each call */
> + PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
> XiveTCTX *tctx;
> int ring;
>
> @@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> if (!match->tctx) {
> match->ring = ring;
> match->tctx = tctx;
> +
> + next_start_thread = j + start_thread + 1;
> + if (next_start_thread >= cc->nr_threads) {
> + next_start_thread = 0;
> + next_start_core = i + start_core + 1;
> + if (next_start_core >= chip->nr_cores) {
> + next_start_core = 0;
> + }
> + }
> }
> count++;
> }
* Re: [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq
2025-05-12 3:10 ` [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
@ 2025-05-14 18:52 ` Mike Kowal
2025-05-16 0:12 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:52 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> A problem was seen where UART interrupts would be lost, resulting in the
> console hanging. Traces showed that a lower priority interrupt was
> preempting a higher priority interrupt, which would result in the higher
> priority interrupt never being handled.
>
> The new interrupt's priority was being compared against the CPPR
> (Current Processor Priority Register) instead of the PIPR (Post
> Interrupt Priority Register), as required by the XIVE spec.
> This allowed for a window between raising an interrupt and ACK'ing
> the interrupt where a lower priority interrupt could slip in.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Fixes: 26c55b99418 ("ppc/xive2: Process group backlog when updating the CPPR")
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 453fe37f18..2b4d0f51be 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1283,7 +1283,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> * priority to know if the thread can take the interrupt now or if
> * it is precluded.
> */
> - if (priority < alt_regs[TM_CPPR]) {
> + if (priority < alt_regs[TM_PIPR]) {
> return false;
> }
> return true;
* Re: [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update
2025-05-12 3:10 ` [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
@ 2025-05-14 18:53 ` Mike Kowal
2025-05-16 0:15 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:53 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> According to the XIVE spec, updating the CPPR should also update the
> PIPR. The final value of the PIPR depends on other factors, but it
> should never be set to a value that is above the CPPR.
>
> Also added support for redistributing an active group interrupt when it
> is precluded as a result of changing the CPPR value.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 2b4d0f51be..1971c05fa1 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -995,7 +995,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
> }
> - regs[TM_PIPR] = pipr_min;
> +
> + /* PIPR should not be set to a value greater than CPPR */
> + regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
>
> rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> if (rc) {
* Re: [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
2025-05-12 3:10 ` [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
@ 2025-05-14 18:54 ` Mike Kowal
2025-05-15 15:43 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:54 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Group interrupts should not be taken from the backlog and presented
> if they are precluded by CPPR.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Fixes: 855434b3b8 ("ppc/xive2: Process group backlog when pushing an OS context")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 1971c05fa1..8ede95b671 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -845,7 +845,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
> first_group, &group_level);
> regs[TM_LSMFB] = group_prio;
> - if (regs[TM_LGS] && group_prio < backlog_prio) {
> + if (regs[TM_LGS] && group_prio < backlog_prio &&
> + group_prio < regs[TM_CPPR]) {
> +
> /* VP can take a group interrupt */
> xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
> group_prio, group_level);
* Re: [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority
2025-05-12 3:10 ` [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
@ 2025-05-14 18:57 ` Mike Kowal
2025-05-15 15:45 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:57 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> The group interrupt delivery flow selects the group backlog scan if
> LSMFB < IPB, but that scan may find an interrupt with a priority >=
> IPB. In that case, the VP-direct interrupt should be chosen. This
> extends to selecting the lowest prio between POOL and PHYS rings.
>
> Implement this just by re-starting the selection logic if the
> backlog irq was not found or priority did not match LSMFB (LSMFB
> is updated so next time around it would see the right value and
> not loop infinitely).
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 32 ++++++++++++++++++++++----------
> 1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 8ede95b671..de139dcfbf 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -939,7 +939,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
> + uint8_t old_cppr, backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> bool group_enabled;
> uint32_t nvp_blk, nvp_idx;
> @@ -961,10 +961,12 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * Recompute the PIPR based on local pending interrupts. It will
> * be adjusted below if needed in case of pending group interrupts.
> */
> +again:
> pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> group_enabled = !!regs[TM_LGS];
> - lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
> + lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
> ring_min = ring;
> + group_level = 0;
>
> /* PHYS updates also depend on POOL values */
> if (ring == TM_QW3_HV_PHYS) {
> @@ -998,9 +1000,6 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - /* PIPR should not be set to a value greater than CPPR */
> - regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> -
> rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> if (rc) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
> @@ -1019,7 +1018,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> if (group_enabled &&
> lsmfb_min < cppr &&
> - lsmfb_min < regs[TM_PIPR]) {
> + lsmfb_min < pipr_min) {
> /*
> * Thread has seen a group interrupt with a higher priority
> * than the new cppr or pending local interrupt. Check the
> @@ -1048,12 +1047,25 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> nvp_blk, nvp_idx,
> first_group, &group_level);
> tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
> - if (backlog_prio != 0xFF) {
> - xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> - backlog_prio, group_level);
> - regs[TM_PIPR] = backlog_prio;
> + if (backlog_prio != lsmfb_min) {
> + /*
> + * If the group backlog scan finds a less favored or no interrupt,
> + * then re-do the processing which may turn up a more favored
> + * interrupt from IPB or the other pool. Backlog should not
> + * find a priority < LSMFB.
> + */
> + g_assert(backlog_prio >= lsmfb_min);
> + goto again;
> }
> +
> + xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> + backlog_prio, group_level);
> + pipr_min = backlog_prio;
> }
> +
> + /* PIPR should not be set to a value greater than CPPR */
> + regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> +
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, group_level);
> }
* Re: [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt
2025-05-12 3:10 ` [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
@ 2025-05-14 18:58 ` Mike Kowal
2025-05-15 15:46 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 18:58 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> If CPPR is lowered to preclude the pending interrupt, NSR should be
> cleared and the qemu_irq should be lowered. This avoids some cases
> of spurious interrupts.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index cebe409a1a..6293ea4361 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -110,6 +110,9 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> regs[TM_IPB], alt_regs[TM_PIPR],
> alt_regs[TM_CPPR], alt_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> + } else {
> + alt_regs[TM_NSR] = 0;
> + qemu_irq_lower(xive_tctx_output(tctx, ring));
> }
> }
>
* Re: [PATCH 17/50] pnv/xive2: Support ESB Escalation
2025-05-12 3:10 ` [PATCH 17/50] pnv/xive2: Support ESB Escalation Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
@ 2025-05-14 19:00 ` Mike Kowal
2025-05-16 0:05 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:00 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin, Glenn Miles
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> Add support for XIVE ESB Interrupt Escalation.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Suggested-by: Michael Kowal <kowal@linux.ibm.com>
> [This change was taken from a patch provided by Michael Kowal.]
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> ---
> hw/intc/xive2.c | 62 ++++++++++++++++++++++++++++++-------
> include/hw/ppc/xive2.h | 1 +
> include/hw/ppc/xive2_regs.h | 13 +++++---
> 3 files changed, 59 insertions(+), 17 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index de139dcfbf..0993e792cc 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1552,18 +1552,39 @@ do_escalation:
> }
> }
>
> - /*
> - * The END trigger becomes an Escalation trigger
> - */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + if (xive2_end_is_escalate_end(&end)) {
> + /*
> + * Perform END Adaptive escalation processing
> + * The END trigger becomes an Escalation trigger
> + */
> + xive2_router_end_notify(xrtr,
> + xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + } /* end END adaptive escalation */
> +
> + else {
> + uint32_t lisn; /* Logical Interrupt Source Number */
> +
> + /*
> + * Perform ESB escalation processing
> + * E[N] == 1 --> N
> + * Req[Block] <- E[ESB_Block]
> + * Req[Index] <- E[ESB_Index]
> + * Req[Offset] <- 0x000
> + * Execute <ESB Store> Req command
> + */
> + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
> +
> + xive2_notify(xrtr, lisn, true /* pq_checked */);
> + }
> +
> + return;
> }
>
> -void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked)
> {
> - Xive2Router *xrtr = XIVE2_ROUTER(xn);
> uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
> uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
> Xive2Eas eas;
> @@ -1606,13 +1627,30 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> return;
> }
>
> + /* TODO: add support for EAS resume */
> + if (xive2_eas_is_resume(&eas)) {
> + qemu_log_mask(LOG_UNIMP,
> + "XIVE: EAS resume processing unimplemented - LISN %x\n",
> + lisn);
> + return;
> + }
> +
> /*
> * The event trigger becomes an END trigger
> */
> xive2_router_end_notify(xrtr,
> - xive_get_field64(EAS2_END_BLOCK, eas.w),
> - xive_get_field64(EAS2_END_INDEX, eas.w),
> - xive_get_field64(EAS2_END_DATA, eas.w));
> + xive_get_field64(EAS2_END_BLOCK, eas.w),
> + xive_get_field64(EAS2_END_INDEX, eas.w),
> + xive_get_field64(EAS2_END_DATA, eas.w));
> + return;
> +}
> +
> +void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xn);
> +
> + xive2_notify(xrtr, lisn, pq_checked);
> + return;
> }
>
> static const Property xive2_router_properties[] = {
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 8cdf819174..2436ddb5e5 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -80,6 +80,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
> uint32_t xive2_router_get_config(Xive2Router *xrtr);
>
> void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
> +void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
>
> /*
> * XIVE2 Presenter (POWER10)
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 3c28de8a30..2c535ec0d0 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -39,15 +39,18 @@
>
> typedef struct Xive2Eas {
> uint64_t w;
> -#define EAS2_VALID PPC_BIT(0)
> -#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> -#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> -#define EAS2_MASKED PPC_BIT(32) /* Masked */
> -#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> +#define EAS2_VALID PPC_BIT(0)
> +#define EAS2_QOS PPC_BITMASK(1, 2) /* Quality of Service (unimp) */
> +#define EAS2_RESUME PPC_BIT(3) /* END Resume(unimp) */
> +#define EAS2_END_BLOCK PPC_BITMASK(4, 7) /* Destination EQ block# */
> +#define EAS2_END_INDEX PPC_BITMASK(8, 31) /* Destination EQ index */
> +#define EAS2_MASKED PPC_BIT(32) /* Masked */
> +#define EAS2_END_DATA PPC_BITMASK(33, 63) /* written to the EQ */
> } Xive2Eas;
>
> #define xive2_eas_is_valid(eas) (be64_to_cpu((eas)->w) & EAS2_VALID)
> #define xive2_eas_is_masked(eas) (be64_to_cpu((eas)->w) & EAS2_MASKED)
> +#define xive2_eas_is_resume(eas) (be64_to_cpu((eas)->w) & EAS2_RESUME)
>
> void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
>
* Re: [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts
2025-05-12 3:10 ` [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
@ 2025-05-14 19:01 ` Mike Kowal
2025-05-15 15:49 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:01 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> This improves the implementation of pulling pool and phys contexts in
> XIVE1, by following closer the OS pulling code.
>
> In particular, the old ring data is returned rather than the modified,
> and irq signals are reset on pull.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 58 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index c2da23f9ea..1a94642c62 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -241,25 +241,75 @@ static uint64_t xive_tm_ack_hv_reg(XivePresenter *xptr, XiveTCTX *tctx,
> return xive_tctx_accept(tctx, TM_QW3_HV_PHYS);
> }
>
> +static void xive_pool_cam_decode(uint32_t cam, uint8_t *nvt_blk,
> + uint32_t *nvt_idx, bool *vp)
> +{
> + if (nvt_blk) {
> + *nvt_blk = xive_nvt_blk(cam);
> + }
> + if (nvt_idx) {
> + *nvt_idx = xive_nvt_idx(cam);
> + }
> + if (vp) {
> + *vp = !!(cam & TM_QW2W2_VP);
> + }
> +}
> +
> +static uint32_t xive_tctx_get_pool_cam(XiveTCTX *tctx, uint8_t *nvt_blk,
> + uint32_t *nvt_idx, bool *vp)
> +{
> + uint32_t qw2w2 = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
> + uint32_t cam = be32_to_cpu(qw2w2);
> +
> + xive_pool_cam_decode(cam, nvt_blk, nvt_idx, vp);
> + return qw2w2;
> +}
> +
> +static void xive_tctx_set_pool_cam(XiveTCTX *tctx, uint32_t qw2w2)
> +{
> + memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
> +}
> +
> static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size)
> {
> - uint32_t qw2w2_prev = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
> uint32_t qw2w2;
> + uint32_t qw2w2_new;
> + uint8_t nvt_blk;
> + uint32_t nvt_idx;
> + bool vp;
>
> - qw2w2 = xive_set_field32(TM_QW2W2_VP, qw2w2_prev, 0);
> - memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
> + qw2w2 = xive_tctx_get_pool_cam(tctx, &nvt_blk, &nvt_idx, &vp);
> +
> + if (!vp) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid POOL NVT %x/%x !?\n",
> + nvt_blk, nvt_idx);
> + }
> +
> + /* Invalidate CAM line */
> + qw2w2_new = xive_set_field32(TM_QW2W2_VP, qw2w2, 0);
> + xive_tctx_set_pool_cam(tctx, qw2w2_new);
> +
> + xive_tctx_reset_signal(tctx, TM_QW1_OS);
> + xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
> return qw2w2;
> }
>
> static uint64_t xive_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size)
> {
> - uint8_t qw3b8_prev = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> - uint8_t qw3b8;
> + uint8_t qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> + uint8_t qw3b8_new;
> +
> + qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> + if (!(qw3b8 & TM_QW3B8_VT)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid PHYS thread!?\n");
> + }
> + qw3b8_new = qw3b8 & ~TM_QW3B8_VT;
> + tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8_new;
>
> - qw3b8 = qw3b8_prev & ~TM_QW3B8_VT;
> - tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8;
> + xive_tctx_reset_signal(tctx, TM_QW1_OS);
> + xive_tctx_reset_signal(tctx, TM_QW3_HV_PHYS);
> return qw3b8;
> }
>
> @@ -489,7 +539,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> qw1w2 = xive_tctx_get_os_cam(tctx, &nvt_blk, &nvt_idx, &vo);
>
> if (!vo) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVT %x/%x !?\n",
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid OS NVT %x/%x !?\n",
> nvt_blk, nvt_idx);
> }
>
* Re: [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions
2025-05-12 3:10 ` [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions Nicholas Piggin
2025-05-14 14:35 ` Caleb Schlossin
@ 2025-05-14 19:04 ` Mike Kowal
2025-05-15 15:48 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:04 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Rather than functions to return masks to test NSR bits, have functions
> to test those bits directly. This should be no functional change, it
> just makes the code more readable.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 51 +++++++++++++++++++++++++++++++++++--------
> include/hw/ppc/xive.h | 4 ++++
> 2 files changed, 46 insertions(+), 9 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index bb40a69c5b..c2da23f9ea 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -25,6 +25,45 @@
> /*
> * XIVE Thread Interrupt Management context
> */
> +bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
> +{
> + switch (ring) {
> + case TM_QW1_OS:
> + return !!(nsr & TM_QW1_NSR_EO);
> + case TM_QW2_HV_POOL:
> + case TM_QW3_HV_PHYS:
> + return !!(nsr & TM_QW3_NSR_HE);
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr)
> +{
> + if ((nsr & TM_NSR_GRP_LVL) > 0) {
> + g_assert(xive_nsr_indicates_exception(ring, nsr));
> + return true;
> + }
> + return false;
> +}
> +
> +uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr)
> +{
> + /* NSR determines if pool/phys ring is for phys or pool interrupt */
> + if ((ring == TM_QW3_HV_PHYS) || (ring == TM_QW2_HV_POOL)) {
> + uint8_t he = (nsr & TM_QW3_NSR_HE) >> 6;
> +
> + if (he == TM_QW3_NSR_HE_PHYS) {
> + return TM_QW3_HV_PHYS;
> + } else if (he == TM_QW3_NSR_HE_POOL) {
> + return TM_QW2_HV_POOL;
> + } else {
> + /* Don't support LSI mode */
> + g_assert_not_reached();
> + }
> + }
> + return ring;
> +}
>
> static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> {
> @@ -48,18 +87,12 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>
> qemu_irq_lower(xive_tctx_output(tctx, ring));
>
> - if (regs[TM_NSR] != 0) {
> + if (xive_nsr_indicates_exception(ring, nsr)) {
> uint8_t cppr = regs[TM_PIPR];
> uint8_t alt_ring;
> uint8_t *alt_regs;
>
> - /* POOL interrupt uses IPB in QW2, POOL ring */
> - if ((ring == TM_QW3_HV_PHYS) &&
> - ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
> - alt_ring = TM_QW2_HV_POOL;
> - } else {
> - alt_ring = ring;
> - }
> + alt_ring = xive_nsr_exception_ring(ring, nsr);
> alt_regs = &tctx->regs[alt_ring];
>
> regs[TM_CPPR] = cppr;
> @@ -68,7 +101,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
> + if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 538f438681..28f0f1b79a 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -365,6 +365,10 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
> return *((uint32_t *) &ring[TM_WORD2]);
> }
>
> +bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
> +bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
> +uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
> +
> /*
> * XIVE Router
> */
* Re: [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting
2025-05-12 3:10 ` [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting Nicholas Piggin
2025-05-14 14:34 ` Caleb Schlossin
@ 2025-05-14 19:07 ` Mike Kowal
2025-05-15 23:31 ` Nicholas Piggin
2025-05-15 15:47 ` Miles Glenn
2 siblings, 1 reply; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:07 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Have xive_tctx_accept clear NSR in one shot rather than masking out bits
> as they are tested. This makes it clear that NSR is reset to 0, and
> avoids leaving a partial NSR value in the register.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 6293ea4361..bb40a69c5b 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -68,13 +68,11 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
> - regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
> - } else {
> + if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
Any reason why you didn't just use the else? Regardless, I am fine
either way.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
> alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> - /* Drop the exception bit and any group/crowd */
> + /* Clear the exception from NSR */
> regs[TM_NSR] = 0;
>
> trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
* Re: [PATCH 18/50] pnv/xive2: Print value in invalid register write logging
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
@ 2025-05-14 19:09 ` Mike Kowal
2025-05-15 15:50 ` Miles Glenn
2025-05-16 0:15 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:09 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> This can make it easier to see what the target system is trying to
> do.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> [npiggin: split from larger patch]
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 24 ++++++++++++++++--------
> 1 file changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d7ca97ecbb..fcf5b2e75c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1197,7 +1197,8 @@ static void pnv_xive2_ic_cq_write(void *opaque, hwaddr offset,
> case CQ_FIRMASK_OR: /* FIR error reporting */
> break;
> default:
> - xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx, offset);
> + xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1495,7 +1496,8 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "VC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "VC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1703,7 +1705,8 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "PC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "PC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1790,7 +1793,8 @@ static void pnv_xive2_ic_tctxt_write(void *opaque, hwaddr offset,
> xive->tctxt_regs[reg] = val;
> break;
> default:
> - xive2_error(xive, "TCTXT: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "TCTXT: invalid write @0x%"HWADDR_PRIx
> + " data 0x%"PRIx64, offset, val);
> return;
> }
> }
> @@ -1861,7 +1865,8 @@ static void pnv_xive2_xscom_write(void *opaque, hwaddr offset,
> pnv_xive2_ic_tctxt_write(opaque, mmio_offset, val, size);
> break;
> default:
> - xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1929,7 +1934,8 @@ static void pnv_xive2_ic_notify_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1971,7 +1977,8 @@ static void pnv_xive2_ic_lsi_write(void *opaque, hwaddr offset,
> {
> PnvXive2 *xive = PNV_XIVE2(opaque);
>
> - xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> }
>
> static const MemoryRegionOps pnv_xive2_ic_lsi_ops = {
> @@ -2074,7 +2081,8 @@ static void pnv_xive2_ic_sync_write(void *opaque, hwaddr offset,
> inject_type = PNV_XIVE2_QUEUE_NXC_ST_RMT_CI;
> break;
> default:
> - xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
2025-05-12 3:10 ` [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
@ 2025-05-14 19:10 ` Mike Kowal
2025-05-15 15:51 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:10 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Firmware expects to read back the WATCH_FULL bit from the VC_ENDC_WATCH_SPEC
> register, so don't clear it on read.
>
> Don't bother clearing the reads-as-zero CONFLICT bit because it's masked
> at write already.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/pnv_xive2.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index fcf5b2e75c..3c26cd6b77 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1329,7 +1329,6 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH2_SPEC:
> case VC_ENDC_WATCH3_SPEC:
> watch_engine = (offset - VC_ENDC_WATCH0_SPEC) >> 6;
> - xive->vc_regs[reg] &= ~(VC_ENDC_WATCH_FULL | VC_ENDC_WATCH_CONFLICT);
> pnv_xive2_endc_cache_watch_release(xive, watch_engine);
> val = xive->vc_regs[reg];
> break;
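For readers following the review without the XIVE2 register spec at hand, the read/write side effects being fixed here can be modelled in a few lines of standalone C. This is only a sketch: the bit positions below are invented for illustration, and the real layout lives in hw/intc/pnv_xive2_regs.h.

```c
#include <stdint.h>
#include <assert.h>

#define WATCH_FULL      (1ull << 8)   /* hypothetical bit position */
#define WATCH_CONFLICT  (1ull << 7)   /* hypothetical bit position */

static uint64_t watch_spec;

/* CONFLICT reads as zero because the write path masks it out */
void watch_spec_write(uint64_t val)
{
    watch_spec = val & ~WATCH_CONFLICT;
}

/* After the fix above, a read has no side effect on WATCH_FULL */
uint64_t watch_spec_read(void)
{
    return watch_spec;
}
```

A read leaves FULL intact so firmware can poll it repeatedly, while CONFLICT can never be observed set because it is already masked at write time.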
* Re: [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
@ 2025-05-14 19:11 ` Mike Kowal
2025-05-15 15:52 ` Miles Glenn
2025-05-16 0:18 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:11 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> Writes to the Flush Control registers were logged as invalid even
> though they are allowed. Clearing the (unsupported) want_cache_disable
> feature is permitted, so don't log an error in that case.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks MAK
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 36 ++++++++++++++++++++++++++++++++----
> 1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 3c26cd6b77..c9374f0eee 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1411,7 +1411,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * ESB cache updates (not modeled)
> */
> - /* case VC_ESBC_FLUSH_CTRL: */
> + case VC_ESBC_FLUSH_CTRL:
> + if (val & VC_ESBC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ESBC_FLUSH_POLL:
> xive->vc_regs[VC_ESBC_FLUSH_CTRL >> 3] |= VC_ESBC_FLUSH_CTRL_POLL_VALID;
> /* ESB update */
> @@ -1427,7 +1434,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * EAS cache updates (not modeled)
> */
> - /* case VC_EASC_FLUSH_CTRL: */
> + case VC_EASC_FLUSH_CTRL:
> + if (val & VC_EASC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_EASC_FLUSH_POLL:
> xive->vc_regs[VC_EASC_FLUSH_CTRL >> 3] |= VC_EASC_FLUSH_CTRL_POLL_VALID;
> /* EAS update */
> @@ -1466,7 +1480,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
>
> - /* case VC_ENDC_FLUSH_CTRL: */
> + case VC_ENDC_FLUSH_CTRL:
> + if (val & VC_ENDC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ENDC_FLUSH_POLL:
> xive->vc_regs[VC_ENDC_FLUSH_CTRL >> 3] |= VC_ENDC_FLUSH_CTRL_POLL_VALID;
> break;
> @@ -1687,7 +1708,14 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> pnv_xive2_nxc_update(xive, watch_engine);
> break;
>
> - /* case PC_NXC_FLUSH_CTRL: */
> + case PC_NXC_FLUSH_CTRL:
> + if (val & PC_NXC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "PC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case PC_NXC_FLUSH_POLL:
> xive->pc_regs[PC_NXC_FLUSH_CTRL >> 3] |= PC_NXC_FLUSH_CTRL_POLL_VALID;
> break;
* Re: [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags
2025-05-12 3:10 ` [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags Nicholas Piggin
@ 2025-05-14 19:41 ` Mike Kowal
2025-05-16 0:18 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:41 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Adds support for extracting additional configuration flags from
> the XIVE configuration register that are needed for redistribution
> of group interrupts.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 16 ++++++++++++----
> hw/intc/pnv_xive2_regs.h | 1 +
> include/hw/ppc/xive2.h | 8 +++++---
> 3 files changed, 18 insertions(+), 7 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index c9374f0eee..96b8851b7e 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -605,20 +605,28 @@ static uint32_t pnv_xive2_get_config(Xive2Router *xrtr)
> {
> PnvXive2 *xive = PNV_XIVE2(xrtr);
> uint32_t cfg = 0;
> + uint64_t reg = xive->cq_regs[CQ_XIVE_CFG >> 3];
>
> - if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS) {
> + if (reg & CQ_XIVE_CFG_GEN1_TIMA_OS) {
> cfg |= XIVE2_GEN1_TIMA_OS;
> }
>
> - if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
> + if (reg & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
> cfg |= XIVE2_VP_SAVE_RESTORE;
> }
>
> - if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE,
> - xive->cq_regs[CQ_XIVE_CFG >> 3]) == CQ_XIVE_CFG_THREADID_8BITS) {
> + if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE, reg) ==
> + CQ_XIVE_CFG_THREADID_8BITS) {
> cfg |= XIVE2_THREADID_8BITS;
> }
>
> + if (reg & CQ_XIVE_CFG_EN_VP_GRP_PRIORITY) {
> + cfg |= XIVE2_EN_VP_GRP_PRIORITY;
> + }
> +
> + cfg = SETFIELD(XIVE2_VP_INT_PRIO, cfg,
> + GETFIELD(CQ_XIVE_CFG_VP_INT_PRIO, reg));
> +
> return cfg;
> }
>
> diff --git a/hw/intc/pnv_xive2_regs.h b/hw/intc/pnv_xive2_regs.h
> index e8b87b3d2c..d53300f709 100644
> --- a/hw/intc/pnv_xive2_regs.h
> +++ b/hw/intc/pnv_xive2_regs.h
> @@ -66,6 +66,7 @@
> #define CQ_XIVE_CFG_GEN1_TIMA_HYP_BLK0 PPC_BIT(26) /* 0 if bit[25]=0 */
> #define CQ_XIVE_CFG_GEN1_TIMA_CROWD_DIS PPC_BIT(27) /* 0 if bit[25]=0 */
> #define CQ_XIVE_CFG_GEN1_END_ESX PPC_BIT(28)
> +#define CQ_XIVE_CFG_EN_VP_GRP_PRIORITY PPC_BIT(32) /* 0 if bit[25]=1 */
> #define CQ_XIVE_CFG_EN_VP_SAVE_RESTORE PPC_BIT(38) /* 0 if bit[25]=1 */
> #define CQ_XIVE_CFG_EN_VP_SAVE_REST_STRICT PPC_BIT(39) /* 0 if bit[25]=1 */
>
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 2436ddb5e5..760b94a962 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -29,9 +29,11 @@ OBJECT_DECLARE_TYPE(Xive2Router, Xive2RouterClass, XIVE2_ROUTER);
> * Configuration flags
> */
>
> -#define XIVE2_GEN1_TIMA_OS 0x00000001
> -#define XIVE2_VP_SAVE_RESTORE 0x00000002
> -#define XIVE2_THREADID_8BITS 0x00000004
> +#define XIVE2_GEN1_TIMA_OS 0x00000001
> +#define XIVE2_VP_SAVE_RESTORE 0x00000002
> +#define XIVE2_THREADID_8BITS 0x00000004
> +#define XIVE2_EN_VP_GRP_PRIORITY 0x00000008
> +#define XIVE2_VP_INT_PRIO 0x00000030
>
> typedef struct Xive2RouterClass {
> SysBusDeviceClass parent;
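The GETFIELD/SETFIELD accessors used above follow IBM's big-endian bit numbering, where bit 0 is the most significant bit; that is why CQ_XIVE_CFG_EN_VP_GRP_PRIORITY is PPC_BIT(32) and not (1 << 32). A hedged, standalone re-implementation of the helpers (QEMU's real macros live elsewhere in the tree and may differ in detail):

```c
#include <stdint.h>
#include <assert.h>

/* IBM numbering: bit 0 is the MSB of a 64-bit word */
#define PPC_BIT(bit)        (0x8000000000000000ull >> (bit))
#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))

/* lowest set bit of the mask, i.e. the field's scaling unit */
static inline uint64_t mask_unit(uint64_t mask)
{
    return mask & -mask;
}

static inline uint64_t getfield(uint64_t mask, uint64_t word)
{
    return (word & mask) / mask_unit(mask);
}

static inline uint64_t setfield(uint64_t mask, uint64_t word, uint64_t val)
{
    return (word & ~mask) | ((val * mask_unit(mask)) & mask);
}
```

With XIVE2_VP_INT_PRIO = 0x30, setfield(0x30, cfg, prio) places prio in bits 4-5 of cfg, which mirrors the SETFIELD call added to pnv_xive2_get_config().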
* Re: [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts
2025-05-12 3:10 ` [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts Nicholas Piggin
@ 2025-05-14 19:42 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:42 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> When an XIVE context is pulled while it has an active, unacknowledged
> group interrupt, XIVE will check to see if a context on another thread
> can handle the interrupt and, if so, notify that context. If there
> are no contexts that can handle the interrupt, then the interrupt is
> added to a backlog and XIVE will attempt to escalate the interrupt,
> if configured to do so, allowing the higher privileged handler to
> activate a context that can handle the original interrupt.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 84 +++++++++++++++++++++++++++++++++++--
> include/hw/ppc/xive2_regs.h | 3 ++
> 2 files changed, 83 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 0993e792cc..34fc561c9c 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -19,6 +19,10 @@
> #include "hw/ppc/xive2_regs.h"
> #include "trace.h"
>
> +static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> + uint32_t end_idx, uint32_t end_data,
> + bool redistribute);
> +
> uint32_t xive2_router_get_config(Xive2Router *xrtr)
> {
> Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
> @@ -597,6 +601,68 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
> }
>
> +static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> + uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
> +{
> + uint8_t nsr = tctx->regs[ring + TM_NSR];
> + uint8_t crowd = NVx_CROWD_LVL(nsr);
> + uint8_t group = NVx_GROUP_LVL(nsr);
> + uint8_t nvgc_blk;
> + uint8_t nvgc_idx;
> + uint8_t end_blk;
> + uint32_t end_idx;
> + uint8_t pipr = tctx->regs[ring + TM_PIPR];
> + Xive2Nvgc nvgc;
> + uint8_t prio_limit;
> + uint32_t cfg;
> +
> + /* convert crowd/group to blk/idx */
> + if (group > 0) {
> + nvgc_idx = (nvp_idx & (0xffffffff << group)) |
> + ((1 << (group - 1)) - 1);
> + } else {
> + nvgc_idx = nvp_idx;
> + }
> +
> + if (crowd > 0) {
> + crowd = (crowd == 3) ? 4 : crowd;
> + nvgc_blk = (nvp_blk & (0xffffffff << crowd)) |
> + ((1 << (crowd - 1)) - 1);
> + } else {
> + nvgc_blk = nvp_blk;
> + }
> +
> + /* Use blk/idx to retrieve the NVGC */
> + if (xive2_router_get_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, &nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
> + crowd ? "NVC" : "NVG", nvgc_blk, nvgc_idx);
> + return;
> + }
> +
> + /* retrieve the END blk/idx from the NVGC */
> + end_blk = xive_get_field32(NVGC2_W1_END_BLK, nvgc.w1);
> + end_idx = xive_get_field32(NVGC2_W1_END_IDX, nvgc.w1);
> +
> + /* determine number of priorities being used */
> + cfg = xive2_router_get_config(xrtr);
> + if (cfg & XIVE2_EN_VP_GRP_PRIORITY) {
> + prio_limit = 1 << GETFIELD(NVGC2_W1_PSIZE, nvgc.w1);
> + } else {
> + prio_limit = 1 << GETFIELD(XIVE2_VP_INT_PRIO, cfg);
> + }
> +
> + /* add priority offset to end index */
> + end_idx += pipr % prio_limit;
> +
> + /* trigger the group END */
> + xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
> +
> + /* clear interrupt indication for the context */
> + tctx->regs[ring + TM_NSR] = 0;
> + tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
> + xive_tctx_reset_signal(tctx, ring);
> +}
> +
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -608,6 +674,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t cur_ring;
> bool valid;
> bool do_save;
> + uint8_t nsr;
>
> xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
>
> @@ -624,6 +691,12 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
> }
>
> + /* Active group/crowd interrupts need to be redistributed */
> + nsr = tctx->regs[ring + TM_NSR];
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
> + }
> +
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
> }
> @@ -1352,7 +1425,8 @@ static bool xive2_router_end_es_notify(Xive2Router *xrtr, uint8_t end_blk,
> * message has the same parameters than in the function below.
> */
> static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> - uint32_t end_idx, uint32_t end_data)
> + uint32_t end_idx, uint32_t end_data,
> + bool redistribute)
> {
> Xive2End end;
> uint8_t priority;
> @@ -1380,7 +1454,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> return;
> }
>
> - if (xive2_end_is_enqueue(&end)) {
> + if (!redistribute && xive2_end_is_enqueue(&end)) {
> xive2_end_enqueue(&end, end_data);
> /* Enqueuing event data modifies the EQ toggle and index */
> xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
> @@ -1560,7 +1634,8 @@ do_escalation:
> xive2_router_end_notify(xrtr,
> xive_get_field32(END2_W4_END_BLOCK, end.w4),
> xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5),
> + false);
> } /* end END adaptive escalation */
>
> else {
> @@ -1641,7 +1716,8 @@ void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
> xive2_router_end_notify(xrtr,
> xive_get_field64(EAS2_END_BLOCK, eas.w),
> xive_get_field64(EAS2_END_INDEX, eas.w),
> - xive_get_field64(EAS2_END_DATA, eas.w));
> + xive_get_field64(EAS2_END_DATA, eas.w),
> + false);
> return;
> }
>
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 2c535ec0d0..e222038143 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -224,6 +224,9 @@ typedef struct Xive2Nvgc {
> #define NVGC2_W0_VALID PPC_BIT32(0)
> #define NVGC2_W0_PGONEXT PPC_BITMASK32(26, 31)
> uint32_t w1;
> +#define NVGC2_W1_PSIZE PPC_BITMASK32(0, 1)
> +#define NVGC2_W1_END_BLK PPC_BITMASK32(4, 7)
> +#define NVGC2_W1_END_IDX PPC_BITMASK32(8, 31)
> uint32_t w2;
> uint32_t w3;
> uint32_t w4;
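The crowd/group to NVGC block/index conversion in xive2_redistribute() is worth spelling out: the low `group` bits of the NVP index are replaced by the encoding of the group level itself (a run of ones of width group-1). A standalone sketch of that arithmetic, taking the quoted code as the reference:

```c
#include <stdint.h>
#include <assert.h>

/* group level -> NVGC index, mirroring the quoted arithmetic */
static uint32_t nvgc_index(uint32_t nvp_idx, uint8_t group)
{
    if (group == 0) {
        return nvp_idx;
    }
    /* keep the high bits, encode the group size in the low bits */
    return (nvp_idx & (0xffffffffu << group)) | ((1u << (group - 1)) - 1);
}

/* crowd level -> NVGC block, same pattern on the block number */
static uint8_t nvgc_block(uint8_t nvp_blk, uint8_t crowd)
{
    if (crowd == 0) {
        return nvp_blk;
    }
    /* crowd level 3 encodes a 16-block (2^4) crowd */
    crowd = (crowd == 3) ? 4 : crowd;
    return (uint8_t)((nvp_blk & (0xffu << crowd)) | ((1u << (crowd - 1)) - 1));
}
```

For example, NVP index 0x36 at group level 2 maps to NVGC index 0x35: the two low bits are replaced by the pattern 01.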
* Re: [PATCH 23/50] ppc/xive: Add more interrupt notification tracing
2025-05-12 3:10 ` [PATCH 23/50] ppc/xive: Add more interrupt notification tracing Nicholas Piggin
@ 2025-05-14 19:46 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:46 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Add more tracing around notification, redistribution, and escalation.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/trace-events | 6 ++++++
> hw/intc/xive.c | 3 +++
> hw/intc/xive2.c | 13 ++++++++-----
> 3 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index f77f9733c9..9eca0925b6 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -279,6 +279,8 @@ xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_
> xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> +xive_source_notify(uint32_t srcno) "Processing notification for queued IRQ 0x%x"
> +xive_source_blocked(uint32_t srcno) "No action needed for IRQ 0x%x currently"
> xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
> xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
> xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
> @@ -289,6 +291,10 @@ xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x
> # xive2.c
> xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
> xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
> +xive_redistribute(uint32_t index, uint8_t ring, uint8_t end_blk, uint32_t end_idx) "Redistribute from target=%d ring=0x%x NVP 0x%x/0x%x"
> +xive_end_enqueue(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "Queue event for END 0x%x/0x%x data=0x%x"
> +xive_escalate_end(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t esc_data) "Escalate from END 0x%x/0x%x to END 0x%x/0x%x data=0x%x"
> +xive_escalate_esb(uint8_t end_blk, uint32_t end_idx, uint32_t lisn) "Escalate from END 0x%x/0x%x to LISN=0x%x"
>
> # pnv_xive.c
> pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 1a94642c62..7461dbecb8 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1276,6 +1276,7 @@ static uint64_t xive_source_esb_read(void *opaque, hwaddr addr, unsigned size)
>
> /* Forward the source event notification for routing */
> if (ret) {
> + trace_xive_source_notify(srcno);
> xive_source_notify(xsrc, srcno);
> }
> break;
> @@ -1371,6 +1372,8 @@ out:
> /* Forward the source event notification for routing */
> if (notify) {
> xive_source_notify(xsrc, srcno);
> + } else {
> + trace_xive_source_blocked(srcno);
> }
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 34fc561c9c..968b698677 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -616,6 +616,7 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t prio_limit;
> uint32_t cfg;
>
> + trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
> /* convert crowd/group to blk/idx */
> if (group > 0) {
> nvgc_idx = (nvp_idx & (0xffffffff << group)) |
> @@ -1455,6 +1456,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> }
>
> if (!redistribute && xive2_end_is_enqueue(&end)) {
> + trace_xive_end_enqueue(end_blk, end_idx, end_data);
> xive2_end_enqueue(&end, end_data);
> /* Enqueuing event data modifies the EQ toggle and index */
> xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
> @@ -1631,11 +1633,11 @@ do_escalation:
> * Perform END Adaptive escalation processing
> * The END trigger becomes an Escalation trigger
> */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5),
> - false);
> + uint8_t esc_blk = xive_get_field32(END2_W4_END_BLOCK, end.w4);
> + uint32_t esc_idx = xive_get_field32(END2_W4_ESC_END_INDEX, end.w4);
> + uint32_t esc_data = xive_get_field32(END2_W5_ESC_END_DATA, end.w5);
> + trace_xive_escalate_end(end_blk, end_idx, esc_blk, esc_idx, esc_data);
> + xive2_router_end_notify(xrtr, esc_blk, esc_idx, esc_data, false);
> } /* end END adaptive escalation */
>
> else {
> @@ -1652,6 +1654,7 @@ do_escalation:
> lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
>
> + trace_xive_escalate_esb(end_blk, end_idx, lisn);
> xive2_notify(xrtr, lisn, true /* pq_checked */);
> }
>
* Re: [PATCH 24/50] ppc/xive2: Improve pool regs variable name
2025-05-12 3:10 ` [PATCH 24/50] ppc/xive2: Improve pool regs variable name Nicholas Piggin
@ 2025-05-14 19:47 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:47 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Change pregs to pool_regs, for clarity.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> [npiggin: split from larger patch]
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 11 +++++------
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 968b698677..ec4b9320b4 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1044,13 +1044,12 @@ again:
>
> /* PHYS updates also depend on POOL values */
> if (ring == TM_QW3_HV_PHYS) {
> - uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
> + uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
>
> /* POOL values only matter if POOL ctx is valid */
> - if (pregs[TM_WORD2] & 0x80) {
> -
> - uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
> - uint8_t pool_lsmfb = pregs[TM_LSMFB];
> + if (pool_regs[TM_WORD2] & 0x80) {
> + uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
> + uint8_t pool_lsmfb = pool_regs[TM_LSMFB];
>
> /*
> * Determine highest priority interrupt and
> @@ -1064,7 +1063,7 @@ again:
> }
>
> /* Values needed for group priority calculation */
> - if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
> + if (pool_regs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
> group_enabled = true;
> lsmfb_min = pool_lsmfb;
> if (lsmfb_min < pipr_min) {
* Re: [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
2025-05-12 3:10 ` [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Nicholas Piggin
@ 2025-05-14 19:48 ` Mike Kowal
2025-05-16 0:20 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:48 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Booting AIX in a PowerVM partition requires the use of the "Acknowledge
> O/S Interrupt to even O/S reporting line" special operation provided by
> the IBM XIVE interrupt controller. This operation is invoked by writing
> a byte (data is irrelevant) to offset 0xC10 of the Thread Interrupt
> Management Area (TIMA). It can be used by software to notify the XIVE
> logic that the interrupt was received.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive.c | 8 ++++---
> hw/intc/xive2.c | 50 ++++++++++++++++++++++++++++++++++++++++++
> include/hw/ppc/xive.h | 1 +
> include/hw/ppc/xive2.h | 3 ++-
> 4 files changed, 58 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 7461dbecb8..9ec1193dfc 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -80,7 +80,7 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> }
> }
>
> -static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> +uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> {
> uint8_t *regs = &tctx->regs[ring];
> uint8_t nsr = regs[TM_NSR];
> @@ -340,14 +340,14 @@ static uint64_t xive_tm_vt_poll(XivePresenter *xptr, XiveTCTX *tctx,
>
> static const uint8_t xive_tm_hw_view[] = {
> 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-0 User */
> - 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-1 OS */
> + 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 3, /* QW-1 OS */
> 0, 0, 3, 3, 0, 3, 3, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-2 POOL */
> 3, 3, 3, 3, 0, 3, 0, 2, 3, 0, 0, 3, 3, 3, 3, 0, /* QW-3 PHYS */
> };
>
> static const uint8_t xive_tm_hv_view[] = {
> 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-0 User */
> - 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-1 OS */
> + 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 3, /* QW-1 OS */
> 0, 0, 3, 3, 0, 3, 3, 0, 0, 3, 3, 3, 0, 0, 0, 0, /* QW-2 POOL */
> 3, 3, 3, 3, 0, 3, 0, 2, 3, 0, 0, 3, 0, 0, 0, 0, /* QW-3 PHYS */
> };
> @@ -718,6 +718,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive_tm_pull_phys_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
> NULL },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
> + NULL },
> };
>
> static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index ec4b9320b4..68be138335 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1009,6 +1009,56 @@ static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> return 0;
> }
>
> +static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
> + uint8_t ring, uint8_t cl_ring)
> +{
> + uint64_t rd;
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint32_t nvp_blk, nvp_idx, xive2_cfg;
> + Xive2Nvp nvp;
> + uint64_t phys_addr;
> + uint8_t OGen = 0;
> +
> + xive2_tctx_get_nvp_indexes(tctx, cl_ring, &nvp_blk, &nvp_idx);
> +
> + if (xive2_router_get_nvp(xrtr, (uint8_t)nvp_blk, nvp_idx, &nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + if (!xive2_nvp_is_valid(&nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> +
> + rd = xive_tctx_accept(tctx, ring);
> +
> + if (ring == TM_QW1_OS) {
> + OGen = tctx->regs[ring + TM_OGEN];
> + }
> + xive2_cfg = xive2_router_get_config(xrtr);
> + phys_addr = xive2_nvp_reporting_addr(&nvp);
> + uint8_t report_data[REPORT_LINE_GEN1_SIZE];
> + memset(report_data, 0xff, sizeof(report_data));
> + if ((OGen == 1) || (xive2_cfg & XIVE2_GEN1_TIMA_OS)) {
> + report_data[8] = (rd >> 8) & 0xff;
> + report_data[9] = rd & 0xff;
> + } else {
> + report_data[0] = (rd >> 8) & 0xff;
> + report_data[1] = rd & 0xff;
> + }
> + cpu_physical_memory_write(phys_addr, report_data, REPORT_LINE_GEN1_SIZE);
> +}
> +
> +void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
> +}
> +
> static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 28f0f1b79a..46d05d74fb 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -561,6 +561,7 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
> +uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
>
> /*
> * KVM XIVE device helpers
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 760b94a962..ff02ce2549 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -142,5 +142,6 @@ void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> -
> +void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> #endif /* PPC_XIVE2_H */
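The even/odd reporting-line handling in xive2_tctx_accept_el() boils down to: the 16-bit acknowledge value lands at bytes 8..9 for the gen1 TIMA format or when the O/S generation bit (OGen) is 1, and at bytes 0..1 otherwise, with the rest of the line filled with 0xff. A standalone sketch of that byte placement; the 16-byte line size is an assumption taken from REPORT_LINE_GEN1_SIZE:

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define REPORT_LINE_SIZE 16   /* assumed, mirrors REPORT_LINE_GEN1_SIZE */

/*
 * Fill a report line as in the quoted xive2_tctx_accept_el(): the
 * 16-bit acknowledge value 'rd' goes to bytes 8..9 (gen1 TIMA or
 * OGen == 1) or to bytes 0..1; all other bytes read back as 0xff.
 */
static void fill_report_line(uint8_t *buf, uint16_t rd, int ogen, int gen1_tima)
{
    memset(buf, 0xff, REPORT_LINE_SIZE);
    if (ogen == 1 || gen1_tima) {
        buf[8] = (rd >> 8) & 0xff;
        buf[9] = rd & 0xff;
    } else {
        buf[0] = (rd >> 8) & 0xff;
        buf[1] = rd & 0xff;
    }
}
```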
* Re: [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update
2025-05-12 3:10 ` [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Nicholas Piggin
@ 2025-05-14 19:48 ` Mike Kowal
2025-05-16 0:20 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:48 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Add support for redistributing a presented group interrupt if it
> is precluded as a result of changing the CPPR value. Without this,
> group interrupts can be lost.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
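In other words, the CPPR-update path now has to pick one of three outcomes when priority is raised (CPPR value lowered): the presented interrupt remains deliverable, a precluded group interrupt must be redistributed to an END, or a precluded VP-directed interrupt simply stays pending in the IPB. A minimal standalone sketch of that decision, assuming "precluded" means the new CPPR is numerically less than or equal to PIPR:

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

enum cppr_outcome {
    PRESENT_UNCHANGED,    /* presented interrupt still deliverable */
    REDISTRIBUTE_GROUP,   /* group interrupt precluded: push back to an END */
    PEND_IN_IPB           /* VP-directed interrupt waits in the IPB */
};

/* Decision mirroring the CPPR-update hunk in this patch (sketch only) */
static enum cppr_outcome cppr_update(uint8_t old_cppr, uint8_t new_cppr,
                                     uint8_t pipr, bool group_exception)
{
    if (new_cppr < old_cppr && new_cppr <= pipr) {
        /* the currently presented interrupt is now precluded */
        return group_exception ? REDISTRIBUTE_GROUP : PEND_IN_IPB;
    }
    return PRESENT_UNCHANGED;
}
```

Without the redistribute branch, the group interrupt would be dropped on the floor when the context raises its priority, which is the lost-interrupt symptom the commit message describes.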
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 82 ++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 60 insertions(+), 22 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 68be138335..92dbbad8d4 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -601,20 +601,37 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
> }
>
> -static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> - uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
> +static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> {
> - uint8_t nsr = tctx->regs[ring + TM_NSR];
> + uint8_t *regs = &tctx->regs[ring];
> + uint8_t nsr = regs[TM_NSR];
> + uint8_t pipr = regs[TM_PIPR];
> uint8_t crowd = NVx_CROWD_LVL(nsr);
> uint8_t group = NVx_GROUP_LVL(nsr);
> - uint8_t nvgc_blk;
> - uint8_t nvgc_idx;
> - uint8_t end_blk;
> - uint32_t end_idx;
> - uint8_t pipr = tctx->regs[ring + TM_PIPR];
> + uint8_t nvgc_blk, end_blk, nvp_blk;
> + uint32_t nvgc_idx, end_idx, nvp_idx;
> Xive2Nvgc nvgc;
> uint8_t prio_limit;
> uint32_t cfg;
> + uint8_t alt_ring;
> + uint32_t target_ringw2;
> + uint32_t cam;
> + bool valid;
> + bool hw;
> +
> + /* redistribution is only for group/crowd interrupts */
> + if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> + return;
> + }
> +
> + alt_ring = xive_nsr_exception_ring(ring, nsr);
> + target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
> + cam = be32_to_cpu(target_ringw2);
> +
> + /* extract nvp block and index from targeted ring's cam */
> + xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
> +
> + trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
>
> - trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
> /* convert crowd/group to blk/idx */
> @@ -659,8 +676,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
>
> /* clear interrupt indication for the context */
> - tctx->regs[ring + TM_NSR] = 0;
> - tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
> + regs[TM_NSR] = 0;
> + regs[TM_PIPR] = regs[TM_CPPR];
> xive_tctx_reset_signal(tctx, ring);
> }
>
> @@ -695,7 +712,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> /* Active group/crowd interrupts need to be redistributed */
> nsr = tctx->regs[ring + TM_NSR];
> if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
> + xive2_redistribute(xrtr, tctx, ring);
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> @@ -1059,6 +1076,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
> }
>
> +/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> @@ -1069,10 +1087,11 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> uint32_t nvp_blk, nvp_idx;
> Xive2Nvp nvp;
> int rc;
> + uint8_t nsr = regs[TM_NSR];
>
> trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> regs[TM_IPB], regs[TM_PIPR],
> - cppr, regs[TM_NSR]);
> + cppr, nsr);
>
> if (cppr > XIVE_PRIORITY_MAX) {
> cppr = 0xff;
> @@ -1081,6 +1100,35 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> old_cppr = regs[TM_CPPR];
> regs[TM_CPPR] = cppr;
>
> + /* Handle increased CPPR priority (lower value) */
> + if (cppr < old_cppr) {
> + if (cppr <= regs[TM_PIPR]) {
> + /* CPPR lowered below PIPR, must un-present interrupt */
> + if (xive_nsr_indicates_exception(ring, nsr)) {
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + xive2_redistribute(xrtr, tctx, ring);
> + return;
> + }
> + }
> +
> + /* interrupt is VP directed, pending in IPB */
> + regs[TM_PIPR] = cppr;
> + xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
> + return;
> + } else {
> + /* CPPR was lowered, but still above PIPR. No action needed. */
> + return;
> + }
> + }
> +
> + /* CPPR didn't change, nothing needs to be done */
> + if (cppr == old_cppr) {
> + return;
> + }
> +
> + /* CPPR priority decreased (higher value) */
> +
> /*
> * Recompute the PIPR based on local pending interrupts. It will
> * be adjusted below if needed in case of pending group interrupts.
> @@ -1129,16 +1177,6 @@ again:
> return;
> }
>
> - if (cppr < old_cppr) {
> - /*
> - * FIXME: check if there's a group interrupt being presented
> - * and if the new cppr prevents it. If so, then the group
> - * interrupt needs to be re-added to the backlog and
> - * re-triggered (see re-trigger END info in the NVGC
> - * structure)
> - */
> - }
> -
> if (group_enabled &&
> lsmfb_min < cppr &&
> lsmfb_min < pipr_min) {
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull
2025-05-12 3:10 ` [PATCH 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Nicholas Piggin
@ 2025-05-14 19:51 ` Mike Kowal
0 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:51 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> When disabling (pulling) an XIVE interrupt context, we need to
> redistribute any active group interrupts to other threads that can
> handle them, if possible. This support had already been added for the
> OS context but not yet for the pool or physical contexts.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive.c | 12 ++---
> hw/intc/xive2.c | 94 ++++++++++++++++++++++++++-----------
> include/hw/ppc/xive2.h | 4 ++
> include/hw/ppc/xive2_regs.h | 4 +-
> 4 files changed, 79 insertions(+), 35 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 9ec1193dfc..ad30476c17 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -693,7 +693,7 @@ static const XiveTmOp xive2_tm_operations[] = {
>
> /* MMIOs above 2K : special operations with side effects */
> { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
> - xive_tm_ack_os_reg },
> + xive_tm_ack_os_reg },
> { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
> NULL },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, NULL,
> @@ -705,17 +705,17 @@ static const XiveTmOp xive2_tm_operations[] = {
> { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
> xive_tm_ack_hv_reg },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, NULL,
> - xive_tm_pull_pool_ctx },
> + xive2_tm_pull_pool_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
> - xive_tm_pull_pool_ctx },
> + xive2_tm_pull_pool_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
> - xive_tm_pull_pool_ctx },
> + xive2_tm_pull_pool_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, xive2_tm_pull_os_ctx_ol,
> NULL },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, NULL,
> - xive_tm_pull_phys_ctx },
> + xive2_tm_pull_phys_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
> - xive_tm_pull_phys_ctx },
> + xive2_tm_pull_phys_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
> NULL },
> { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 92dbbad8d4..ac94193464 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -23,6 +23,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> uint32_t end_idx, uint32_t end_data,
> bool redistribute);
>
> +static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> + uint8_t *nvp_blk, uint32_t *nvp_idx);
> +
> uint32_t xive2_router_get_config(Xive2Router *xrtr)
> {
> Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
> @@ -604,8 +607,10 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> {
> uint8_t *regs = &tctx->regs[ring];
> - uint8_t nsr = regs[TM_NSR];
> - uint8_t pipr = regs[TM_PIPR];
> + uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
> + regs;
> + uint8_t nsr = alt_regs[TM_NSR];
> + uint8_t pipr = alt_regs[TM_PIPR];
> uint8_t crowd = NVx_CROWD_LVL(nsr);
> uint8_t group = NVx_GROUP_LVL(nsr);
> uint8_t nvgc_blk, end_blk, nvp_blk;
> @@ -614,10 +619,6 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> uint8_t prio_limit;
> uint32_t cfg;
> uint8_t alt_ring;
> - uint32_t target_ringw2;
> - uint32_t cam;
> - bool valid;
> - bool hw;
>
> /* redistribution is only for group/crowd interrupts */
> if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> @@ -625,11 +626,9 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> }
>
> alt_ring = xive_nsr_exception_ring(ring, nsr);
> - target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
> - cam = be32_to_cpu(target_ringw2);
>
> - /* extract nvp block and index from targeted ring's cam */
> - xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
> + /* Don't check return code since ring is expected to be invalidated */
> + xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
>
> trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
>
> @@ -676,11 +675,23 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
>
> /* clear interrupt indication for the context */
> - regs[TM_NSR] = 0;
> - regs[TM_PIPR] = regs[TM_CPPR];
> + alt_regs[TM_NSR] = 0;
> + alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
> xive_tctx_reset_signal(tctx, ring);
> }
>
> +static uint8_t xive2_hv_irq_ring(uint8_t nsr)
> +{
> + switch (nsr >> 6) {
> + case TM_QW3_NSR_HE_POOL:
> + return TM_QW2_HV_POOL;
> + case TM_QW3_NSR_HE_PHYS:
> + return TM_QW3_HV_PHYS;
> + default:
> + return -1;
> + }
> +}
> +
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -696,7 +707,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
>
> - if (!valid) {
> + if (xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx)) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVP %x/%x !?\n",
> nvp_blk, nvp_idx);
> }
> @@ -706,13 +717,25 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> cur_ring += XIVE_TM_RING_SIZE) {
> uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
> uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
> + bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
> + uint8_t alt_ring;
> memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
> - }
>
> - /* Active group/crowd interrupts need to be redistributed */
> - nsr = tctx->regs[ring + TM_NSR];
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - xive2_redistribute(xrtr, tctx, ring);
> + /* Skip the rest for USER or invalid contexts */
> + if ((cur_ring == TM_QW0_USER) || !is_valid) {
> + continue;
> + }
> +
> + /* Active group/crowd interrupts need to be redistributed */
> + alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
> + nsr = tctx->regs[alt_ring + TM_NSR];
> + if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> + /* For HV rings, only redistribute if cur_ring matches NSR */
> + if ((cur_ring == TM_QW1_OS) ||
> + (cur_ring == xive2_hv_irq_ring(nsr))) {
> + xive2_redistribute(xrtr, tctx, cur_ring);
> + }
> + }
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> @@ -736,6 +759,18 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW1_OS);
> }
>
> +uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, unsigned size)
> +{
> + return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW2_HV_POOL);
> +}
> +
> +uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, unsigned size)
> +{
> + return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW3_HV_PHYS);
> +}
> +
> #define REPORT_LINE_GEN1_SIZE 16
>
> static void xive2_tm_report_line_gen1(XiveTCTX *tctx, uint8_t *data,
> @@ -993,37 +1028,40 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
> }
>
> +/* returns -1 if ring is invalid, but still populates block and index */
> static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> - uint32_t *nvp_blk, uint32_t *nvp_idx)
> + uint8_t *nvp_blk, uint32_t *nvp_idx)
> {
> - uint32_t w2, cam;
> + uint32_t w2;
> + uint32_t cam = 0;
> + int rc = 0;
>
> w2 = xive_tctx_word2(&tctx->regs[ring]);
> switch (ring) {
> case TM_QW1_OS:
> if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
> - return -1;
> + rc = -1;
> }
> cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
> break;
> case TM_QW2_HV_POOL:
> if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
> - return -1;
> + rc = -1;
> }
> cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
> break;
> case TM_QW3_HV_PHYS:
> if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
> - return -1;
> + rc = -1;
> }
> cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
> break;
> default:
> - return -1;
> + rc = -1;
> }
> *nvp_blk = xive2_nvp_blk(cam);
> *nvp_idx = xive2_nvp_idx(cam);
> - return 0;
> + return rc;
> }
>
> static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
> @@ -1031,7 +1069,8 @@ static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
> {
> uint64_t rd;
> Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> - uint32_t nvp_blk, nvp_idx, xive2_cfg;
> + uint32_t nvp_idx, xive2_cfg;
> + uint8_t nvp_blk;
> Xive2Nvp nvp;
> uint64_t phys_addr;
> uint8_t OGen = 0;
> @@ -1084,7 +1123,8 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> uint8_t old_cppr, backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> bool group_enabled;
> - uint32_t nvp_blk, nvp_idx;
> + uint8_t nvp_blk;
> + uint32_t nvp_idx;
> Xive2Nvp nvp;
> int rc;
> uint8_t nsr = regs[TM_NSR];
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index ff02ce2549..a91b99057c 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -140,6 +140,10 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, unsigned size);
> +uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index e222038143..f82054661b 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -209,9 +209,9 @@ static inline uint32_t xive2_nvp_idx(uint32_t cam_line)
> return cam_line & ((1 << XIVE2_NVP_SHIFT) - 1);
> }
>
> -static inline uint32_t xive2_nvp_blk(uint32_t cam_line)
> +static inline uint8_t xive2_nvp_blk(uint32_t cam_line)
> {
> - return (cam_line >> XIVE2_NVP_SHIFT) & 0xf;
> + return (uint8_t)((cam_line >> XIVE2_NVP_SHIFT) & 0xf);
> }
>
> void xive2_nvp_pic_print_info(Xive2Nvp *nvp, uint32_t nvp_idx, GString *buf);
* Re: [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present
2025-05-12 3:10 ` [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present Nicholas Piggin
@ 2025-05-14 19:54 ` Mike Kowal
2025-05-15 23:40 ` Nicholas Piggin
2025-05-15 15:53 ` Miles Glenn
1 sibling, 1 reply; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:54 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Have the match_nvt method only perform a TCTX match without presenting
> the interrupt; the caller presents it instead. There is no functional
> change, but this allows more complicated presentation logic after
> matching.
I always found the count meaningless since we do not support the XIVE
Histogram...
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/pnv_xive.c | 16 +++++++-------
> hw/intc/pnv_xive2.c | 16 +++++++-------
> hw/intc/spapr_xive.c | 18 +++++++--------
> hw/intc/xive.c | 51 +++++++++++++++----------------------------
> hw/intc/xive2.c | 31 +++++++++++++-------------
> hw/ppc/pnv.c | 48 ++++++++++++++--------------------------
> hw/ppc/spapr.c | 21 +++++++-----------
> include/hw/ppc/xive.h | 27 +++++++++++++----------
> 8 files changed, 97 insertions(+), 131 deletions(-)
>
> diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
> index ccbe95a58e..cdde8d0814 100644
> --- a/hw/intc/pnv_xive.c
> +++ b/hw/intc/pnv_xive.c
> @@ -470,14 +470,13 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
> return xive->regs[reg >> 3] & PPC_BIT(bit);
> }
>
> -static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> PnvXive *xive = PNV_XIVE(xptr);
> PnvChip *chip = xive->chip;
> - int count = 0;
> int i, j;
>
> for (i = 0; i < chip->nr_cores; i++) {
> @@ -510,17 +509,18 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
> "thread context NVT %x/%x\n",
> nvt_blk, nvt_idx);
> - return -1;
> + match->count++;
> + continue;
> }
>
> match->ring = ring;
> match->tctx = tctx;
> - count++;
> + match->count++;
> }
> }
> }
>
> - return count;
> + return !!match->count;
> }
>
> static uint32_t pnv_xive_presenter_get_config(XivePresenter *xptr)
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 96b8851b7e..59b95e5219 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -640,14 +640,13 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
> return xive->tctxt_regs[reg >> 3] & PPC_BIT(bit);
> }
>
> -static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> PnvXive2 *xive = PNV_XIVE2(xptr);
> PnvChip *chip = xive->chip;
> - int count = 0;
> int i, j;
> bool gen1_tima_os =
> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> @@ -692,7 +691,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> "thread context NVT %x/%x\n",
> nvt_blk, nvt_idx);
> /* Should set a FIR if we ever model it */
> - return -1;
> + match->count++;
> + continue;
> }
> /*
> * For a group notification, we need to know if the
> @@ -717,13 +717,13 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> }
> }
> }
> - count++;
> + match->count++;
> }
> }
> }
> }
>
> - return count;
> + return !!match->count;
> }
>
> static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
> diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
> index ce734b03ab..a7475d2f21 100644
> --- a/hw/intc/spapr_xive.c
> +++ b/hw/intc/spapr_xive.c
> @@ -428,14 +428,13 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
> g_assert_not_reached();
> }
>
> -static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore,
> - uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore,
> + uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> CPUState *cs;
> - int count = 0;
>
> CPU_FOREACH(cs) {
> PowerPCCPU *cpu = POWERPC_CPU(cs);
> @@ -463,16 +462,17 @@ static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> if (match->tctx) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a thread "
> "context NVT %x/%x\n", nvt_blk, nvt_idx);
> - return -1;
> + match->count++;
> + continue;
> }
>
> match->ring = ring;
> match->tctx = tctx;
> - count++;
> + match->count++;
> }
> }
>
> - return count;
> + return !!match->count;
> }
>
> static uint32_t spapr_xive_presenter_get_config(XivePresenter *xptr)
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index ad30476c17..27b5a21371 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1762,8 +1762,8 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
> return 1U << (first_zero + 1);
> }
>
> -static uint8_t xive_get_group_level(bool crowd, bool ignore,
> - uint32_t nvp_blk, uint32_t nvp_index)
> +uint8_t xive_get_group_level(bool crowd, bool ignore,
> + uint32_t nvp_blk, uint32_t nvp_index)
> {
> int first_zero;
> uint8_t level;
> @@ -1881,15 +1881,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> * This is our simple Xive Presenter Engine model. It is merged in the
> * Router as it does not require an extra object.
> */
> -bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> +bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, bool *precluded)
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
> - XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
> - uint8_t group_level;
> - int count;
> +
> + memset(match, 0, sizeof(*match));
>
> /*
> * Ask the machine to scan the interrupt controllers for a match.
> @@ -1914,22 +1913,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> * a new command to the presenters (the equivalent of the "assign"
> * power bus command in the documented full notify sequence.
> */
> - count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> - priority, logic_serv, &match);
> - if (count < 0) {
> - return false;
> - }
> -
> - /* handle CPU exception delivery */
> - if (count) {
> - group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
> - trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
> - xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> - } else {
> - *precluded = match.precluded;
> - }
> -
> - return !!count;
> + return xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> + priority, logic_serv, match);
> }
>
> /*
> @@ -1966,7 +1951,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> uint8_t nvt_blk;
> uint32_t nvt_idx;
> XiveNVT nvt;
> - bool found, precluded;
> + XiveTCTXMatch match;
>
> uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
> uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
> @@ -2046,16 +2031,16 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> return;
> }
>
> - found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
> - false /* crowd */,
> - xive_get_field32(END_W7_F0_IGNORE, end.w7),
> - priority,
> - xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> - &precluded);
> - /* we don't support VP-group notification on P9, so precluded is not used */
> /* TODO: Auto EOI. */
> -
> - if (found) {
> + /* we don't support VP-group notification on P9, so precluded is not used */
> + if (xive_presenter_match(xrtr->xfb, format, nvt_blk, nvt_idx,
> + false /* crowd */,
> + xive_get_field32(END_W7_F0_IGNORE, end.w7),
> + priority,
> + xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> + &match)) {
> + trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
> + xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
> return;
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index ac94193464..6e136ad2e2 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1559,7 +1559,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> Xive2End end;
> uint8_t priority;
> uint8_t format;
> - bool found, precluded;
> + XiveTCTXMatch match;
> + bool crowd, cam_ignore;
> uint8_t nvx_blk;
> uint32_t nvx_idx;
>
> @@ -1629,16 +1630,19 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> */
> nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
> nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
> -
> - found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx,
> - xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
> - priority,
> - xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> - &precluded);
> + crowd = xive2_end_is_crowd(&end);
> + cam_ignore = xive2_end_is_ignore(&end);
>
> /* TODO: Auto EOI. */
> -
> - if (found) {
> + if (xive_presenter_match(xrtr->xfb, format, nvx_blk, nvx_idx,
> + crowd, cam_ignore, priority,
> + xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> + &match)) {
> + uint8_t group_level;
> +
> + group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> + trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
> + xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> return;
> }
>
> @@ -1656,7 +1660,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> return;
> }
>
> - if (!xive2_end_is_ignore(&end)) {
> + if (!cam_ignore) {
> uint8_t ipb;
> Xive2Nvp nvp;
>
> @@ -1685,9 +1689,6 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> } else {
> Xive2Nvgc nvgc;
> uint32_t backlog;
> - bool crowd;
> -
> - crowd = xive2_end_is_crowd(&end);
>
> /*
> * For groups and crowds, the per-priority backlog
> @@ -1719,9 +1720,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> if (backlog == 1) {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
> xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx,
> - xive2_end_is_crowd(&end),
> - xive2_end_is_ignore(&end),
> - priority);
> + crowd, cam_ignore, priority);
>
> if (!xive2_end_is_precluded_escalation(&end)) {
> /*
> diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
> index deb29a6389..0c17846b38 100644
> --- a/hw/ppc/pnv.c
> +++ b/hw/ppc/pnv.c
> @@ -2619,62 +2619,46 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
> }
> }
>
> -static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv,
> - XiveTCTXMatch *match)
> +static bool pnv_match_nvt(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv,
> + XiveTCTXMatch *match)
> {
> PnvMachineState *pnv = PNV_MACHINE(xfb);
> - int total_count = 0;
> int i;
>
> for (i = 0; i < pnv->num_chips; i++) {
> Pnv9Chip *chip9 = PNV9_CHIP(pnv->chips[i]);
> XivePresenter *xptr = XIVE_PRESENTER(&chip9->xive);
> XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> - int count;
>
> - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> - cam_ignore, priority, logic_serv, match);
> -
> - if (count < 0) {
> - return count;
> - }
> -
> - total_count += count;
> + xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> + cam_ignore, priority, logic_serv, match);
> }
>
> - return total_count;
> + return !!match->count;
> }
>
> -static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv,
> - XiveTCTXMatch *match)
> +static bool pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv,
> + XiveTCTXMatch *match)
> {
> PnvMachineState *pnv = PNV_MACHINE(xfb);
> - int total_count = 0;
> int i;
>
> for (i = 0; i < pnv->num_chips; i++) {
> Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
> XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> - int count;
> -
> - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> - cam_ignore, priority, logic_serv, match);
> -
> - if (count < 0) {
> - return count;
> - }
>
> - total_count += count;
> + xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> + cam_ignore, priority, logic_serv, match);
> }
>
> - return total_count;
> + return !!match->count;
> }
>
> static int pnv10_xive_broadcast(XiveFabric *xfb,
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index b0a0f8c689..93574d2a63 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -4468,21 +4468,14 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
> /*
> * This is a XIVE only operation
> */
> -static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool spapr_match_nvt(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
> XivePresenter *xptr = XIVE_PRESENTER(spapr->active_intc);
> XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> - int count;
> -
> - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> - priority, logic_serv, match);
> - if (count < 0) {
> - return count;
> - }
>
> /*
> * When we implement the save and restore of the thread interrupt
> @@ -4493,12 +4486,14 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
> * Until this is done, the sPAPR machine should find at least one
> * matching context always.
> */
> - if (count == 0) {
> + if (!xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> + priority, logic_serv, match)) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVT %x/%x is not dispatched\n",
> nvt_blk, nvt_idx);
> + return false;
> }
>
> - return count;
> + return true;
> }
>
> int spapr_get_vcpu_id(PowerPCCPU *cpu)
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 46d05d74fb..8152a9df3d 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -425,6 +425,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
>
> typedef struct XiveTCTXMatch {
> XiveTCTX *tctx;
> + int count;
> uint8_t ring;
> bool precluded;
> } XiveTCTXMatch;
> @@ -440,10 +441,10 @@ DECLARE_CLASS_CHECKERS(XivePresenterClass, XIVE_PRESENTER,
>
> struct XivePresenterClass {
> InterfaceClass parent;
> - int (*match_nvt)(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match);
> + bool (*match_nvt)(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match);
> bool (*in_kernel)(const XivePresenter *xptr);
> uint32_t (*get_config)(XivePresenter *xptr);
> int (*broadcast)(XivePresenter *xptr,
> @@ -455,12 +456,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint32_t logic_serv);
> -bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, bool *precluded);
> +bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match);
>
> uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
> +uint8_t xive_get_group_level(bool crowd, bool ignore,
> + uint32_t nvp_blk, uint32_t nvp_index);
>
> /*
> * XIVE Fabric (Interface between Interrupt Controller and Machine)
> @@ -475,10 +478,10 @@ DECLARE_CLASS_CHECKERS(XiveFabricClass, XIVE_FABRIC,
>
> struct XiveFabricClass {
> InterfaceClass parent;
> - int (*match_nvt)(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match);
> + bool (*match_nvt)(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match);
> int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
> bool crowd, bool cam_ignore, uint8_t priority);
> };
* Re: [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
2025-05-12 3:10 ` [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Nicholas Piggin
@ 2025-05-14 19:55 ` Mike Kowal
2025-05-15 15:54 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 19:55 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> A group interrupt that gets preempted by a higher priority interrupt
> delivery must be redistributed otherwise it would get lost.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 6e136ad2e2..cae4092198 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1638,11 +1638,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> crowd, cam_ignore, priority,
> xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> &match)) {
> + XiveTCTX *tctx = match.tctx;
> + uint8_t ring = match.ring;
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t nsr = aregs[TM_NSR];
> uint8_t group_level;
>
> + if (priority < aregs[TM_PIPR] &&
> + xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> + xive2_redistribute(xrtr, tctx, alt_ring);
> + }
> +
> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> - trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
> - xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> + trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
> + xive_tctx_pipr_update(tctx, ring, priority, group_level);
> return;
> }
>
* Re: [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
2025-05-12 3:10 ` [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Nicholas Piggin
@ 2025-05-14 20:10 ` Mike Kowal
2025-05-15 15:21 ` Mike Kowal
2025-05-15 23:43 ` Nicholas Piggin
2025-05-15 15:55 ` Miles Glenn
1 sibling, 2 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 20:10 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> xive_tctx_pipr_update() is used for multiple things. In an effort
> to make things simpler and less overloaded, split out the function
> that is used to present a new interrupt to the tctx.
Why is this a separate commit from 30? The change here does not do
anything different.
Regardless, taking this patch set as a whole, it's good by me.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 +++++++-
> hw/intc/xive2.c | 2 +-
> include/hw/ppc/xive.h | 2 ++
> 3 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 27b5a21371..bf4c0634ca 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -225,6 +225,12 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level)
> +{
> + xive_tctx_pipr_update(tctx, ring, priority, group_level);
> +}
> +
> /*
> * XIVE Thread Interrupt Management Area (TIMA)
> */
> @@ -2040,7 +2046,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> &match)) {
> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
> - xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
> + xive_tctx_pipr_present(match.tctx, match.ring, priority, 0);
> return;
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index cae4092198..f91109b84a 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1652,7 +1652,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
>
> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
> - xive_tctx_pipr_update(tctx, ring, priority, group_level);
> + xive_tctx_pipr_present(tctx, ring, priority, group_level);
> return;
> }
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 8152a9df3d..0d6b11e818 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -562,6 +562,8 @@ void xive_tctx_reset(XiveTCTX *tctx);
> void xive_tctx_destroy(XiveTCTX *tctx);
> void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level);
> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
> uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
* Re: [PATCH 32/50] ppc/xive: Split xive recompute from IPB function
2025-05-12 3:10 ` [PATCH 32/50] ppc/xive: Split xive recompute from IPB function Nicholas Piggin
@ 2025-05-14 20:42 ` Mike Kowal
2025-05-15 23:46 ` Nicholas Piggin
2025-05-15 15:56 ` Miles Glenn
1 sibling, 1 reply; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 20:42 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Further split xive_tctx_pipr_update() by splitting out a new function
> that is used to re-compute the PIPR from IPB. This is generally only
> used with XIVE1, because group interrupts require more logic.
Previous upstreaming was focused only on XIVE2 so as not to impact users
of XIVE1.
But I assume this does not hurt anything.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 25 ++++++++++++++++++++++---
> 1 file changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 25f6c69c44..5ff1b8f024 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -225,6 +225,20 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> +static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> +{
> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /* Does not support a presented group interrupt */
> + g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
> +
> + aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + xive_tctx_notify(tctx, ring, 0);
> +}
> +
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> {
> @@ -517,7 +531,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size)
> {
> - xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
> + uint8_t ring = TM_QW1_OS;
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /* XXX: how should this work exactly? */
> + regs[TM_IPB] |= xive_priority_to_ipb(value & 0xff);
> + xive_tctx_pipr_recompute_from_ipb(tctx, ring);
> }
>
> static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
> @@ -601,14 +620,14 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> }
>
> /*
> - * Always call xive_tctx_pipr_update(). Even if there were no
> + * Always call xive_tctx_recompute_from_ipb(). Even if there were no
> * escalation triggered, there could be a pending interrupt which
> * was saved when the context was pulled and that we need to take
> * into account by recalculating the PIPR (which is not
> * saved/restored).
> * It will also raise the External interrupt signal if needed.
> */
> - xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
> }
>
> /*
* Re: [PATCH 33/50] ppc/xive: tctx signaling registers rework
2025-05-12 3:10 ` [PATCH 33/50] ppc/xive: tctx signaling registers rework Nicholas Piggin
@ 2025-05-14 20:49 ` Mike Kowal
2025-05-15 15:58 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-14 20:49 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> The tctx "signaling" registers (PIPR, CPPR, NSR) raise an interrupt on
> the target CPU thread. The POOL and PHYS rings both raise hypervisor
> interrupts, so they both share one set of signaling registers in the
> PHYS ring. The PHYS NSR register contains a field that indicates which
> ring has presented the interrupt being signaled to the CPU.
>
> This sharing results in all the "alt_regs" throughout the code. alt_regs
> is not very descriptive, and worse is that the name is used for
> conversions in both directions, i.e., to find the presenting ring from
> the signaling ring, and the signaling ring from the presenting ring.
>
> Instead of alt_regs, use the names sig_regs and sig_ring, and regs and
> ring for the presenting ring being worked on. Add a helper function to
> get the sig_regs, and add some asserts to ensure the POOL regs are
> never used to signal interrupts.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 112 ++++++++++++++++++++++--------------------
> hw/intc/xive2.c | 94 ++++++++++++++++-------------------
> include/hw/ppc/xive.h | 26 +++++++++-
> 3 files changed, 126 insertions(+), 106 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 5ff1b8f024..4e0c71d684 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -80,69 +80,77 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> }
> }
>
> -uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> +/*
> + * interrupt is accepted on the presentation ring, for PHYS ring the NSR
> + * directs it to the PHYS or POOL rings.
> + */
> +uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
> {
> - uint8_t *regs = &tctx->regs[ring];
> - uint8_t nsr = regs[TM_NSR];
> + uint8_t *sig_regs = &tctx->regs[sig_ring];
> + uint8_t nsr = sig_regs[TM_NSR];
>
> - qemu_irq_lower(xive_tctx_output(tctx, ring));
> + g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
>
> - if (xive_nsr_indicates_exception(ring, nsr)) {
> - uint8_t cppr = regs[TM_PIPR];
> - uint8_t alt_ring;
> - uint8_t *alt_regs;
> + if (xive_nsr_indicates_exception(sig_ring, nsr)) {
> + uint8_t cppr = sig_regs[TM_PIPR];
> + uint8_t ring;
> + uint8_t *regs;
>
> - alt_ring = xive_nsr_exception_ring(ring, nsr);
> - alt_regs = &tctx->regs[alt_ring];
> + ring = xive_nsr_exception_ring(sig_ring, nsr);
> + regs = &tctx->regs[ring];
>
> - regs[TM_CPPR] = cppr;
> + sig_regs[TM_CPPR] = cppr;
>
> /*
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> - alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> + if (!xive_nsr_indicates_group_exception(sig_ring, nsr)) {
> + regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> /* Clear the exception from NSR */
> - regs[TM_NSR] = 0;
> + sig_regs[TM_NSR] = 0;
>
> - trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
> - alt_regs[TM_IPB], regs[TM_PIPR],
> - regs[TM_CPPR], regs[TM_NSR]);
> + trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
> + regs[TM_IPB], sig_regs[TM_PIPR],
> + sig_regs[TM_CPPR], sig_regs[TM_NSR]);
> }
>
> - return ((uint64_t)nsr << 8) | regs[TM_CPPR];
> + return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
> }
>
> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *alt_regs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> - if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
> + if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
> switch (ring) {
> case TM_QW1_OS:
> - regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> + sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> break;
> case TM_QW2_HV_POOL:
> - alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
> + sig_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
> break;
> case TM_QW3_HV_PHYS:
> - regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
> + sig_regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
> break;
> default:
> g_assert_not_reached();
> }
> trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], alt_regs[TM_PIPR],
> - alt_regs[TM_CPPR], alt_regs[TM_NSR]);
> + regs[TM_IPB], sig_regs[TM_PIPR],
> + sig_regs[TM_CPPR], sig_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> } else {
> - alt_regs[TM_NSR] = 0;
> + sig_regs[TM_NSR] = 0;
> qemu_irq_lower(xive_tctx_output(tctx, ring));
> }
> }
> @@ -159,25 +167,32 @@ void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring)
>
> static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> - uint8_t *regs = &tctx->regs[ring];
> + uint8_t *sig_regs = &tctx->regs[ring];
> uint8_t pipr_min;
> uint8_t ring_min;
>
> + g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + /* XXX: should show pool IPB for PHYS ring */
> trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], regs[TM_PIPR],
> - cppr, regs[TM_NSR]);
> + sig_regs[TM_IPB], sig_regs[TM_PIPR],
> + cppr, sig_regs[TM_NSR]);
>
> if (cppr > XIVE_PRIORITY_MAX) {
> cppr = 0xff;
> }
>
> - tctx->regs[ring + TM_CPPR] = cppr;
> + sig_regs[TM_CPPR] = cppr;
>
> /*
> * Recompute the PIPR based on local pending interrupts. The PHYS
> * ring must take the minimum of both the PHYS and POOL PIPR values.
> */
> - pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> + pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
> ring_min = ring;
>
> /* PHYS updates also depend on POOL values */
> @@ -186,7 +201,6 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> /* POOL values only matter if POOL ctx is valid */
> if (pool_regs[TM_WORD2] & 0x80) {
> -
> uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
>
> /*
> @@ -200,7 +214,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - regs[TM_PIPR] = pipr_min;
> + sig_regs[TM_PIPR] = pipr_min;
>
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, 0);
> @@ -208,56 +222,50 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> - {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *alt_regs = &tctx->regs[alt_ring];
> +{
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> if (group_level == 0) {
> /* VP-specific */
> regs[TM_IPB] |= xive_priority_to_ipb(priority);
> - alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> } else {
> /* VP-group */
> - alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> + sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> }
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> /* Does not support a presented group interrupt */
> - g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
> + g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
>
> - aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> xive_tctx_notify(tctx, ring, 0);
> }
>
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
> uint8_t pipr = xive_priority_to_pipr(priority);
>
> if (group_level == 0) {
> regs[TM_IPB] |= xive_priority_to_ipb(priority);
> - if (pipr >= aregs[TM_PIPR]) {
> + if (pipr >= sig_regs[TM_PIPR]) {
> /* VP interrupts can come here with lower priority than PIPR */
> return;
> }
> }
> g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
> - g_assert(pipr < aregs[TM_PIPR]);
> - aregs[TM_PIPR] = pipr;
> + g_assert(pipr < sig_regs[TM_PIPR]);
> + sig_regs[TM_PIPR] = pipr;
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index f91109b84a..b9ee8c9e9f 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -606,11 +606,9 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
>
> static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> {
> - uint8_t *regs = &tctx->regs[ring];
> - uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
> - regs;
> - uint8_t nsr = alt_regs[TM_NSR];
> - uint8_t pipr = alt_regs[TM_PIPR];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> + uint8_t pipr = sig_regs[TM_PIPR];
> uint8_t crowd = NVx_CROWD_LVL(nsr);
> uint8_t group = NVx_GROUP_LVL(nsr);
> uint8_t nvgc_blk, end_blk, nvp_blk;
> @@ -618,19 +616,16 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> Xive2Nvgc nvgc;
> uint8_t prio_limit;
> uint32_t cfg;
> - uint8_t alt_ring;
>
> /* redistribution is only for group/crowd interrupts */
> if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> return;
> }
>
> - alt_ring = xive_nsr_exception_ring(ring, nsr);
> -
> /* Don't check return code since ring is expected to be invalidated */
> - xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
> + xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx);
>
> - trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
> + trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
>
> trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
> /* convert crowd/group to blk/idx */
> @@ -675,23 +670,11 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
>
> /* clear interrupt indication for the context */
> - alt_regs[TM_NSR] = 0;
> - alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
> + sig_regs[TM_NSR] = 0;
> + sig_regs[TM_PIPR] = sig_regs[TM_CPPR];
> xive_tctx_reset_signal(tctx, ring);
> }
>
> -static uint8_t xive2_hv_irq_ring(uint8_t nsr)
> -{
> - switch (nsr >> 6) {
> - case TM_QW3_NSR_HE_POOL:
> - return TM_QW2_HV_POOL;
> - case TM_QW3_NSR_HE_PHYS:
> - return TM_QW3_HV_PHYS;
> - default:
> - return -1;
> - }
> -}
> -
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -718,7 +701,8 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
> uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
> bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
> - uint8_t alt_ring;
> + uint8_t *sig_regs;
> +
> memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
>
> /* Skip the rest for USER or invalid contexts */
> @@ -727,12 +711,11 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
>
> /* Active group/crowd interrupts need to be redistributed */
> - alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
> - nsr = tctx->regs[alt_ring + TM_NSR];
> - if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> - /* For HV rings, only redistribute if cur_ring matches NSR */
> - if ((cur_ring == TM_QW1_OS) ||
> - (cur_ring == xive2_hv_irq_ring(nsr))) {
> + sig_regs = xive_tctx_signal_regs(tctx, ring);
> + nsr = sig_regs[TM_NSR];
> + if (xive_nsr_indicates_group_exception(cur_ring, nsr)) {
> + /* Ensure ring matches NSR (for HV NSR POOL vs PHYS rings) */
> + if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
> xive2_redistribute(xrtr, tctx, cur_ring);
> }
> }
> @@ -1118,7 +1101,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> /* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> - uint8_t *regs = &tctx->regs[ring];
> + uint8_t *sig_regs = &tctx->regs[ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> uint8_t old_cppr, backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> @@ -1127,33 +1110,41 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> uint32_t nvp_idx;
> Xive2Nvp nvp;
> int rc;
> - uint8_t nsr = regs[TM_NSR];
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
>
> + /* XXX: should show pool IPB for PHYS ring */
> trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], regs[TM_PIPR],
> + sig_regs[TM_IPB], sig_regs[TM_PIPR],
> cppr, nsr);
>
> if (cppr > XIVE_PRIORITY_MAX) {
> cppr = 0xff;
> }
>
> - old_cppr = regs[TM_CPPR];
> - regs[TM_CPPR] = cppr;
> + old_cppr = sig_regs[TM_CPPR];
> + sig_regs[TM_CPPR] = cppr;
>
> /* Handle increased CPPR priority (lower value) */
> if (cppr < old_cppr) {
> - if (cppr <= regs[TM_PIPR]) {
> + if (cppr <= sig_regs[TM_PIPR]) {
> /* CPPR lowered below PIPR, must un-present interrupt */
> if (xive_nsr_indicates_exception(ring, nsr)) {
> if (xive_nsr_indicates_group_exception(ring, nsr)) {
> /* redistribute precluded active grp interrupt */
> - xive2_redistribute(xrtr, tctx, ring);
> + xive2_redistribute(xrtr, tctx,
> + xive_nsr_exception_ring(ring, nsr));
> return;
> }
> }
>
> /* interrupt is VP directed, pending in IPB */
> - regs[TM_PIPR] = cppr;
> + sig_regs[TM_PIPR] = cppr;
> xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
> return;
> } else {
> @@ -1174,9 +1165,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * be adjusted below if needed in case of pending group interrupts.
> */
> again:
> - pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> - group_enabled = !!regs[TM_LGS];
> - lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
> + pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
> + group_enabled = !!sig_regs[TM_LGS];
> + lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
> ring_min = ring;
> group_level = 0;
>
> @@ -1265,7 +1256,7 @@ again:
> }
>
> /* PIPR should not be set to a value greater than CPPR */
> - regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> + sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
>
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, group_level);
> @@ -1490,9 +1481,7 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>
> bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *alt_regs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
>
> /*
> * The xive2_presenter_tctx_match() above tells if there's a match
> @@ -1500,7 +1489,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> * priority to know if the thread can take the interrupt now or if
> * it is precluded.
> */
> - if (priority < alt_regs[TM_PIPR]) {
> + if (priority < sig_regs[TM_PIPR]) {
> return false;
> }
> return true;
> @@ -1640,14 +1629,13 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> &match)) {
> XiveTCTX *tctx = match.tctx;
> uint8_t ring = match.ring;
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *aregs = &tctx->regs[alt_ring];
> - uint8_t nsr = aregs[TM_NSR];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> uint8_t group_level;
>
> - if (priority < aregs[TM_PIPR] &&
> - xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> - xive2_redistribute(xrtr, tctx, alt_ring);
> + if (priority < sig_regs[TM_PIPR] &&
> + xive_nsr_indicates_group_exception(ring, nsr)) {
> + xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> }
>
> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 0d6b11e818..a3c2f50ece 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -539,7 +539,7 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
> }
>
> /*
> - * XIVE Thread Interrupt Management Aera (TIMA)
> + * XIVE Thread Interrupt Management Area (TIMA)
> *
> * This region gives access to the registers of the thread interrupt
> * management context. It is four page wide, each page providing a
> @@ -551,6 +551,30 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
> #define XIVE_TM_OS_PAGE 0x2
> #define XIVE_TM_USER_PAGE 0x3
>
> +/*
> + * The TCTX (TIMA) has 4 rings (phys, pool, os, user), but only signals
> + * (raises an interrupt on) the CPU from 3 of them. Phys and pool both
> + * cause a hypervisor privileged interrupt so interrupts presented on
> + * those rings signal using the phys ring. This helper returns the signal
> + * regs from the given ring.
> + */
> +static inline uint8_t *xive_tctx_signal_regs(XiveTCTX *tctx, uint8_t ring)
> +{
> + /*
> + * This is a good point to add invariants to ensure nothing has tried to
> + * signal using the POOL ring.
> + */
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + if (ring == TM_QW2_HV_POOL) {
> + /* POOL and PHYS rings share the signal regs (PIPR, NSR, CPPR) */
> + ring = TM_QW3_HV_PHYS;
> + }
> + return &tctx->regs[ring];
> +}
> +
> void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size);
> uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
* Re: [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented
2025-05-12 3:10 ` [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Nicholas Piggin
@ 2025-05-15 15:16 ` Mike Kowal
2025-05-15 23:50 ` Nicholas Piggin
2025-05-15 16:04 ` Miles Glenn
1 sibling, 1 reply; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:16 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> The relationship between an interrupt signaled in the TIMA and the QEMU
> irq line to the processor to be 1:1, so they should be raised and
...needs to be...
> lowered together and "just in case" lowering should be avoided (it could
> mask
I think you missed the rest of the line...
MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 4e0c71d684..d5dbeab6bd 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -95,8 +95,6 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
> g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
>
> - qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
> -
> if (xive_nsr_indicates_exception(sig_ring, nsr)) {
> uint8_t cppr = sig_regs[TM_PIPR];
> uint8_t ring;
> @@ -117,6 +115,7 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
>
> /* Clear the exception from NSR */
> sig_regs[TM_NSR] = 0;
> + qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
>
> trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
> regs[TM_IPB], sig_regs[TM_PIPR],
* Re: [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function
2025-05-12 3:10 ` [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Nicholas Piggin
@ 2025-05-15 15:18 ` Mike Kowal
2025-05-15 16:05 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:18 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Have xive_tctx_notify() also set the new PIPR value and rename it to
> xive_tctx_pipr_set(). This can replace the last xive_tctx_pipr_update()
> caller because it does not need to update IPB (it already sets it).
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 39 +++++++++++----------------------------
> hw/intc/xive2.c | 16 +++++++---------
> include/hw/ppc/xive.h | 5 ++---
> 3 files changed, 20 insertions(+), 40 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index d5dbeab6bd..4659821d4a 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -125,12 +125,16 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
> return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
> }
>
> -void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> +/* Change PIPR and calculate NSR and irq based on PIPR, CPPR, group */
> +void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
> + uint8_t group_level)
> {
> uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> - if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
> + sig_regs[TM_PIPR] = pipr;
> +
> + if (pipr < sig_regs[TM_CPPR]) {
> switch (ring) {
> case TM_QW1_OS:
> sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> @@ -145,7 +149,7 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> g_assert_not_reached();
> }
> trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], sig_regs[TM_PIPR],
> + regs[TM_IPB], pipr,
> sig_regs[TM_CPPR], sig_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> } else {
> @@ -213,29 +217,10 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - sig_regs[TM_PIPR] = pipr_min;
> -
> - /* CPPR has changed, check if we need to raise a pending exception */
> - xive_tctx_notify(tctx, ring_min, 0);
> + /* CPPR has changed, this may present or preclude a pending exception */
> + xive_tctx_pipr_set(tctx, ring_min, pipr_min, 0);
> }
>
> -void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> - uint8_t group_level)
> -{
> - uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> - uint8_t *regs = &tctx->regs[ring];
> -
> - if (group_level == 0) {
> - /* VP-specific */
> - regs[TM_IPB] |= xive_priority_to_ipb(priority);
> - sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> - } else {
> - /* VP-group */
> - sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> - }
> - xive_tctx_notify(tctx, ring, group_level);
> - }
> -
> static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> {
> uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> @@ -244,8 +229,7 @@ static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> /* Does not support a presented group interrupt */
> g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
>
> - sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> - xive_tctx_notify(tctx, ring, 0);
> + xive_tctx_pipr_set(tctx, ring, xive_ipb_to_pipr(regs[TM_IPB]), 0);
> }
>
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> @@ -264,8 +248,7 @@ void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> }
> g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
> g_assert(pipr < sig_regs[TM_PIPR]);
> - sig_regs[TM_PIPR] = pipr;
> - xive_tctx_notify(tctx, ring, group_level);
> + xive_tctx_pipr_set(tctx, ring, pipr, group_level);
> }
>
> /*
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index b9ee8c9e9f..8c8dab3aa2 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -966,10 +966,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> /*
> - * Compute the PIPR based on the restored state.
> + * Set the PIPR/NSR based on the restored state.
> * It will raise the External interrupt signal if needed.
> */
> - xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> + xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> }
>
> /*
> @@ -1144,8 +1144,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
>
> /* interrupt is VP directed, pending in IPB */
> - sig_regs[TM_PIPR] = cppr;
> - xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
> + xive_tctx_pipr_set(tctx, ring, cppr, 0);
> return;
> } else {
> /* CPPR was lowered, but still above PIPR. No action needed. */
> @@ -1255,11 +1254,10 @@ again:
> pipr_min = backlog_prio;
> }
>
> - /* PIPR should not be set to a value greater than CPPR */
> - sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> -
> - /* CPPR has changed, check if we need to raise a pending exception */
> - xive_tctx_notify(tctx, ring_min, group_level);
> + if (pipr_min > cppr) {
> + pipr_min = cppr;
> + }
> + xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
> }
>
> void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index a3c2f50ece..2372d1014b 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -584,12 +584,11 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
> Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
> void xive_tctx_reset(XiveTCTX *tctx);
> void xive_tctx_destroy(XiveTCTX *tctx);
> -void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> - uint8_t group_level);
> +void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level);
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> -void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
> uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
>
> /*
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
2025-05-14 20:10 ` Mike Kowal
@ 2025-05-15 15:21 ` Mike Kowal
2025-05-15 23:51 ` Nicholas Piggin
2025-05-15 23:43 ` Nicholas Piggin
1 sibling, 1 reply; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:21 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/14/2025 3:10 PM, Mike Kowal wrote:
>
> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>> xive_tctx_pipr_update() is used for multiple things. In an effort
>> to make things simpler and less overloaded, split out the function
>> that is used to present a new interrupt to the tctx.
>
>
> Why is this a separate commit from 30? The change here does not do
> anything different.
> Regardless, taking this patch set as a whole, it's good by me.
Okay, I see the rest of this is done in patch 35...
>
> Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
>
> Thanks, MAK
>
>
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> hw/intc/xive.c | 8 +++++++-
>> hw/intc/xive2.c | 2 +-
>> include/hw/ppc/xive.h | 2 ++
>> 3 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index 27b5a21371..bf4c0634ca 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -225,6 +225,12 @@ void xive_tctx_pipr_update(XiveTCTX *tctx,
>> uint8_t ring, uint8_t priority,
>> xive_tctx_notify(tctx, ring, group_level);
>> }
>> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t
>> priority,
>> + uint8_t group_level)
>> +{
>> + xive_tctx_pipr_update(tctx, ring, priority, group_level);
>> +}
>> +
>> /*
>> * XIVE Thread Interrupt Management Area (TIMA)
>> */
>> @@ -2040,7 +2046,7 @@ void xive_router_end_notify(XiveRouter *xrtr,
>> XiveEAS *eas)
>> xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
>> &match)) {
>> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
>> - xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
>> + xive_tctx_pipr_present(match.tctx, match.ring, priority, 0);
>> return;
>> }
>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>> index cae4092198..f91109b84a 100644
>> --- a/hw/intc/xive2.c
>> +++ b/hw/intc/xive2.c
>> @@ -1652,7 +1652,7 @@ static void xive2_router_end_notify(Xive2Router
>> *xrtr, uint8_t end_blk,
>> group_level = xive_get_group_level(crowd, cam_ignore,
>> nvx_blk, nvx_idx);
>> trace_xive_presenter_notify(nvx_blk, nvx_idx, ring,
>> group_level);
>> - xive_tctx_pipr_update(tctx, ring, priority, group_level);
>> + xive_tctx_pipr_present(tctx, ring, priority, group_level);
>> return;
>> }
>> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
>> index 8152a9df3d..0d6b11e818 100644
>> --- a/include/hw/ppc/xive.h
>> +++ b/include/hw/ppc/xive.h
>> @@ -562,6 +562,8 @@ void xive_tctx_reset(XiveTCTX *tctx);
>> void xive_tctx_destroy(XiveTCTX *tctx);
>> void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t
>> priority,
>> uint8_t group_level);
>> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t
>> priority,
>> + uint8_t group_level);
>> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
>> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t
>> group_level);
>> uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
* Re: [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP
2025-05-12 3:10 ` [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Nicholas Piggin
@ 2025-05-15 15:21 ` Mike Kowal
2025-05-15 15:55 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:21 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> xive_tctx_pipr_present() as implemented with xive_tctx_pipr_update()
> causes VP-directed (group==0) interrupt to be presented in PIPR and NSR
> despite being a lower priority than the currently presented group
> interrupt.
>
> This must not happen. The IPB bit should record the low priority VP
> interrupt, but PIPR and NSR must not present the lower priority
> interrupt.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index bf4c0634ca..25f6c69c44 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -228,7 +228,23 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> {
> - xive_tctx_pipr_update(tctx, ring, priority, group_level);
> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *regs = &tctx->regs[ring];
> + uint8_t pipr = xive_priority_to_pipr(priority);
> +
> + if (group_level == 0) {
> + regs[TM_IPB] |= xive_priority_to_ipb(priority);
> + if (pipr >= aregs[TM_PIPR]) {
> + /* VP interrupts can come here with lower priority than PIPR */
> + return;
> + }
> + }
> + g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
> + g_assert(pipr < aregs[TM_PIPR]);
> + aregs[TM_PIPR] = pipr;
> + xive_tctx_notify(tctx, ring, group_level);
> }
>
> /*
* Re: [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR
2025-05-12 3:10 ` [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR Nicholas Piggin
@ 2025-05-15 15:24 ` Mike Kowal
2025-05-15 16:06 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:24 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> The second part of the set CPPR operation is to process (or re-present)
> any pending interrupts after CPPR is adjusted.
>
> Split this presentation processing out into a standalone function that
> can be used in other places.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 137 +++++++++++++++++++++++++++---------------------
> 1 file changed, 76 insertions(+), 61 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 8c8dab3aa2..aa06bfda77 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1098,66 +1098,19 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
> }
>
> -/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> -static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> +/* Re-calculate and present pending interrupts */
> +static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> {
> - uint8_t *sig_regs = &tctx->regs[ring];
> + uint8_t *sig_regs = &tctx->regs[sig_ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t old_cppr, backlog_prio, first_group, group_level;
> + uint8_t backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> + uint8_t cppr = sig_regs[TM_CPPR];
> bool group_enabled;
> - uint8_t nvp_blk;
> - uint32_t nvp_idx;
> Xive2Nvp nvp;
> int rc;
> - uint8_t nsr = sig_regs[TM_NSR];
> -
> - g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
> -
> - g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> - g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> - g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> -
> - /* XXX: should show pool IPB for PHYS ring */
> - trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> - sig_regs[TM_IPB], sig_regs[TM_PIPR],
> - cppr, nsr);
> -
> - if (cppr > XIVE_PRIORITY_MAX) {
> - cppr = 0xff;
> - }
> -
> - old_cppr = sig_regs[TM_CPPR];
> - sig_regs[TM_CPPR] = cppr;
> -
> - /* Handle increased CPPR priority (lower value) */
> - if (cppr < old_cppr) {
> - if (cppr <= sig_regs[TM_PIPR]) {
> - /* CPPR lowered below PIPR, must un-present interrupt */
> - if (xive_nsr_indicates_exception(ring, nsr)) {
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - /* redistribute precluded active grp interrupt */
> - xive2_redistribute(xrtr, tctx,
> - xive_nsr_exception_ring(ring, nsr));
> - return;
> - }
> - }
>
> - /* interrupt is VP directed, pending in IPB */
> - xive_tctx_pipr_set(tctx, ring, cppr, 0);
> - return;
> - } else {
> - /* CPPR was lowered, but still above PIPR. No action needed. */
> - return;
> - }
> - }
> -
> - /* CPPR didn't change, nothing needs to be done */
> - if (cppr == old_cppr) {
> - return;
> - }
> -
> - /* CPPR priority decreased (higher value) */
> + g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
>
> /*
> * Recompute the PIPR based on local pending interrupts. It will
> @@ -1167,11 +1120,11 @@ again:
> pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
> group_enabled = !!sig_regs[TM_LGS];
> lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
> - ring_min = ring;
> + ring_min = sig_ring;
> group_level = 0;
>
> /* PHYS updates also depend on POOL values */
> - if (ring == TM_QW3_HV_PHYS) {
> + if (sig_ring == TM_QW3_HV_PHYS) {
> uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
>
> /* POOL values only matter if POOL ctx is valid */
> @@ -1201,20 +1154,25 @@ again:
> }
> }
>
> - rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> - if (rc) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
> - return;
> - }
> -
> if (group_enabled &&
> lsmfb_min < cppr &&
> lsmfb_min < pipr_min) {
> +
> + uint8_t nvp_blk;
> + uint32_t nvp_idx;
> +
> /*
> * Thread has seen a group interrupt with a higher priority
> * than the new cppr or pending local interrupt. Check the
> * backlog
> */
> + rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> + if (rc) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid "
> + "context\n");
> + return;
> + }
> +
> if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
> nvp_blk, nvp_idx);
> @@ -1260,6 +1218,63 @@ again:
> xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
> }
>
> +/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> +static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
> +{
> + uint8_t *sig_regs = &tctx->regs[sig_ring];
> + Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> + uint8_t old_cppr;
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + /* XXX: should show pool IPB for PHYS ring */
> + trace_xive_tctx_set_cppr(tctx->cs->cpu_index, sig_ring,
> + sig_regs[TM_IPB], sig_regs[TM_PIPR],
> + cppr, nsr);
> +
> + if (cppr > XIVE_PRIORITY_MAX) {
> + cppr = 0xff;
> + }
> +
> + old_cppr = sig_regs[TM_CPPR];
> + sig_regs[TM_CPPR] = cppr;
> +
> + /* Handle increased CPPR priority (lower value) */
> + if (cppr < old_cppr) {
> + if (cppr <= sig_regs[TM_PIPR]) {
> + /* CPPR lowered below PIPR, must un-present interrupt */
> + if (xive_nsr_indicates_exception(sig_ring, nsr)) {
> + if (xive_nsr_indicates_group_exception(sig_ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + xive2_redistribute(xrtr, tctx,
> + xive_nsr_exception_ring(sig_ring, nsr));
> + return;
> + }
> + }
> +
> + /* interrupt is VP directed, pending in IPB */
> + xive_tctx_pipr_set(tctx, sig_ring, cppr, 0);
> + return;
> + } else {
> + /* CPPR was lowered, but still above PIPR. No action needed. */
> + return;
> + }
> + }
> +
> + /* CPPR didn't change, nothing needs to be done */
> + if (cppr == old_cppr) {
> + return;
> + }
> +
> + /* CPPR priority decreased (higher value) */
> + xive2_tctx_process_pending(tctx, sig_ring);
> +}
> +
> void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size)
> {
* Re: [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push
2025-05-12 3:10 ` [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push Nicholas Piggin
@ 2025-05-15 15:25 ` Mike Kowal
2025-05-15 16:06 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:25 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> OS-push operation must re-present pending interrupts. Use the
> newly created xive2_tctx_process_pending() function instead of
> duplicating the logic.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 42 ++++++++++--------------------------------
> 1 file changed, 10 insertions(+), 32 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index aa06bfda77..0fdf6a4f20 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -903,18 +903,14 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> return cppr;
> }
>
> +static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
> +
> static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - XivePresenter *xptr = XIVE_PRESENTER(xrtr);
> - uint8_t ipb;
> - uint8_t backlog_level;
> - uint8_t group_level;
> - uint8_t first_group;
> - uint8_t backlog_prio;
> - uint8_t group_prio;
> uint8_t *regs = &tctx->regs[TM_QW1_OS];
> + uint8_t ipb;
> Xive2Nvp nvp;
>
> /*
> @@ -946,30 +942,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> - backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
> - backlog_level = 0;
> -
> - first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
> - if (first_group && regs[TM_LSMFB] < backlog_prio) {
> - group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
> - first_group, &group_level);
> - regs[TM_LSMFB] = group_prio;
> - if (regs[TM_LGS] && group_prio < backlog_prio &&
> - group_prio < regs[TM_CPPR]) {
> -
> - /* VP can take a group interrupt */
> - xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
> - group_prio, group_level);
> - backlog_prio = group_prio;
> - backlog_level = group_level;
> - }
> - }
>
> - /*
> - * Set the PIPR/NSR based on the restored state.
> - * It will raise the External interrupt signal if needed.
> - */
> - xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> + xive2_tctx_process_pending(tctx, TM_QW1_OS);
> }
>
> /*
> @@ -1103,8 +1077,12 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> {
> uint8_t *sig_regs = &tctx->regs[sig_ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t backlog_prio, first_group, group_level;
> - uint8_t pipr_min, lsmfb_min, ring_min;
> + uint8_t backlog_prio;
> + uint8_t first_group;
> + uint8_t group_level;
> + uint8_t pipr_min;
> + uint8_t lsmfb_min;
> + uint8_t ring_min;
> uint8_t cppr = sig_regs[TM_CPPR];
> bool group_enabled;
> Xive2Nvp nvp;
* Re: [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set
2025-05-12 3:10 ` [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Nicholas Piggin
@ 2025-05-15 15:26 ` Mike Kowal
2025-05-15 16:07 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:26 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> When CPPR priority is decreased, pending interrupts do not need to be
> re-checked if one is already presented because by definition that will
> be the highest priority.
>
> This prevents a presented group interrupt from being lost.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 0fdf6a4f20..ace5871706 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1250,7 +1250,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
> }
>
> /* CPPR priority decreased (higher value) */
> - xive2_tctx_process_pending(tctx, sig_ring);
> + if (!xive_nsr_indicates_exception(sig_ring, nsr)) {
> + xive2_tctx_process_pending(tctx, sig_ring);
> + }
> }
>
> void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
* Re: [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed
2025-05-12 3:10 ` [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed Nicholas Piggin
@ 2025-05-15 15:28 ` Mike Kowal
2025-05-15 16:08 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:28 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Add some assertions to try to ensure presented group interrupts do
> not get lost without being redistributed, if they become precluded
> by CPPR or preempted by a higher priority interrupt.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 2 ++
> hw/intc/xive2.c | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 4659821d4a..81af59f0ec 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -132,6 +132,8 @@ void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
> uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> + g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
> +
> sig_regs[TM_PIPR] = pipr;
>
> if (pipr < sig_regs[TM_CPPR]) {
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index ace5871706..e3060810d3 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1089,6 +1089,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> int rc;
>
> g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
> + g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
>
> /*
> * Recompute the PIPR based on local pending interrupts. It will
* Re: [PATCH 01/50] ppc/xive: Fix xive trace event output
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
2025-05-14 14:26 ` Caleb Schlossin
2025-05-14 18:41 ` Mike Kowal
@ 2025-05-15 15:30 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:30 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Typo, IBP should be IPB.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/trace-events | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index 0ba9a02e73..f77f9733c9 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -274,9 +274,9 @@ kvm_xive_cpu_connect(uint32_t id) "connect CPU%d to KVM device"
> kvm_xive_source_reset(uint32_t srcno) "IRQ 0x%x"
>
> # xive.c
> -xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
> -xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
> -xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> +xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
> +xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
> +xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
* Re: [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs
2025-05-12 3:10 ` [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:42 ` Mike Kowal
@ 2025-05-15 15:31 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:31 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Report access size in XIVE TM operation error logs.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 3eb28c2265..80b07a0afe 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -326,7 +326,7 @@ static void xive_tm_raw_write(XiveTCTX *tctx, hwaddr offset, uint64_t value,
> */
> if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA @%"
> - HWADDR_PRIx"\n", offset);
> + HWADDR_PRIx" size %d\n", offset, size);
> return;
> }
>
> @@ -357,7 +357,7 @@ static uint64_t xive_tm_raw_read(XiveTCTX *tctx, hwaddr offset, unsigned size)
> */
> if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access at TIMA @%"
> - HWADDR_PRIx"\n", offset);
> + HWADDR_PRIx" size %d\n", offset, size);
> return -1;
> }
>
> @@ -688,7 +688,7 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
> - "@%"HWADDR_PRIx"\n", offset);
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> } else {
> xto->write_handler(xptr, tctx, offset, value, size);
> }
> @@ -727,7 +727,7 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
> - "@%"HWADDR_PRIx"\n", offset);
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> return -1;
> }
> ret = xto->read_handler(xptr, tctx, offset, size);
* Re: [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:46 ` Mike Kowal
@ 2025-05-15 15:34 ` Miles Glenn
2025-05-16 0:08 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:34 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> In a multi chip environment there will be remote/forwarded VSDs. The check
> to find a matching INT controller (XIVE) of the remote block number was
> checking the INTs chip number. Block numbers are not tied to a chip number.
> The matching remote INT is the one that matches the forwarded VSD address
> with VSD types associated MMIO BAR.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d1713b406c..30b4ab2efe 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -102,12 +102,10 @@ static uint32_t pnv_xive2_block_id(PnvXive2 *xive)
> }
>
> /*
> - * Remote access to controllers. HW uses MMIOs. For now, a simple scan
> - * of the chips is good enough.
> - *
> - * TODO: Block scope support
> + * Remote access to INT controllers. HW uses MMIOs(?). For now, a simple
> + * scan of all the chips INT controller is good enough.
> */
> -static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> +static PnvXive2 *pnv_xive2_get_remote(uint32_t vsd_type, hwaddr fwd_addr)
> {
> PnvMachineState *pnv = PNV_MACHINE(qdev_get_machine());
> int i;
> @@ -116,10 +114,22 @@ static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> PnvXive2 *xive = &chip10->xive;
>
> - if (pnv_xive2_block_id(xive) == blk) {
> + /*
> + * Is this the XIVE matching the forwarded VSD address is for this
> + * VSD type
> + */
> + if ((vsd_type == VST_ESB && fwd_addr == xive->esb_base) ||
> + (vsd_type == VST_END && fwd_addr == xive->end_base) ||
> + ((vsd_type == VST_NVP ||
> + vsd_type == VST_NVG) && fwd_addr == xive->nvpg_base) ||
> + (vsd_type == VST_NVC && fwd_addr == xive->nvc_base)) {
> return xive;
> }
> }
> +
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "XIVE: >>>>> pnv_xive2_get_remote() vsd_type %u fwd_addr 0x%lX NOT FOUND\n",
> + vsd_type, fwd_addr);
> return NULL;
> }
>
> @@ -252,8 +262,7 @@ static uint64_t pnv_xive2_vst_addr(PnvXive2 *xive, uint32_t type, uint8_t blk,
>
> /* Remote VST access */
> if (GETFIELD(VSD_MODE, vsd) == VSD_MODE_FORWARD) {
> - xive = pnv_xive2_get_remote(blk);
> -
> + xive = pnv_xive2_get_remote(type, (vsd & VSD_ADDRESS_MASK));
> return xive ? pnv_xive2_vst_addr(xive, type, blk, idx) : 0;
> }
>
* Re: [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring
2025-05-12 3:10 ` [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring Nicholas Piggin
@ 2025-05-15 15:36 ` Mike Kowal
2025-05-15 16:09 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:36 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> In preparation to implement POOL context push, add support for POOL
> NVP context save/restore.
>
> The NVP p bit is defined in the spec as follows:
>
> If TRUE, the CPPR of a Pool VP in the NVP is updated during store of
> the context with the CPPR of the Hard context it was running under.
>
> It's not clear whether non-pool VPs always or never get CPPR updated.
> Before this patch, OS contexts always save CPPR, so we will assume that
> is the behaviour.
Reviewed-by: Michael Kowal<kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 51 +++++++++++++++++++++++++------------
> include/hw/ppc/xive2_regs.h | 1 +
> 2 files changed, 36 insertions(+), 16 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index e3060810d3..d899c1fb14 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -512,12 +512,13 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
> */
>
> static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> - uint8_t nvp_blk, uint32_t nvp_idx,
> - uint8_t ring)
> + uint8_t ring,
> + uint8_t nvp_blk, uint32_t nvp_idx)
> {
> CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
> uint32_t pir = env->spr_cb[SPR_PIR].default_value;
> Xive2Nvp nvp;
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
> @@ -553,7 +554,14 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
> - nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
> +
> + if ((nvp.w0 & NVP2_W0_P) || ring != TM_QW2_HV_POOL) {
> + /*
> + * Non-pool contexts always save CPPR (ignore p bit). XXX: Clarify
> + * whether that is the correct behaviour.
> + */
> + nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, sig_regs[TM_CPPR]);
> + }
> if (nvp.w0 & NVP2_W0_L) {
> /*
> * Typically not used. If LSMFB is restored with 0, it will
> @@ -722,7 +730,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> - xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
> + xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
> }
>
> /*
> @@ -863,12 +871,15 @@ void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tm_pull_ctx_ol(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
> }
>
> -static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> - uint8_t nvp_blk, uint32_t nvp_idx,
> - Xive2Nvp *nvp)
> +static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> + uint8_t ring,
> + uint8_t nvp_blk, uint32_t nvp_idx,
> + Xive2Nvp *nvp)
> {
> CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
> uint32_t pir = env->spr_cb[SPR_PIR].default_value;
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t *regs = &tctx->regs[ring];
> uint8_t cppr;
>
> if (!xive2_nvp_is_hw(nvp)) {
> @@ -881,10 +892,10 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp->w2 = xive_set_field32(NVP2_W2_CPPR, nvp->w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
>
> - tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
> - tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
> - tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
> - tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
> + sig_regs[TM_CPPR] = cppr;
> + regs[TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
> + regs[TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
> + regs[TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
>
> nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
> nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
> @@ -893,9 +904,18 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> /*
> * Checkout privilege: 0:OS, 1:Pool, 2:Hard
> *
> - * TODO: we only support OS push/pull
> + * TODO: we don't support hard push/pull
> */
> - nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
> + switch (ring) {
> + case TM_QW1_OS:
> + nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
> + break;
> + case TM_QW2_HV_POOL:
> + nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 1);
> + break;
> + default:
> + g_assert_not_reached();
> + }
>
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 1);
>
> @@ -930,9 +950,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> /* Automatically restore thread context registers */
> - if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE &&
> - do_restore) {
> - xive2_tctx_restore_os_ctx(xrtr, tctx, nvp_blk, nvp_idx, &nvp);
> + if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
> + xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
> }
>
> ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index f82054661b..2a3e60abad 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -158,6 +158,7 @@ typedef struct Xive2Nvp {
> #define NVP2_W0_L PPC_BIT32(8)
> #define NVP2_W0_G PPC_BIT32(9)
> #define NVP2_W0_T PPC_BIT32(10)
> +#define NVP2_W0_P PPC_BIT32(11)
> #define NVP2_W0_ESC_END PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
> #define NVP2_W0_PGOFIRST PPC_BITMASK32(26, 31)
> uint32_t w1;
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority
2025-05-12 3:10 ` [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:48 ` Mike Kowal
@ 2025-05-15 15:36 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:36 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Pushing a context and loading IPB from NVP is defined to merge ('or')
> that IPB into the TIMA IPB register. PIPR should therefore be calculated
> based on the final IPB value, not just the NVP value.
>
> Fixes: 9d2b6058c5b ("ppc/xive2: Add grouping level to notification")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 790152a2a6..4dd04a0398 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -835,8 +835,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
> }
> + /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> - backlog_prio = xive_ipb_to_pipr(ipb);
> + backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
> backlog_level = 0;
>
> first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (49 preceding siblings ...)
2025-05-12 3:10 ` [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push Nicholas Piggin
@ 2025-05-15 15:36 ` Cédric Le Goater
2025-05-16 1:29 ` Nicholas Piggin
2025-07-03 9:37 ` Gautam Menghani
51 siblings, 1 reply; 192+ messages in thread
From: Cédric Le Goater @ 2025-05-15 15:36 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On 5/12/25 05:10, Nicholas Piggin wrote:
> These changes get the powernv xive2 to the point where it is able to
> run PowerVM with good stability.
>
> * Various bug fixes, particularly around lost interrupts.
> * Major group interrupt work, in particular around redistributing
> interrupts. Upstream group support is not in a complete or usable
> state as it is.
> * Significant context push/pull improvements; pool and phys context
> handling in particular was quite incomplete beyond the trivial OPAL
> case that pushes at boot.
> * Improved tracing and checking for unimp and guest error situations.
> * Various other missing feature support.
>
> The ordering and grouping of patches in the series is not perfect,
> because this has been an ongoing development, and PowerVM only started
> to become stable toward the end. I did try to rearrange and improve
> things, but some changes were not worth the rebasing cost (e.g., some
> of the pool/phys pull redistribution patches should ideally have been
> squashed or moved together), so please bear that in mind. Suggestions
> for further rearranging the series are fine, but I might just find they
> are too much effort to be worthwhile.
>
> Thanks,
> Nick
>
> Glenn Miles (12):
> ppc/xive2: Fix calculation of END queue sizes
> ppc/xive2: Use fair irq target search algorithm
> ppc/xive2: Fix irq preempted by lower priority group irq
> ppc/xive2: Fix treatment of PIPR in CPPR update
> pnv/xive2: Support ESB Escalation
> ppc/xive2: add interrupt priority configuration flags
> ppc/xive2: Support redistribution of group interrupts
> ppc/xive: Add more interrupt notification tracing
> ppc/xive2: Improve pool regs variable name
> ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
> ppc/xive2: Redistribute group interrupt precluded by CPPR update
> ppc/xive2: redistribute irqs for pool and phys ctx pull
>
> Michael Kowal (4):
> ppc/xive2: Remote VSDs need to match on forwarding address
> ppc/xive2: Reset Generation Flipped bit on END Cache Watch
> pnv/xive2: Print value in invalid register write logging
> pnv/xive2: Permit valid writes to VC/PC Flush Control registers
>
> Nicholas Piggin (34):
> ppc/xive: Fix xive trace event output
> ppc/xive: Report access size in XIVE TM operation error logs
> ppc/xive2: fix context push calculation of IPB priority
> ppc/xive: Fix PHYS NSR ring matching
> ppc/xive2: Do not present group interrupt on OS-push if precluded by
> CPPR
> ppc/xive2: Set CPPR delivery should account for group priority
> ppc/xive: tctx_notify should clear the precluded interrupt
> ppc/xive: Explicitly zero NSR after accepting
> ppc/xive: Move NSR decoding into helper functions
> ppc/xive: Fix pulling pool and phys contexts
> pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
> ppc/xive: Change presenter .match_nvt to match not present
> ppc/xive2: Redistribute group interrupt preempted by higher priority
> interrupt
> ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
> ppc/xive: Fix high prio group interrupt being preempted by low prio VP
> ppc/xive: Split xive recompute from IPB function
> ppc/xive: tctx signaling registers rework
> ppc/xive: tctx_accept only lower irq line if an interrupt was
> presented
> ppc/xive: Add xive_tctx_pipr_set() helper function
> ppc/xive2: split tctx presentation processing from set CPPR
> ppc/xive2: Consolidate presentation processing in context push
> ppc/xive2: Avoid needless interrupt re-check on CPPR set
> ppc/xive: Assert group interrupts were redistributed
> ppc/xive2: implement NVP context save restore for POOL ring
> ppc/xive2: Prevent pulling of pool context losing phys interrupt
> ppc/xive: Redistribute phys after pulling of pool context
> ppc/xive: Check TIMA operations validity
> ppc/xive2: Implement pool context push TIMA op
> ppc/xive2: redistribute group interrupts on context push
> ppc/xive2: Implement set_os_pending TIMA op
> ppc/xive2: Implement POOL LGS push TIMA op
> ppc/xive2: Implement PHYS ring VP push TIMA op
> ppc/xive: Split need_resend into restore_nvp
> ppc/xive2: Enable lower level contexts on VP push
>
> hw/intc/pnv_xive.c | 16 +-
> hw/intc/pnv_xive2.c | 139 +++++--
> hw/intc/pnv_xive2_regs.h | 1 +
> hw/intc/spapr_xive.c | 18 +-
> hw/intc/trace-events | 12 +-
> hw/intc/xive.c | 555 ++++++++++++++++++----------
> hw/intc/xive2.c | 717 +++++++++++++++++++++++++++---------
> hw/ppc/pnv.c | 48 +--
> hw/ppc/spapr.c | 21 +-
> include/hw/ppc/xive.h | 66 +++-
> include/hw/ppc/xive2.h | 22 +-
> include/hw/ppc/xive2_regs.h | 22 +-
> 12 files changed, 1145 insertions(+), 492 deletions(-)
>
I am impressed :) and glad that you are still taking care of XIVE.
I suggest adding new names under the XIVE entry in the MAINTAINERS file.
Thanks,
C.
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching
2025-05-12 3:10 ` [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:49 ` Mike Kowal
@ 2025-05-15 15:39 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:39 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Test that the NSR exception bit field is equal to the pool ring value,
> rather than any common bits set, which is more correct (although there
> is no practical bug because the LSI NSR type is not implemented and
> POOL/PHYS NSR are encoded with exclusive bits).
>
> Fixes: 4c3ccac636 ("pnv/xive: Add special handling for pool targets")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 80b07a0afe..cebe409a1a 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -54,7 +54,8 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> uint8_t *alt_regs;
>
> /* POOL interrupt uses IPB in QW2, POOL ring */
> - if ((ring == TM_QW3_HV_PHYS) && (nsr & (TM_QW3_NSR_HE_POOL << 6))) {
> + if ((ring == TM_QW3_HV_PHYS) &&
> + ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
> alt_ring = TM_QW2_HV_POOL;
> } else {
> alt_ring = ring;
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:50 ` Mike Kowal
@ 2025-05-15 15:41 ` Miles Glenn
2025-05-16 0:09 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:41 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> When the END Event Queue wraps, the END EQ Generation bit is flipped and
> the Generation Flipped bit is set to one. On an END cache watch read
> operation, the Generation Flipped bit needs to be reset.
>
> While debugging, the "END not valid" error messages were also modified
> to include the calling method, since they were all identical.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 3 ++-
> hw/intc/xive2.c | 4 ++--
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 30b4ab2efe..72cdf0f20c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1325,10 +1325,11 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH3_DATA0:
> /*
> * Load DATA registers from cache with data requested by the
> - * SPEC register
> + * SPEC register. Clear gen_flipped bit in word 1.
> */
> watch_engine = (offset - VC_ENDC_WATCH0_DATA0) >> 6;
> pnv_xive2_end_cache_load(xive, watch_engine);
> + xive->vc_regs[reg] &= ~(uint64_t)END2_W1_GEN_FLIPPED;
> val = xive->vc_regs[reg];
> break;
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 4dd04a0398..453fe37f18 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -374,8 +374,8 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> qgen ^= 1;
> end->w1 = xive_set_field32(END2_W1_GENERATION, end->w1, qgen);
>
> - /* TODO(PowerNV): reset GF bit on a cache watch operation */
> - end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, qgen);
> + /* Set gen flipped to 1, it gets reset on a cache watch operation */
> + end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, 1);
> }
> end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
> }
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
2025-05-14 18:51 ` Mike Kowal
@ 2025-05-15 15:42 ` Miles Glenn
2025-05-16 0:12 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:42 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The current xive algorithm for finding a matching group vCPU
> target always uses the first vCPU found. And, since it always
> starts the search with thread 0 of a core, thread 0 is almost
> always used to handle group interrupts. This can lead to additional
> interrupt latency and poor performance for interrupt-intensive
> workloads.
>
> Change this to use a simple round-robin algorithm for deciding which
> thread number to use when starting a search, leading to a more
> distributed use of threads for handling group interrupts.
>
> [npiggin: Also round-robin among threads, not just cores]
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 72cdf0f20c..d7ca97ecbb 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> int i, j;
> bool gen1_tima_os =
> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> + static int next_start_core;
> + static int next_start_thread;
> + int start_core = next_start_core;
> + int start_thread = next_start_thread;
>
> for (i = 0; i < chip->nr_cores; i++) {
> - PnvCore *pc = chip->cores[i];
> + PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
> CPUCore *cc = CPU_CORE(pc);
>
> for (j = 0; j < cc->nr_threads; j++) {
> - PowerPCCPU *cpu = pc->threads[j];
> + /* Start search for match with different thread each call */
> + PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
> XiveTCTX *tctx;
> int ring;
>
> @@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> if (!match->tctx) {
> match->ring = ring;
> match->tctx = tctx;
> +
> + next_start_thread = j + start_thread + 1;
> + if (next_start_thread >= cc->nr_threads) {
> + next_start_thread = 0;
> + next_start_core = i + start_core + 1;
> + if (next_start_core >= chip->nr_cores) {
> + next_start_core = 0;
> + }
> + }
> }
> count++;
> }
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt
2025-05-12 3:10 ` [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Nicholas Piggin
@ 2025-05-15 15:43 ` Mike Kowal
2025-05-15 16:10 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:43 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> When the pool context is pulled, the shared pool/phys signal is
> reset, which loses the qemu irq if a phys interrupt was presented.
>
> Only reset the signal if a pool irq was presented.
Reviewed-by: Michael Kowal<kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 18 ++++++++++--------
> 1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index d899c1fb14..aeeb901b6a 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -727,20 +727,22 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_redistribute(xrtr, tctx, cur_ring);
> }
> }
> +
> + /*
> + * Lower external interrupt line of requested ring and below except for
> + * USER, which doesn't exist.
> + */
> + if (xive_nsr_indicates_exception(cur_ring, nsr)) {
> + if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
> + xive_tctx_reset_signal(tctx, cur_ring);
> + }
> + }
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
> }
>
> - /*
> - * Lower external interrupt line of requested ring and below except for
> - * USER, which doesn't exist.
> - */
> - for (cur_ring = TM_QW1_OS; cur_ring <= ring;
> - cur_ring += XIVE_TM_RING_SIZE) {
> - xive_tctx_reset_signal(tctx, cur_ring);
> - }
> return target_ringw2;
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
2025-05-12 3:10 ` [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
2025-05-14 18:54 ` Mike Kowal
@ 2025-05-15 15:43 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:43 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Group interrupts should not be taken from the backlog and presented
> if they are precluded by CPPR.
>
> Fixes: 855434b3b8 ("ppc/xive2: Process group backlog when pushing an OS context")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 1971c05fa1..8ede95b671 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -845,7 +845,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
> first_group, &group_level);
> regs[TM_LSMFB] = group_prio;
> - if (regs[TM_LGS] && group_prio < backlog_prio) {
> + if (regs[TM_LGS] && group_prio < backlog_prio &&
> + group_prio < regs[TM_CPPR]) {
> +
> /* VP can take a group interrupt */
> xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
> group_prio, group_level);
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push
2025-05-12 3:10 ` [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push Nicholas Piggin
@ 2025-05-15 15:44 ` Mike Kowal
2025-05-15 16:13 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:44 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> When pushing a context, any presented group interrupt should be
> redistributed before processing pending interrupts, so that the
> highest priority interrupt is presented.
>
> This can occur when pushing the POOL ring while the valid PHYS
> ring has a group interrupt presented, because they share signal
> registers.
Reviewed-by: Michael Kowal<kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 21cd07df68..392ac6077e 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -945,8 +945,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
> - uint8_t ipb;
> + uint8_t ipb, nsr = sig_regs[TM_NSR];
> Xive2Nvp nvp;
>
> /*
> @@ -978,6 +979,11 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
>
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
> + xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> + }
> xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> TM_QW3_HV_PHYS : ring);
> }
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority
2025-05-12 3:10 ` [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
2025-05-14 18:57 ` Mike Kowal
@ 2025-05-15 15:45 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:45 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> The group interrupt delivery flow selects the group backlog scan if
> LSMFB < IPB, but that scan may find an interrupt with a priority >=
> IPB. In that case, the VP-direct interrupt should be chosen. This
> extends to selecting the lowest prio between POOL and PHYS rings.
>
> Implement this just by re-starting the selection logic if the
> backlog irq was not found or priority did not match LSMFB (LSMFB
> is updated so next time around it would see the right value and
> not loop infinitely).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 32 ++++++++++++++++++++++----------
> 1 file changed, 22 insertions(+), 10 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 8ede95b671..de139dcfbf 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -939,7 +939,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
> + uint8_t old_cppr, backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> bool group_enabled;
> uint32_t nvp_blk, nvp_idx;
> @@ -961,10 +961,12 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * Recompute the PIPR based on local pending interrupts. It will
> * be adjusted below if needed in case of pending group interrupts.
> */
> +again:
> pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> group_enabled = !!regs[TM_LGS];
> - lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
> + lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
> ring_min = ring;
> + group_level = 0;
>
> /* PHYS updates also depend on POOL values */
> if (ring == TM_QW3_HV_PHYS) {
> @@ -998,9 +1000,6 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - /* PIPR should not be set to a value greater than CPPR */
> - regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> -
> rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> if (rc) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
> @@ -1019,7 +1018,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> if (group_enabled &&
> lsmfb_min < cppr &&
> - lsmfb_min < regs[TM_PIPR]) {
> + lsmfb_min < pipr_min) {
> /*
> * Thread has seen a group interrupt with a higher priority
> * than the new cppr or pending local interrupt. Check the
> @@ -1048,12 +1047,25 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> nvp_blk, nvp_idx,
> first_group, &group_level);
> tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
> - if (backlog_prio != 0xFF) {
> - xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> - backlog_prio, group_level);
> - regs[TM_PIPR] = backlog_prio;
> + if (backlog_prio != lsmfb_min) {
> + /*
> + * If the group backlog scan finds a less favored or no interrupt,
> + * then re-do the processing which may turn up a more favored
> + * interrupt from IPB or the other pool. Backlog should not
> + * find a priority < LSMFB.
> + */
> + g_assert(backlog_prio >= lsmfb_min);
> + goto again;
> }
> +
> + xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
> + backlog_prio, group_level);
> + pipr_min = backlog_prio;
> }
> +
> + /* PIPR should not be set to a value greater than CPPR */
> + regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> +
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, group_level);
> }
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt
2025-05-12 3:10 ` [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
2025-05-14 18:58 ` Mike Kowal
@ 2025-05-15 15:46 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:46 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> If CPPR is lowered to preclude the pending interrupt, NSR should be
> cleared and the qemu_irq should be lowered. This avoids some cases
> of spurious interrupts.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index cebe409a1a..6293ea4361 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -110,6 +110,9 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> regs[TM_IPB], alt_regs[TM_PIPR],
> alt_regs[TM_CPPR], alt_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> + } else {
> + alt_regs[TM_NSR] = 0;
> + qemu_irq_lower(xive_tctx_output(tctx, ring));
> }
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context
2025-05-12 3:10 ` [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context Nicholas Piggin
@ 2025-05-15 15:46 ` Mike Kowal
2025-05-15 16:11 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:46 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> After pulling the pool context, if a pool irq had been presented and
> was cleared in the process, there could be a pending irq in phys that
> should be presented. Process the phys irq ring after pulling the pool
> ring to catch this case and avoid losing irqs.
Reviewed-by: Michael Kowal<kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +++
> hw/intc/xive2.c | 16 ++++++++++++++--
> 2 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 81af59f0ec..aeca66e56e 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -320,6 +320,9 @@ static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> xive_tctx_reset_signal(tctx, TM_QW1_OS);
> xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
> + /* Re-check phys for interrupts if pool was disabled */
> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW3_HV_PHYS);
> +
> return qw2w2;
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index aeeb901b6a..917ecbaae4 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -683,6 +683,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> xive_tctx_reset_signal(tctx, ring);
> }
>
> +static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
> +
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -739,6 +741,18 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
> }
>
> + if (ring == TM_QW2_HV_POOL) {
> + /* Re-check phys for interrupts if pool was disabled */
> + nsr = tctx->regs[TM_QW3_HV_PHYS + TM_NSR];
> + if (xive_nsr_indicates_exception(TM_QW3_HV_PHYS, nsr)) {
> + /* Ring must be PHYS because POOL would have been redistributed */
> + g_assert(xive_nsr_exception_ring(TM_QW3_HV_PHYS, nsr) ==
> + TM_QW3_HV_PHYS);
> + } else {
> + xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
> + }
> + }
> +
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
> }
> @@ -925,8 +939,6 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> return cppr;
> }
>
> -static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
> -
> static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
* Re: [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting
2025-05-12 3:10 ` [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting Nicholas Piggin
2025-05-14 14:34 ` Caleb Schlossin
2025-05-14 19:07 ` Mike Kowal
@ 2025-05-15 15:47 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:47 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Have xive_tctx_accept clear NSR in one shot rather than masking out bits
> as they are tested, which makes it clear NSR is reset to 0 and avoids
> leaving a partial NSR value in the register.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 6293ea4361..bb40a69c5b 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -68,13 +68,11 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
> - regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
> - } else {
> + if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
> alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> - /* Drop the exception bit and any group/crowd */
> + /* Clear the exception from NSR */
> regs[TM_NSR] = 0;
>
> trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
* Re: [PATCH 43/50] ppc/xive: Check TIMA operations validity
2025-05-12 3:10 ` [PATCH 43/50] ppc/xive: Check TIMA operations validity Nicholas Piggin
@ 2025-05-15 15:47 ` Mike Kowal
2025-05-15 16:12 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:47 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Certain TIMA operations should only be performed when a ring is valid,
> others when the ring is invalid, and they are considered undefined if
> used incorrectly. Add checks for this condition.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 196 +++++++++++++++++++++++++-----------------
> include/hw/ppc/xive.h | 1 +
> 2 files changed, 116 insertions(+), 81 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index aeca66e56e..d5bbd8f4c6 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -25,6 +25,19 @@
> /*
> * XIVE Thread Interrupt Management context
> */
> +bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring)
> +{
> + uint8_t cur_ring;
> +
> + for (cur_ring = ring; cur_ring <= TM_QW3_HV_PHYS;
> + cur_ring += XIVE_TM_RING_SIZE) {
> + if (!(tctx->regs[cur_ring + TM_WORD2] & 0x80)) {
> + return false;
> + }
> + }
> + return true;
> +}
> +
> bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
> {
> switch (ring) {
> @@ -663,6 +676,8 @@ typedef struct XiveTmOp {
> uint8_t page_offset;
> uint32_t op_offset;
> unsigned size;
> + bool hw_ok;
> + bool sw_ok;
> void (*write_handler)(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset,
> uint64_t value, unsigned size);
> @@ -675,34 +690,34 @@ static const XiveTmOp xive_tm_operations[] = {
> * MMIOs below 2K : raw values and special operations without side
> * effects
> */
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive_tm_push_os_ctx,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
> - xive_tm_vt_poll },
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, true, true,
> + xive_tm_set_os_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, true, true,
> + xive_tm_push_os_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> + xive_tm_set_hv_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
> + xive_tm_vt_push, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> + NULL, xive_tm_vt_poll },
>
> /* MMIOs above 2K : special operations with side effects */
> - { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
> - xive_tm_ack_os_reg },
> - { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, NULL,
> - xive_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, NULL,
> - xive_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
> - xive_tm_ack_hv_reg },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
> - xive_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
> - xive_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
> - xive_tm_pull_phys_ctx },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
> + NULL, xive_tm_ack_os_reg },
> + { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, true, false,
> + xive_tm_set_os_pending, NULL },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
> + NULL, xive_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, true, false,
> + NULL, xive_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, true, false,
> + NULL, xive_tm_ack_hv_reg },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, true, false,
> + NULL, xive_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, true, false,
> + NULL, xive_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, true, false,
> + NULL, xive_tm_pull_phys_ctx },
> };
>
> static const XiveTmOp xive2_tm_operations[] = {
> @@ -710,52 +725,48 @@ static const XiveTmOp xive2_tm_operations[] = {
> * MMIOs below 2K : raw values and special operations without side
> * effects
> */
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 8, xive2_tm_push_os_ctx,
> - NULL },
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
> - xive_tm_vt_poll },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, xive2_tm_set_hv_target,
> - NULL },
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, true, true,
> + xive2_tm_set_os_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, true, true,
> + xive2_tm_push_os_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 8, true, true,
> + xive2_tm_push_os_ctx, NULL },
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, true, true,
> + xive_tm_set_os_lgs, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> + xive2_tm_set_hv_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> + NULL, xive_tm_vt_poll },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, true, true,
> + xive2_tm_set_hv_target, NULL },
>
> /* MMIOs above 2K : special operations with side effects */
> - { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
> - xive_tm_ack_os_reg },
> - { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, NULL,
> - xive2_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, NULL,
> - xive2_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, NULL,
> - xive2_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
> - xive_tm_ack_hv_reg },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, NULL,
> - xive2_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
> - xive2_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
> - xive2_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, xive2_tm_pull_os_ctx_ol,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, NULL,
> - xive2_tm_pull_phys_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
> - xive2_tm_pull_phys_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
> - NULL },
> - { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
> - NULL },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
> + NULL, xive_tm_ack_os_reg },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, true, false,
> + NULL, xive2_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
> + NULL, xive2_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, true, false,
> + NULL, xive2_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, true, false,
> + NULL, xive_tm_ack_hv_reg },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, true, false,
> + NULL, xive2_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, true, false,
> + NULL, xive2_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, true, false,
> + NULL, xive2_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, true, false,
> + xive2_tm_pull_os_ctx_ol, NULL },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, true, false,
> + NULL, xive2_tm_pull_phys_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, true, false,
> + NULL, xive2_tm_pull_phys_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, true, false,
> + xive2_tm_pull_phys_ctx_ol, NULL },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, true, false,
> + xive2_tm_ack_os_el, NULL },
> };
>
> static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
> @@ -797,18 +808,28 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size)
> {
> const XiveTmOp *xto;
> + uint8_t ring = offset & TM_RING_OFFSET;
> + bool is_valid = xive_ring_valid(tctx, ring);
> + bool hw_owned = is_valid;
>
> trace_xive_tctx_tm_write(tctx->cs->cpu_index, offset, size, value);
>
> - /*
> - * TODO: check V bit in Q[0-3]W2
> - */
> -
> /*
> * First, check for special operations in the 2K region
> */
> + xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> + if (xto) {
> + if (hw_owned && !xto->hw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + if (!hw_owned && !xto->sw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to SW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + }
> +
> if (offset & TM_SPECIAL_OP) {
> - xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
> "@%"HWADDR_PRIx" size %d\n", offset, size);
> @@ -821,7 +842,6 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> /*
> * Then, for special operations in the region below 2K.
> */
> - xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (xto) {
> xto->write_handler(xptr, tctx, offset, value, size);
> return;
> @@ -830,6 +850,11 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> /*
> * Finish with raw access to the register values
> */
> + if (hw_owned) {
> + /* Store context operations are dangerous when context is valid */
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> xive_tm_raw_write(tctx, offset, value, size);
> }
>
> @@ -837,17 +862,27 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> unsigned size)
> {
> const XiveTmOp *xto;
> + uint8_t ring = offset & TM_RING_OFFSET;
> + bool is_valid = xive_ring_valid(tctx, ring);
> + bool hw_owned = is_valid;
> uint64_t ret;
>
> - /*
> - * TODO: check V bit in Q[0-3]W2
> - */
> + xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> + if (xto) {
> + if (hw_owned && !xto->hw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to HW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + if (!hw_owned && !xto->sw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to SW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + }
>
> /*
> * First, check for special operations in the 2K region
> */
> if (offset & TM_SPECIAL_OP) {
> - xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
> "@%"HWADDR_PRIx" size %d\n", offset, size);
> @@ -860,7 +895,6 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> /*
> * Then, for special operations in the region below 2K.
> */
> - xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (xto) {
> ret = xto->read_handler(xptr, tctx, offset, size);
> goto out;
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 2372d1014b..b7ca8544e4 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -365,6 +365,7 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
> return *((uint32_t *) &ring[TM_WORD2]);
> }
>
> +bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring);
> bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
> bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
> uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
* Re: [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions
2025-05-12 3:10 ` [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions Nicholas Piggin
2025-05-14 14:35 ` Caleb Schlossin
2025-05-14 19:04 ` Mike Kowal
@ 2025-05-15 15:48 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:48 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Rather than functions that return masks to test NSR bits, have functions
> that test those bits directly. This should be no functional change; it
> just makes the code more readable.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 51 +++++++++++++++++++++++++++++++++++--------
> include/hw/ppc/xive.h | 4 ++++
> 2 files changed, 46 insertions(+), 9 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index bb40a69c5b..c2da23f9ea 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -25,6 +25,45 @@
> /*
> * XIVE Thread Interrupt Management context
> */
> +bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
> +{
> + switch (ring) {
> + case TM_QW1_OS:
> + return !!(nsr & TM_QW1_NSR_EO);
> + case TM_QW2_HV_POOL:
> + case TM_QW3_HV_PHYS:
> + return !!(nsr & TM_QW3_NSR_HE);
> + default:
> + g_assert_not_reached();
> + }
> +}
> +
> +bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr)
> +{
> + if ((nsr & TM_NSR_GRP_LVL) > 0) {
> + g_assert(xive_nsr_indicates_exception(ring, nsr));
> + return true;
> + }
> + return false;
> +}
> +
> +uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr)
> +{
> + /* NSR determines if pool/phys ring is for phys or pool interrupt */
> + if ((ring == TM_QW3_HV_PHYS) || (ring == TM_QW2_HV_POOL)) {
> + uint8_t he = (nsr & TM_QW3_NSR_HE) >> 6;
> +
> + if (he == TM_QW3_NSR_HE_PHYS) {
> + return TM_QW3_HV_PHYS;
> + } else if (he == TM_QW3_NSR_HE_POOL) {
> + return TM_QW2_HV_POOL;
> + } else {
> + /* Don't support LSI mode */
> + g_assert_not_reached();
> + }
> + }
> + return ring;
> +}
>
> static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> {
> @@ -48,18 +87,12 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>
> qemu_irq_lower(xive_tctx_output(tctx, ring));
>
> - if (regs[TM_NSR] != 0) {
> + if (xive_nsr_indicates_exception(ring, nsr)) {
> uint8_t cppr = regs[TM_PIPR];
> uint8_t alt_ring;
> uint8_t *alt_regs;
>
> - /* POOL interrupt uses IPB in QW2, POOL ring */
> - if ((ring == TM_QW3_HV_PHYS) &&
> - ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
> - alt_ring = TM_QW2_HV_POOL;
> - } else {
> - alt_ring = ring;
> - }
> + alt_ring = xive_nsr_exception_ring(ring, nsr);
> alt_regs = &tctx->regs[alt_ring];
>
> regs[TM_CPPR] = cppr;
> @@ -68,7 +101,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
> + if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 538f438681..28f0f1b79a 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -365,6 +365,10 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
> return *((uint32_t *) &ring[TM_WORD2]);
> }
>
> +bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
> +bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
> +uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
> +
> /*
> * XIVE Router
> */
* Re: [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op
2025-05-12 3:10 ` [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op Nicholas Piggin
@ 2025-05-15 15:48 ` Mike Kowal
2025-05-15 16:13 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:48 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Implement pool context push TIMA op.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 4 ++++
> hw/intc/xive2.c | 50 ++++++++++++++++++++++++++++--------------
> include/hw/ppc/xive2.h | 2 ++
> 3 files changed, 39 insertions(+), 17 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index d5bbd8f4c6..979031a587 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -733,6 +733,10 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive2_tm_push_os_ctx, NULL },
> { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, true, true,
> xive_tm_set_os_lgs, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 4, true, true,
> + xive2_tm_push_pool_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
> + xive2_tm_push_pool_ctx, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> xive2_tm_set_hv_cppr, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 917ecbaae4..21cd07df68 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -583,6 +583,7 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 1);
> }
>
> +/* POOL cam is the same as OS cam encoding */
> static void xive2_cam_decode(uint32_t cam, uint8_t *nvp_blk,
> uint32_t *nvp_idx, bool *valid, bool *hw)
> {
> @@ -940,10 +941,11 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> + uint8_t ring,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - uint8_t *regs = &tctx->regs[TM_QW1_OS];
> + uint8_t *regs = &tctx->regs[ring];
> uint8_t ipb;
> Xive2Nvp nvp;
>
> @@ -965,7 +967,7 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
>
> /* Automatically restore thread context registers */
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
> - xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
> + xive2_tctx_restore_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx, &nvp);
> }
>
> ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
> @@ -976,48 +978,62 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
>
> - xive2_tctx_process_pending(tctx, TM_QW1_OS);
> + xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> + TM_QW3_HV_PHYS : ring);
> }
>
> /*
> - * Updating the OS CAM line can trigger a resend of interrupt
> + * Updating the ring CAM line can trigger a resend of interrupt
> */
> -void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> - hwaddr offset, uint64_t value, unsigned size)
> +static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size,
> + uint8_t ring)
> {
> uint32_t cam;
> - uint32_t qw1w2;
> - uint64_t qw1dw1;
> + uint32_t w2;
> + uint64_t dw1;
> uint8_t nvp_blk;
> uint32_t nvp_idx;
> - bool vo;
> + bool v;
> bool do_restore;
>
>      /* First update the thread context */
> switch (size) {
> case 4:
> cam = value;
> - qw1w2 = cpu_to_be32(cam);
> - memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1w2, 4);
> + w2 = cpu_to_be32(cam);
> + memcpy(&tctx->regs[ring + TM_WORD2], &w2, 4);
> break;
> case 8:
> cam = value >> 32;
> - qw1dw1 = cpu_to_be64(value);
> - memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1dw1, 8);
> + dw1 = cpu_to_be64(value);
> + memcpy(&tctx->regs[ring + TM_WORD2], &dw1, 8);
> break;
> default:
> g_assert_not_reached();
> }
>
> - xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &vo, &do_restore);
> + xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &v, &do_restore);
>
> /* Check the interrupt pending bits */
> - if (vo) {
> - xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, nvp_blk, nvp_idx,
> - do_restore);
> + if (v) {
> + xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
> + nvp_blk, nvp_idx, do_restore);
> }
> }
>
> +void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW1_OS);
> +}
> +
> +void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
> +}
> +
> /* returns -1 if ring is invalid, but still populates block and index */
> static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> uint8_t *nvp_blk, uint32_t *nvp_idx)
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index a91b99057c..c1ab06a55a 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -140,6 +140,8 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
* Re: [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts
2025-05-12 3:10 ` [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:01 ` Mike Kowal
@ 2025-05-15 15:49 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:49 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> This improves the implementation of pulling pool and phys contexts in
> XIVE1 by following the OS pulling code more closely.
>
> In particular, the old ring data is returned rather than the modified
> data, and irq signals are reset on pull.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 58 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index c2da23f9ea..1a94642c62 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -241,25 +241,75 @@ static uint64_t xive_tm_ack_hv_reg(XivePresenter *xptr, XiveTCTX *tctx,
> return xive_tctx_accept(tctx, TM_QW3_HV_PHYS);
> }
>
> +static void xive_pool_cam_decode(uint32_t cam, uint8_t *nvt_blk,
> + uint32_t *nvt_idx, bool *vp)
> +{
> + if (nvt_blk) {
> + *nvt_blk = xive_nvt_blk(cam);
> + }
> + if (nvt_idx) {
> + *nvt_idx = xive_nvt_idx(cam);
> + }
> + if (vp) {
> + *vp = !!(cam & TM_QW2W2_VP);
> + }
> +}
> +
> +static uint32_t xive_tctx_get_pool_cam(XiveTCTX *tctx, uint8_t *nvt_blk,
> + uint32_t *nvt_idx, bool *vp)
> +{
> + uint32_t qw2w2 = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
> + uint32_t cam = be32_to_cpu(qw2w2);
> +
> + xive_pool_cam_decode(cam, nvt_blk, nvt_idx, vp);
> + return qw2w2;
> +}
> +
> +static void xive_tctx_set_pool_cam(XiveTCTX *tctx, uint32_t qw2w2)
> +{
> + memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
> +}
> +
> static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size)
> {
> - uint32_t qw2w2_prev = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
> uint32_t qw2w2;
> + uint32_t qw2w2_new;
> + uint8_t nvt_blk;
> + uint32_t nvt_idx;
> + bool vp;
>
> - qw2w2 = xive_set_field32(TM_QW2W2_VP, qw2w2_prev, 0);
> - memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
> + qw2w2 = xive_tctx_get_pool_cam(tctx, &nvt_blk, &nvt_idx, &vp);
> +
> + if (!vp) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid POOL NVT %x/%x !?\n",
> + nvt_blk, nvt_idx);
> + }
> +
> + /* Invalidate CAM line */
> + qw2w2_new = xive_set_field32(TM_QW2W2_VP, qw2w2, 0);
> + xive_tctx_set_pool_cam(tctx, qw2w2_new);
> +
> + xive_tctx_reset_signal(tctx, TM_QW1_OS);
> + xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
> return qw2w2;
> }
>
> static uint64_t xive_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size)
> {
> - uint8_t qw3b8_prev = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> - uint8_t qw3b8;
> + uint8_t qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> + uint8_t qw3b8_new;
> +
> + qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
> + if (!(qw3b8 & TM_QW3B8_VT)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid PHYS thread!?\n");
> + }
> + qw3b8_new = qw3b8 & ~TM_QW3B8_VT;
> + tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8_new;
>
> - qw3b8 = qw3b8_prev & ~TM_QW3B8_VT;
> - tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8;
> + xive_tctx_reset_signal(tctx, TM_QW1_OS);
> + xive_tctx_reset_signal(tctx, TM_QW3_HV_PHYS);
> return qw3b8;
> }
>
> @@ -489,7 +539,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> qw1w2 = xive_tctx_get_os_cam(tctx, &nvt_blk, &nvt_idx, &vo);
>
> if (!vo) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVT %x/%x !?\n",
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid OS NVT %x/%x !?\n",
> nvt_blk, nvt_idx);
> }
>
* Re: [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op
2025-05-12 3:10 ` [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op Nicholas Piggin
@ 2025-05-15 15:49 ` Mike Kowal
2025-05-15 16:14 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:49 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> xive2 must take into account redistribution of group interrupts if
> the VP-directed priority exceeds the group interrupt priority after
> this operation. The xive1 code is not group-aware, so implement this
> for xive2.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 2 ++
> hw/intc/xive2.c | 28 ++++++++++++++++++++++++++++
> include/hw/ppc/xive2.h | 2 ++
> 3 files changed, 32 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 979031a587..dc64edf13d 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> /* MMIOs above 2K : special operations with side effects */
> { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
> NULL, xive_tm_ack_os_reg },
> + { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, true, false,
> + xive2_tm_set_os_pending, NULL },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, true, false,
> NULL, xive2_tm_pull_os_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 392ac6077e..de1ccad685 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1323,6 +1323,34 @@ void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
> }
>
> +/*
> + * Adjust the IPB to allow a CPU to process event queues of other
> + * priorities during one physical interrupt cycle.
> + */
> +void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint8_t ring = TM_QW1_OS;
> + uint8_t *regs = &tctx->regs[ring];
> + uint8_t priority = value & 0xff;
> +
> + /*
> + * XXX: should this simply set a bit in IPB and wait for it to be picked
> + * up next cycle, or is it supposed to present it now? We implement the
> + * latter here.
> + */
> + regs[TM_IPB] |= xive_priority_to_ipb(priority);
> + if (xive_ipb_to_pipr(regs[TM_IPB]) >= regs[TM_PIPR]) {
> + return;
> + }
> + if (xive_nsr_indicates_group_exception(ring, regs[TM_NSR])) {
> + xive2_redistribute(xrtr, tctx, ring);
> + }
> +
> + xive_tctx_pipr_present(tctx, ring, priority, 0);
> +}
> +
> static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
> {
> uint8_t *regs = &tctx->regs[ring];
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index c1ab06a55a..45266c2a8b 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -130,6 +130,8 @@ void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 18/50] pnv/xive2: Print value in invalid register write logging
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:09 ` Mike Kowal
@ 2025-05-15 15:50 ` Miles Glenn
2025-05-16 0:15 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:50 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> This can make it easier to see what the target system is trying to
> do.
>
> [npiggin: split from larger patch]
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 24 ++++++++++++++++--------
> 1 file changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d7ca97ecbb..fcf5b2e75c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1197,7 +1197,8 @@ static void pnv_xive2_ic_cq_write(void *opaque, hwaddr offset,
> case CQ_FIRMASK_OR: /* FIR error reporting */
> break;
> default:
> - xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx, offset);
> + xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1495,7 +1496,8 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "VC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "VC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1703,7 +1705,8 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "PC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "PC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1790,7 +1793,8 @@ static void pnv_xive2_ic_tctxt_write(void *opaque, hwaddr offset,
> xive->tctxt_regs[reg] = val;
> break;
> default:
> - xive2_error(xive, "TCTXT: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "TCTXT: invalid write @0x%"HWADDR_PRIx
> + " data 0x%"PRIx64, offset, val);
> return;
> }
> }
> @@ -1861,7 +1865,8 @@ static void pnv_xive2_xscom_write(void *opaque, hwaddr offset,
> pnv_xive2_ic_tctxt_write(opaque, mmio_offset, val, size);
> break;
> default:
> - xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1929,7 +1934,8 @@ static void pnv_xive2_ic_notify_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1971,7 +1977,8 @@ static void pnv_xive2_ic_lsi_write(void *opaque, hwaddr offset,
> {
> PnvXive2 *xive = PNV_XIVE2(opaque);
>
> - xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> }
>
> static const MemoryRegionOps pnv_xive2_ic_lsi_ops = {
> @@ -2074,7 +2081,8 @@ static void pnv_xive2_ic_sync_write(void *opaque, hwaddr offset,
> inject_type = PNV_XIVE2_QUEUE_NXC_ST_RMT_CI;
> break;
> default:
> - xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
* Re: [PATCH 47/50] ppc/xive2: Implement POOL LGS push TIMA op
2025-05-12 3:10 ` [PATCH 47/50] ppc/xive2: Implement POOL LGS push " Nicholas Piggin
@ 2025-05-15 15:50 ` Mike Kowal
2025-05-15 16:15 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:50 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Implement set LGS for the POOL ring.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index dc64edf13d..807a1c1c34 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -532,6 +532,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> xive_tctx_set_lgs(tctx, TM_QW1_OS, value & 0xff);
> }
>
> +static void xive_tm_set_pool_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive_tctx_set_lgs(tctx, TM_QW2_HV_POOL, value & 0xff);
> +}
> +
> /*
> * Adjust the PIPR to allow a CPU to process event queues of other
> * priorities during one physical interrupt cycle.
> @@ -737,6 +743,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive2_tm_push_pool_ctx, NULL },
> { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
> xive2_tm_push_pool_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_LGS, 1, true, true,
> + xive_tm_set_pool_lgs, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> xive2_tm_set_hv_cppr, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
* Re: [PATCH 48/50] ppc/xive2: Implement PHYS ring VP push TIMA op
2025-05-12 3:10 ` [PATCH 48/50] ppc/xive2: Implement PHYS ring VP " Nicholas Piggin
@ 2025-05-15 15:50 ` Mike Kowal
2025-05-15 16:16 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:50 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> Implement the phys (aka hard) VP push. PowerVM uses this operation.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 2 ++
> hw/intc/xive2.c | 11 +++++++++++
> include/hw/ppc/xive2.h | 2 ++
> 3 files changed, 15 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 807a1c1c34..69118999e6 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive_tm_set_pool_lgs, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> xive2_tm_set_hv_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
> + xive2_tm_push_phys_ctx, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> NULL, xive_tm_vt_poll },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, true, true,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index de1ccad685..a9b188b909 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1005,6 +1005,11 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> /* First update the thread context */
> switch (size) {
> + case 1:
> + tctx->regs[ring + TM_WORD2] = value & 0xff;
> + cam = xive2_tctx_hw_cam_line(xptr, tctx);
> + cam |= ((value & 0xc0) << 24); /* V and H bits */
> + break;
> case 4:
> cam = value;
> w2 = cpu_to_be32(cam);
> @@ -1040,6 +1045,12 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
> }
>
> +void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
> +}
> +
> /* returns -1 if ring is invalid, but still populates block and index */
> static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> uint8_t *nvp_blk, uint32_t *nvp_idx)
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 45266c2a8b..f4437e2c79 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -146,6 +146,8 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> +void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
* Re: [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
2025-05-12 3:10 ` [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
2025-05-14 19:10 ` Mike Kowal
@ 2025-05-15 15:51 ` Miles Glenn
2 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:51 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Firmware expects to read back the WATCH_FULL bit from the VC_ENDC_WATCH_SPEC
> register, so don't clear it on read.
>
> Don't bother clearing the reads-as-zero CONFLICT bit because it's masked
> at write already.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/pnv_xive2.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index fcf5b2e75c..3c26cd6b77 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1329,7 +1329,6 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH2_SPEC:
> case VC_ENDC_WATCH3_SPEC:
> watch_engine = (offset - VC_ENDC_WATCH0_SPEC) >> 6;
> - xive->vc_regs[reg] &= ~(VC_ENDC_WATCH_FULL | VC_ENDC_WATCH_CONFLICT);
> pnv_xive2_endc_cache_watch_release(xive, watch_engine);
> val = xive->vc_regs[reg];
> break;
* Re: [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
2025-05-14 19:11 ` Mike Kowal
@ 2025-05-15 15:52 ` Miles Glenn
2025-05-16 0:18 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:52 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> Writes to the Flush Control registers were logged as invalid
> when they are allowed. Clearing the unsupported want_cache_disable
> feature is supported, so don't log an error in that case.
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 36 ++++++++++++++++++++++++++++++++----
> 1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 3c26cd6b77..c9374f0eee 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1411,7 +1411,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * ESB cache updates (not modeled)
> */
> - /* case VC_ESBC_FLUSH_CTRL: */
> + case VC_ESBC_FLUSH_CTRL:
> + if (val & VC_ESBC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ESBC_FLUSH_POLL:
> xive->vc_regs[VC_ESBC_FLUSH_CTRL >> 3] |= VC_ESBC_FLUSH_CTRL_POLL_VALID;
> /* ESB update */
> @@ -1427,7 +1434,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * EAS cache updates (not modeled)
> */
> - /* case VC_EASC_FLUSH_CTRL: */
> + case VC_EASC_FLUSH_CTRL:
> + if (val & VC_EASC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_EASC_FLUSH_POLL:
> xive->vc_regs[VC_EASC_FLUSH_CTRL >> 3] |= VC_EASC_FLUSH_CTRL_POLL_VALID;
> /* EAS update */
> @@ -1466,7 +1480,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
>
> - /* case VC_ENDC_FLUSH_CTRL: */
> + case VC_ENDC_FLUSH_CTRL:
> + if (val & VC_ENDC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ENDC_FLUSH_POLL:
> xive->vc_regs[VC_ENDC_FLUSH_CTRL >> 3] |= VC_ENDC_FLUSH_CTRL_POLL_VALID;
> break;
> @@ -1687,7 +1708,14 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> pnv_xive2_nxc_update(xive, watch_engine);
> break;
>
> - /* case PC_NXC_FLUSH_CTRL: */
> + case PC_NXC_FLUSH_CTRL:
> + if (val & PC_NXC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case PC_NXC_FLUSH_POLL:
> xive->pc_regs[PC_NXC_FLUSH_CTRL >> 3] |= PC_NXC_FLUSH_CTRL_POLL_VALID;
> break;
* Re: [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present
2025-05-12 3:10 ` [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present Nicholas Piggin
2025-05-14 19:54 ` Mike Kowal
@ 2025-05-15 15:53 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:53 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Have the match_nvt method only perform a TCTX match without presenting
> the interrupt; the caller now presents. This has no functional change,
> but allows for more complicated presentation logic after matching.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/pnv_xive.c | 16 +++++++-------
> hw/intc/pnv_xive2.c | 16 +++++++-------
> hw/intc/spapr_xive.c | 18 +++++++--------
> hw/intc/xive.c | 51 +++++++++++++++----------------------------
> hw/intc/xive2.c | 31 +++++++++++++-------------
> hw/ppc/pnv.c | 48 ++++++++++++++--------------------------
> hw/ppc/spapr.c | 21 +++++++-----------
> include/hw/ppc/xive.h | 27 +++++++++++++----------
> 8 files changed, 97 insertions(+), 131 deletions(-)
>
> diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
> index ccbe95a58e..cdde8d0814 100644
> --- a/hw/intc/pnv_xive.c
> +++ b/hw/intc/pnv_xive.c
> @@ -470,14 +470,13 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
> return xive->regs[reg >> 3] & PPC_BIT(bit);
> }
>
> -static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> PnvXive *xive = PNV_XIVE(xptr);
> PnvChip *chip = xive->chip;
> - int count = 0;
> int i, j;
>
> for (i = 0; i < chip->nr_cores; i++) {
> @@ -510,17 +509,18 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
> "thread context NVT %x/%x\n",
> nvt_blk, nvt_idx);
> - return -1;
> + match->count++;
> + continue;
> }
>
> match->ring = ring;
> match->tctx = tctx;
> - count++;
> + match->count++;
> }
> }
> }
>
> - return count;
> + return !!match->count;
> }
>
> static uint32_t pnv_xive_presenter_get_config(XivePresenter *xptr)
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 96b8851b7e..59b95e5219 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -640,14 +640,13 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
> return xive->tctxt_regs[reg >> 3] & PPC_BIT(bit);
> }
>
> -static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> PnvXive2 *xive = PNV_XIVE2(xptr);
> PnvChip *chip = xive->chip;
> - int count = 0;
> int i, j;
> bool gen1_tima_os =
> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> @@ -692,7 +691,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> "thread context NVT %x/%x\n",
> nvt_blk, nvt_idx);
> /* Should set a FIR if we ever model it */
> - return -1;
> + match->count++;
> + continue;
> }
> /*
> * For a group notification, we need to know if the
> @@ -717,13 +717,13 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> }
> }
> }
> - count++;
> + match->count++;
> }
> }
> }
> }
>
> - return count;
> + return !!match->count;
> }
>
> static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
> diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
> index ce734b03ab..a7475d2f21 100644
> --- a/hw/intc/spapr_xive.c
> +++ b/hw/intc/spapr_xive.c
> @@ -428,14 +428,13 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
> g_assert_not_reached();
> }
>
> -static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore,
> - uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore,
> + uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> CPUState *cs;
> - int count = 0;
>
> CPU_FOREACH(cs) {
> PowerPCCPU *cpu = POWERPC_CPU(cs);
> @@ -463,16 +462,17 @@ static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
> if (match->tctx) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a thread "
> "context NVT %x/%x\n", nvt_blk, nvt_idx);
> - return -1;
> + match->count++;
> + continue;
> }
>
> match->ring = ring;
> match->tctx = tctx;
> - count++;
> + match->count++;
> }
> }
>
> - return count;
> + return !!match->count;
> }
>
> static uint32_t spapr_xive_presenter_get_config(XivePresenter *xptr)
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index ad30476c17..27b5a21371 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1762,8 +1762,8 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
> return 1U << (first_zero + 1);
> }
>
> -static uint8_t xive_get_group_level(bool crowd, bool ignore,
> - uint32_t nvp_blk, uint32_t nvp_index)
> +uint8_t xive_get_group_level(bool crowd, bool ignore,
> + uint32_t nvp_blk, uint32_t nvp_index)
> {
> int first_zero;
> uint8_t level;
> @@ -1881,15 +1881,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> * This is our simple Xive Presenter Engine model. It is merged in the
> * Router as it does not require an extra object.
> */
> -bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> +bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, bool *precluded)
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
> - XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
> - uint8_t group_level;
> - int count;
> +
> + memset(match, 0, sizeof(*match));
>
> /*
> * Ask the machine to scan the interrupt controllers for a match.
> @@ -1914,22 +1913,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> * a new command to the presenters (the equivalent of the "assign"
> * power bus command in the documented full notify sequence.
> */
> - count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> - priority, logic_serv, &match);
> - if (count < 0) {
> - return false;
> - }
> -
> - /* handle CPU exception delivery */
> - if (count) {
> - group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
> - trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
> - xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> - } else {
> - *precluded = match.precluded;
> - }
> -
> - return !!count;
> + return xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> + priority, logic_serv, match);
> }
>
> /*
> @@ -1966,7 +1951,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> uint8_t nvt_blk;
> uint32_t nvt_idx;
> XiveNVT nvt;
> - bool found, precluded;
> + XiveTCTXMatch match;
>
> uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
> uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
> @@ -2046,16 +2031,16 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> return;
> }
>
> - found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
> - false /* crowd */,
> - xive_get_field32(END_W7_F0_IGNORE, end.w7),
> - priority,
> - xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> - &precluded);
> - /* we don't support VP-group notification on P9, so precluded is not used */
> /* TODO: Auto EOI. */
> -
> - if (found) {
> + /* we don't support VP-group notification on P9, so precluded is not used */
> + if (xive_presenter_match(xrtr->xfb, format, nvt_blk, nvt_idx,
> + false /* crowd */,
> + xive_get_field32(END_W7_F0_IGNORE, end.w7),
> + priority,
> + xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> + &match)) {
> + trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
> + xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
> return;
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index ac94193464..6e136ad2e2 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1559,7 +1559,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> Xive2End end;
> uint8_t priority;
> uint8_t format;
> - bool found, precluded;
> + XiveTCTXMatch match;
> + bool crowd, cam_ignore;
> uint8_t nvx_blk;
> uint32_t nvx_idx;
>
> @@ -1629,16 +1630,19 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> */
> nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
> nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
> -
> - found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx,
> - xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
> - priority,
> - xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> - &precluded);
> + crowd = xive2_end_is_crowd(&end);
> + cam_ignore = xive2_end_is_ignore(&end);
>
> /* TODO: Auto EOI. */
> -
> - if (found) {
> + if (xive_presenter_match(xrtr->xfb, format, nvx_blk, nvx_idx,
> + crowd, cam_ignore, priority,
> + xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> + &match)) {
> + uint8_t group_level;
> +
> + group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> + trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
> + xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> return;
> }
>
> @@ -1656,7 +1660,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> return;
> }
>
> - if (!xive2_end_is_ignore(&end)) {
> + if (!cam_ignore) {
> uint8_t ipb;
> Xive2Nvp nvp;
>
> @@ -1685,9 +1689,6 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> } else {
> Xive2Nvgc nvgc;
> uint32_t backlog;
> - bool crowd;
> -
> - crowd = xive2_end_is_crowd(&end);
>
> /*
> * For groups and crowds, the per-priority backlog
> @@ -1719,9 +1720,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> if (backlog == 1) {
> XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
> xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx,
> - xive2_end_is_crowd(&end),
> - xive2_end_is_ignore(&end),
> - priority);
> + crowd, cam_ignore, priority);
>
> if (!xive2_end_is_precluded_escalation(&end)) {
> /*
> diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
> index deb29a6389..0c17846b38 100644
> --- a/hw/ppc/pnv.c
> +++ b/hw/ppc/pnv.c
> @@ -2619,62 +2619,46 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
> }
> }
>
> -static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv,
> - XiveTCTXMatch *match)
> +static bool pnv_match_nvt(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv,
> + XiveTCTXMatch *match)
> {
> PnvMachineState *pnv = PNV_MACHINE(xfb);
> - int total_count = 0;
> int i;
>
> for (i = 0; i < pnv->num_chips; i++) {
> Pnv9Chip *chip9 = PNV9_CHIP(pnv->chips[i]);
> XivePresenter *xptr = XIVE_PRESENTER(&chip9->xive);
> XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> - int count;
>
> - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> - cam_ignore, priority, logic_serv, match);
> -
> - if (count < 0) {
> - return count;
> - }
> -
> - total_count += count;
> + xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> + cam_ignore, priority, logic_serv, match);
> }
>
> - return total_count;
> + return !!match->count;
> }
>
> -static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv,
> - XiveTCTXMatch *match)
> +static bool pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv,
> + XiveTCTXMatch *match)
> {
> PnvMachineState *pnv = PNV_MACHINE(xfb);
> - int total_count = 0;
> int i;
>
> for (i = 0; i < pnv->num_chips; i++) {
> Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
> XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> - int count;
> -
> - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> - cam_ignore, priority, logic_serv, match);
> -
> - if (count < 0) {
> - return count;
> - }
>
> - total_count += count;
> + xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
> + cam_ignore, priority, logic_serv, match);
> }
>
> - return total_count;
> + return !!match->count;
> }
>
> static int pnv10_xive_broadcast(XiveFabric *xfb,
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index b0a0f8c689..93574d2a63 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -4468,21 +4468,14 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
> /*
> * This is a XIVE only operation
> */
> -static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match)
> +static bool spapr_match_nvt(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match)
> {
> SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
> XivePresenter *xptr = XIVE_PRESENTER(spapr->active_intc);
> XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
> - int count;
> -
> - count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> - priority, logic_serv, match);
> - if (count < 0) {
> - return count;
> - }
>
> /*
> * When we implement the save and restore of the thread interrupt
> @@ -4493,12 +4486,14 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
> * Until this is done, the sPAPR machine should find at least one
> * matching context always.
> */
> - if (count == 0) {
> + if (!xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
> + priority, logic_serv, match)) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVT %x/%x is not dispatched\n",
> nvt_blk, nvt_idx);
> + return false;
> }
>
> - return count;
> + return true;
> }
>
> int spapr_get_vcpu_id(PowerPCCPU *cpu)
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 46d05d74fb..8152a9df3d 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -425,6 +425,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
>
> typedef struct XiveTCTXMatch {
> XiveTCTX *tctx;
> + int count;
> uint8_t ring;
> bool precluded;
> } XiveTCTXMatch;
> @@ -440,10 +441,10 @@ DECLARE_CLASS_CHECKERS(XivePresenterClass, XIVE_PRESENTER,
>
> struct XivePresenterClass {
> InterfaceClass parent;
> - int (*match_nvt)(XivePresenter *xptr, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match);
> + bool (*match_nvt)(XivePresenter *xptr, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match);
> bool (*in_kernel)(const XivePresenter *xptr);
> uint32_t (*get_config)(XivePresenter *xptr);
> int (*broadcast)(XivePresenter *xptr,
> @@ -455,12 +456,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t format,
> uint8_t nvt_blk, uint32_t nvt_idx,
> bool cam_ignore, uint32_t logic_serv);
> -bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, bool *precluded);
> +bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match);
>
> uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
> +uint8_t xive_get_group_level(bool crowd, bool ignore,
> + uint32_t nvp_blk, uint32_t nvp_index);
>
> /*
> * XIVE Fabric (Interface between Interrupt Controller and Machine)
> @@ -475,10 +478,10 @@ DECLARE_CLASS_CHECKERS(XiveFabricClass, XIVE_FABRIC,
>
> struct XiveFabricClass {
> InterfaceClass parent;
> - int (*match_nvt)(XiveFabric *xfb, uint8_t format,
> - uint8_t nvt_blk, uint32_t nvt_idx,
> - bool crowd, bool cam_ignore, uint8_t priority,
> - uint32_t logic_serv, XiveTCTXMatch *match);
> + bool (*match_nvt)(XiveFabric *xfb, uint8_t format,
> + uint8_t nvt_blk, uint32_t nvt_idx,
> + bool crowd, bool cam_ignore, uint8_t priority,
> + uint32_t logic_serv, XiveTCTXMatch *match);
> int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
> bool crowd, bool cam_ignore, uint8_t priority);
> };
* Re: [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push
2025-05-12 3:10 ` [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push Nicholas Piggin
@ 2025-05-15 15:54 ` Mike Kowal
2025-05-15 16:17 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:54 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> When pushing a context, the lower-level context becomes valid again if
> it had V=1, and so on. Iterate over the lower-level contexts and send
> them pending interrupts if they become enabled.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 36 ++++++++++++++++++++++++++++--------
> 1 file changed, 28 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 53e90b8178..ded003fa87 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -995,6 +995,12 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> bool v;
> bool do_restore;
>
> + if (xive_ring_valid(tctx, ring)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Attempt to push VP to enabled"
> + " ring 0x%02x\n", ring);
> + return;
> + }
> +
> /* First update the thread context */
> switch (size) {
> case 1:
> @@ -1021,19 +1027,32 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> /* Check the interrupt pending bits */
> if (v) {
> Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> - uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> - uint8_t nsr = sig_regs[TM_NSR];
> + uint8_t cur_ring;
>
> xive2_tctx_restore_nvp(xrtr, tctx, ring,
> nvp_blk, nvp_idx, do_restore);
>
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - /* redistribute precluded active grp interrupt */
> - g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
> - xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> + for (cur_ring = TM_QW1_OS; cur_ring <= ring;
> + cur_ring += XIVE_TM_RING_SIZE) {
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, cur_ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + if (!xive_ring_valid(tctx, cur_ring)) {
> + continue;
> + }
> +
> + if (cur_ring == TM_QW2_HV_POOL) {
> + if (xive_nsr_indicates_exception(cur_ring, nsr)) {
> + g_assert(xive_nsr_exception_ring(cur_ring, nsr) ==
> + TM_QW3_HV_PHYS);
> + xive2_redistribute(xrtr, tctx,
> + xive_nsr_exception_ring(ring, nsr));
> + }
> + xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
> + break;
> + }
> + xive2_tctx_process_pending(tctx, cur_ring);
> }
> - xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> - TM_QW3_HV_PHYS : ring);
> }
> }
>
> @@ -1159,6 +1178,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> int rc;
>
> g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
> + g_assert(sig_regs[TM_WORD2] & 0x80);
> g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
>
> /*
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
2025-05-12 3:10 ` [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Nicholas Piggin
2025-05-14 19:55 ` Mike Kowal
@ 2025-05-15 15:54 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:54 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> A group interrupt that is preempted by a higher-priority interrupt
> delivery must be redistributed, otherwise it would be lost.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 14 ++++++++++++--
> 1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 6e136ad2e2..cae4092198 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1638,11 +1638,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> crowd, cam_ignore, priority,
> xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
> &match)) {
> + XiveTCTX *tctx = match.tctx;
> + uint8_t ring = match.ring;
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t nsr = aregs[TM_NSR];
> uint8_t group_level;
>
> + if (priority < aregs[TM_PIPR] &&
> + xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> + xive2_redistribute(xrtr, tctx, alt_ring);
> + }
> +
> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> - trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
> - xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
> + trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
> + xive_tctx_pipr_update(tctx, ring, priority, group_level);
> return;
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
2025-05-12 3:10 ` [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Nicholas Piggin
2025-05-14 20:10 ` Mike Kowal
@ 2025-05-15 15:55 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:55 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> xive_tctx_pipr_update() is used for multiple things. In an effort
> to make things simpler and less overloaded, split out the function
> that is used to present a new interrupt to the tctx.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 +++++++-
> hw/intc/xive2.c | 2 +-
> include/hw/ppc/xive.h | 2 ++
> 3 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 27b5a21371..bf4c0634ca 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -225,6 +225,12 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level)
> +{
> + xive_tctx_pipr_update(tctx, ring, priority, group_level);
> +}
> +
> /*
> * XIVE Thread Interrupt Management Area (TIMA)
> */
> @@ -2040,7 +2046,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
> xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
> &match)) {
> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
> - xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
> + xive_tctx_pipr_present(match.tctx, match.ring, priority, 0);
> return;
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index cae4092198..f91109b84a 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1652,7 +1652,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
>
> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
> - xive_tctx_pipr_update(tctx, ring, priority, group_level);
> + xive_tctx_pipr_present(tctx, ring, priority, group_level);
> return;
> }
>
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 8152a9df3d..0d6b11e818 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -562,6 +562,8 @@ void xive_tctx_reset(XiveTCTX *tctx);
> void xive_tctx_destroy(XiveTCTX *tctx);
> void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level);
> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
> uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP
2025-05-12 3:10 ` [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Nicholas Piggin
2025-05-15 15:21 ` Mike Kowal
@ 2025-05-15 15:55 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:55 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> xive_tctx_pipr_present(), as implemented with xive_tctx_pipr_update(),
> causes a VP-directed (group==0) interrupt to be presented in PIPR and
> NSR despite having a lower priority than the currently presented group
> interrupt.
>
> This must not happen. The IPB bit should record the low priority VP
> interrupt, but PIPR and NSR must not present the lower priority
> interrupt.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 18 +++++++++++++++++-
> 1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index bf4c0634ca..25f6c69c44 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -228,7 +228,23 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> {
> - xive_tctx_pipr_update(tctx, ring, priority, group_level);
> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *regs = &tctx->regs[ring];
> + uint8_t pipr = xive_priority_to_pipr(priority);
> +
> + if (group_level == 0) {
> + regs[TM_IPB] |= xive_priority_to_ipb(priority);
> + if (pipr >= aregs[TM_PIPR]) {
> + /* VP interrupts can come here with lower priority than PIPR */
> + return;
> + }
> + }
> + g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
> + g_assert(pipr < aregs[TM_PIPR]);
> + aregs[TM_PIPR] = pipr;
> + xive_tctx_notify(tctx, ring, group_level);
> }
>
> /*
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 32/50] ppc/xive: Split xive recompute from IPB function
2025-05-12 3:10 ` [PATCH 32/50] ppc/xive: Split xive recompute from IPB function Nicholas Piggin
2025-05-14 20:42 ` Mike Kowal
@ 2025-05-15 15:56 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:56 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Further split xive_tctx_pipr_update() by splitting out a new function
> that is used to re-compute the PIPR from the IPB. This is generally
> only used with XIVE1, because group interrupts require more logic.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 25 ++++++++++++++++++++++---
> 1 file changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 25f6c69c44..5ff1b8f024 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -225,6 +225,20 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> +static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> +{
> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> + uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /* Does not support a presented group interrupt */
> + g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
> +
> + aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + xive_tctx_notify(tctx, ring, 0);
> +}
> +
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> {
> @@ -517,7 +531,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size)
> {
> - xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
> + uint8_t ring = TM_QW1_OS;
> + uint8_t *regs = &tctx->regs[ring];
> +
> + /* XXX: how should this work exactly? */
> + regs[TM_IPB] |= xive_priority_to_ipb(value & 0xff);
> + xive_tctx_pipr_recompute_from_ipb(tctx, ring);
> }
>
> static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
> @@ -601,14 +620,14 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> }
>
> /*
> - * Always call xive_tctx_pipr_update(). Even if there were no
> + * Always call xive_tctx_recompute_from_ipb(). Even if there were no
> * escalation triggered, there could be a pending interrupt which
> * was saved when the context was pulled and that we need to take
> * into account by recalculating the PIPR (which is not
> * saved/restored).
> * It will also raise the External interrupt signal if needed.
> */
> - xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
> }
>
> /*
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp
2025-05-12 3:10 ` [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp Nicholas Piggin
@ 2025-05-15 15:57 ` Mike Kowal
2025-05-15 16:16 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-15 15:57 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
> This is needed by the next patch which will re-send on all lower
> rings when pushing a context.
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Thanks, MAK
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 24 ++++++++++++------------
> hw/intc/xive2.c | 28 ++++++++++++++++------------
> 2 files changed, 28 insertions(+), 24 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 69118999e6..9ade9ec6c1 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -606,7 +606,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> return qw1w2;
> }
>
> -static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> +static void xive_tctx_restore_nvp(XiveRouter *xrtr, XiveTCTX *tctx,
> uint8_t nvt_blk, uint32_t nvt_idx)
> {
> XiveNVT nvt;
> @@ -632,16 +632,6 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> uint8_t *regs = &tctx->regs[TM_QW1_OS];
> regs[TM_IPB] |= ipb;
> }
> -
> - /*
> - * Always call xive_tctx_recompute_from_ipb(). Even if there were no
> - * escalation triggered, there could be a pending interrupt which
> - * was saved when the context was pulled and that we need to take
> - * into account by recalculating the PIPR (which is not
> - * saved/restored).
> - * It will also raise the External interrupt signal if needed.
> - */
> - xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
> }
>
> /*
> @@ -663,7 +653,17 @@ static void xive_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> /* Check the interrupt pending bits */
> if (vo) {
> - xive_tctx_need_resend(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
> + xive_tctx_restore_nvp(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
> +
> + /*
> + * Always call xive_tctx_recompute_from_ipb(). Even if there were no
> + * escalation triggered, there could be a pending interrupt which
> + * was saved when the context was pulled and that we need to take
> + * into account by recalculating the PIPR (which is not
> + * saved/restored).
> + * It will also raise the External interrupt signal if needed.
> + */
> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
> }
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index a9b188b909..53e90b8178 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -940,14 +940,14 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> return cppr;
> }
>
> -static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> +/* Restore TIMA VP context from NVP backlog */
> +static void xive2_tctx_restore_nvp(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t ring,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
> - uint8_t ipb, nsr = sig_regs[TM_NSR];
> + uint8_t ipb;
> Xive2Nvp nvp;
>
> /*
> @@ -978,14 +978,6 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> -
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - /* redistribute precluded active grp interrupt */
> - g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
> - xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> - }
> - xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> - TM_QW3_HV_PHYS : ring);
> }
>
> /*
> @@ -1028,8 +1020,20 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> /* Check the interrupt pending bits */
> if (v) {
> - xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + xive2_tctx_restore_nvp(xrtr, tctx, ring,
> nvp_blk, nvp_idx, do_restore);
> +
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
> + xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> + }
> + xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> + TM_QW3_HV_PHYS : ring);
> }
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 33/50] ppc/xive: tctx signaling registers rework
2025-05-12 3:10 ` [PATCH 33/50] ppc/xive: tctx signaling registers rework Nicholas Piggin
2025-05-14 20:49 ` Mike Kowal
@ 2025-05-15 15:58 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 15:58 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> The tctx "signaling" registers (PIPR, CPPR, NSR) raise an interrupt on
> the target CPU thread. The POOL and PHYS rings both raise hypervisor
> interrupts, so they both share one set of signaling registers in the
> PHYS ring. The PHYS NSR register contains a field that indicates which
> ring has presented the interrupt being signaled to the CPU.
>
> This sharing results in all the "alt_regs" throughout the code. alt_regs
> is not very descriptive, and worse is that the name is used for
> conversions in both directions, i.e., to find the presenting ring from
> the signaling ring, and the signaling ring from the presenting ring.
>
> Instead of alt_regs, use the names sig_regs and sig_ring, and regs and
> ring for the presenting ring being worked on. Add a helper function to
> get the sig_regs, and add some asserts to ensure the POOL regs are
> never used to signal interrupts.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 112 ++++++++++++++++++++++--------------------
> hw/intc/xive2.c | 94 ++++++++++++++++-------------------
> include/hw/ppc/xive.h | 26 +++++++++-
> 3 files changed, 126 insertions(+), 106 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 5ff1b8f024..4e0c71d684 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -80,69 +80,77 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> }
> }
>
> -uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> +/*
> + * interrupt is accepted on the presentation ring, for PHYS ring the NSR
> + * directs it to the PHYS or POOL rings.
> + */
> +uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
> {
> - uint8_t *regs = &tctx->regs[ring];
> - uint8_t nsr = regs[TM_NSR];
> + uint8_t *sig_regs = &tctx->regs[sig_ring];
> + uint8_t nsr = sig_regs[TM_NSR];
>
> - qemu_irq_lower(xive_tctx_output(tctx, ring));
> + g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
>
> - if (xive_nsr_indicates_exception(ring, nsr)) {
> - uint8_t cppr = regs[TM_PIPR];
> - uint8_t alt_ring;
> - uint8_t *alt_regs;
> + if (xive_nsr_indicates_exception(sig_ring, nsr)) {
> + uint8_t cppr = sig_regs[TM_PIPR];
> + uint8_t ring;
> + uint8_t *regs;
>
> - alt_ring = xive_nsr_exception_ring(ring, nsr);
> - alt_regs = &tctx->regs[alt_ring];
> + ring = xive_nsr_exception_ring(sig_ring, nsr);
> + regs = &tctx->regs[ring];
>
> - regs[TM_CPPR] = cppr;
> + sig_regs[TM_CPPR] = cppr;
>
> /*
> * If the interrupt was for a specific VP, reset the pending
> * buffer bit, otherwise clear the logical server indicator
> */
> - if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> - alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> + if (!xive_nsr_indicates_group_exception(sig_ring, nsr)) {
> + regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
> }
>
> /* Clear the exception from NSR */
> - regs[TM_NSR] = 0;
> + sig_regs[TM_NSR] = 0;
>
> - trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
> - alt_regs[TM_IPB], regs[TM_PIPR],
> - regs[TM_CPPR], regs[TM_NSR]);
> + trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
> + regs[TM_IPB], sig_regs[TM_PIPR],
> + sig_regs[TM_CPPR], sig_regs[TM_NSR]);
> }
>
> - return ((uint64_t)nsr << 8) | regs[TM_CPPR];
> + return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
> }
>
> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *alt_regs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> - if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
> + if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
> switch (ring) {
> case TM_QW1_OS:
> - regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> + sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> break;
> case TM_QW2_HV_POOL:
> - alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
> + sig_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
> break;
> case TM_QW3_HV_PHYS:
> - regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
> + sig_regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
> break;
> default:
> g_assert_not_reached();
> }
> trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], alt_regs[TM_PIPR],
> - alt_regs[TM_CPPR], alt_regs[TM_NSR]);
> + regs[TM_IPB], sig_regs[TM_PIPR],
> + sig_regs[TM_CPPR], sig_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> } else {
> - alt_regs[TM_NSR] = 0;
> + sig_regs[TM_NSR] = 0;
> qemu_irq_lower(xive_tctx_output(tctx, ring));
> }
> }
> @@ -159,25 +167,32 @@ void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring)
>
> static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> - uint8_t *regs = &tctx->regs[ring];
> + uint8_t *sig_regs = &tctx->regs[ring];
> uint8_t pipr_min;
> uint8_t ring_min;
>
> + g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + /* XXX: should show pool IPB for PHYS ring */
> trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], regs[TM_PIPR],
> - cppr, regs[TM_NSR]);
> + sig_regs[TM_IPB], sig_regs[TM_PIPR],
> + cppr, sig_regs[TM_NSR]);
>
> if (cppr > XIVE_PRIORITY_MAX) {
> cppr = 0xff;
> }
>
> - tctx->regs[ring + TM_CPPR] = cppr;
> + sig_regs[TM_CPPR] = cppr;
>
> /*
> * Recompute the PIPR based on local pending interrupts. The PHYS
> * ring must take the minimum of both the PHYS and POOL PIPR values.
> */
> - pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> + pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
> ring_min = ring;
>
> /* PHYS updates also depend on POOL values */
> @@ -186,7 +201,6 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> /* POOL values only matter if POOL ctx is valid */
> if (pool_regs[TM_WORD2] & 0x80) {
> -
> uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
>
> /*
> @@ -200,7 +214,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - regs[TM_PIPR] = pipr_min;
> + sig_regs[TM_PIPR] = pipr_min;
>
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, 0);
> @@ -208,56 +222,50 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
>
> void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> - {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *alt_regs = &tctx->regs[alt_ring];
> +{
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> if (group_level == 0) {
> /* VP-specific */
> regs[TM_IPB] |= xive_priority_to_ipb(priority);
> - alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> } else {
> /* VP-group */
> - alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> + sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> }
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> /* Does not support a presented group interrupt */
> - g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
> + g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
>
> - aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> + sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> xive_tctx_notify(tctx, ring, 0);
> }
>
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *aregs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
> uint8_t pipr = xive_priority_to_pipr(priority);
>
> if (group_level == 0) {
> regs[TM_IPB] |= xive_priority_to_ipb(priority);
> - if (pipr >= aregs[TM_PIPR]) {
> + if (pipr >= sig_regs[TM_PIPR]) {
> /* VP interrupts can come here with lower priority than PIPR */
> return;
> }
> }
> g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
> - g_assert(pipr < aregs[TM_PIPR]);
> - aregs[TM_PIPR] = pipr;
> + g_assert(pipr < sig_regs[TM_PIPR]);
> + sig_regs[TM_PIPR] = pipr;
> xive_tctx_notify(tctx, ring, group_level);
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index f91109b84a..b9ee8c9e9f 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -606,11 +606,9 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
>
> static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> {
> - uint8_t *regs = &tctx->regs[ring];
> - uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
> - regs;
> - uint8_t nsr = alt_regs[TM_NSR];
> - uint8_t pipr = alt_regs[TM_PIPR];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> + uint8_t pipr = sig_regs[TM_PIPR];
> uint8_t crowd = NVx_CROWD_LVL(nsr);
> uint8_t group = NVx_GROUP_LVL(nsr);
> uint8_t nvgc_blk, end_blk, nvp_blk;
> @@ -618,19 +616,16 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> Xive2Nvgc nvgc;
> uint8_t prio_limit;
> uint32_t cfg;
> - uint8_t alt_ring;
>
> /* redistribution is only for group/crowd interrupts */
> if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> return;
> }
>
> - alt_ring = xive_nsr_exception_ring(ring, nsr);
> -
> /* Don't check return code since ring is expected to be invalidated */
> - xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
> + xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx);
>
> - trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
> + trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
>
> trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
> /* convert crowd/group to blk/idx */
> @@ -675,23 +670,11 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
>
> /* clear interrupt indication for the context */
> - alt_regs[TM_NSR] = 0;
> - alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
> + sig_regs[TM_NSR] = 0;
> + sig_regs[TM_PIPR] = sig_regs[TM_CPPR];
> xive_tctx_reset_signal(tctx, ring);
> }
>
> -static uint8_t xive2_hv_irq_ring(uint8_t nsr)
> -{
> - switch (nsr >> 6) {
> - case TM_QW3_NSR_HE_POOL:
> - return TM_QW2_HV_POOL;
> - case TM_QW3_NSR_HE_PHYS:
> - return TM_QW3_HV_PHYS;
> - default:
> - return -1;
> - }
> -}
> -
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -718,7 +701,8 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
> uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
> bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
> - uint8_t alt_ring;
> + uint8_t *sig_regs;
> +
> memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
>
> /* Skip the rest for USER or invalid contexts */
> @@ -727,12 +711,11 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
>
> /* Active group/crowd interrupts need to be redistributed */
> - alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
> - nsr = tctx->regs[alt_ring + TM_NSR];
> - if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> - /* For HV rings, only redistribute if cur_ring matches NSR */
> - if ((cur_ring == TM_QW1_OS) ||
> - (cur_ring == xive2_hv_irq_ring(nsr))) {
> + sig_regs = xive_tctx_signal_regs(tctx, ring);
> + nsr = sig_regs[TM_NSR];
> + if (xive_nsr_indicates_group_exception(cur_ring, nsr)) {
> + /* Ensure ring matches NSR (for HV NSR POOL vs PHYS rings) */
> + if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
> xive2_redistribute(xrtr, tctx, cur_ring);
> }
> }
> @@ -1118,7 +1101,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> /* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> - uint8_t *regs = &tctx->regs[ring];
> + uint8_t *sig_regs = &tctx->regs[ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> uint8_t old_cppr, backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> @@ -1127,33 +1110,41 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> uint32_t nvp_idx;
> Xive2Nvp nvp;
> int rc;
> - uint8_t nsr = regs[TM_NSR];
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
>
> + /* XXX: should show pool IPB for PHYS ring */
> trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], regs[TM_PIPR],
> + sig_regs[TM_IPB], sig_regs[TM_PIPR],
> cppr, nsr);
>
> if (cppr > XIVE_PRIORITY_MAX) {
> cppr = 0xff;
> }
>
> - old_cppr = regs[TM_CPPR];
> - regs[TM_CPPR] = cppr;
> + old_cppr = sig_regs[TM_CPPR];
> + sig_regs[TM_CPPR] = cppr;
>
> /* Handle increased CPPR priority (lower value) */
> if (cppr < old_cppr) {
> - if (cppr <= regs[TM_PIPR]) {
> + if (cppr <= sig_regs[TM_PIPR]) {
> /* CPPR lowered below PIPR, must un-present interrupt */
> if (xive_nsr_indicates_exception(ring, nsr)) {
> if (xive_nsr_indicates_group_exception(ring, nsr)) {
> /* redistribute precluded active grp interrupt */
> - xive2_redistribute(xrtr, tctx, ring);
> + xive2_redistribute(xrtr, tctx,
> + xive_nsr_exception_ring(ring, nsr));
> return;
> }
> }
>
> /* interrupt is VP directed, pending in IPB */
> - regs[TM_PIPR] = cppr;
> + sig_regs[TM_PIPR] = cppr;
> xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
> return;
> } else {
> @@ -1174,9 +1165,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> * be adjusted below if needed in case of pending group interrupts.
> */
> again:
> - pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
> - group_enabled = !!regs[TM_LGS];
> - lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
> + pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
> + group_enabled = !!sig_regs[TM_LGS];
> + lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
> ring_min = ring;
> group_level = 0;
>
> @@ -1265,7 +1256,7 @@ again:
> }
>
> /* PIPR should not be set to a value greater than CPPR */
> - regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> + sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
>
> /* CPPR has changed, check if we need to raise a pending exception */
> xive_tctx_notify(tctx, ring_min, group_level);
> @@ -1490,9 +1481,7 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
>
> bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> {
> - /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *alt_regs = &tctx->regs[alt_ring];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
>
> /*
> * The xive2_presenter_tctx_match() above tells if there's a match
> @@ -1500,7 +1489,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> * priority to know if the thread can take the interrupt now or if
> * it is precluded.
> */
> - if (priority < alt_regs[TM_PIPR]) {
> + if (priority < sig_regs[TM_PIPR]) {
> return false;
> }
> return true;
> @@ -1640,14 +1629,13 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> &match)) {
> XiveTCTX *tctx = match.tctx;
> uint8_t ring = match.ring;
> - uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
> - uint8_t *aregs = &tctx->regs[alt_ring];
> - uint8_t nsr = aregs[TM_NSR];
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> uint8_t group_level;
>
> - if (priority < aregs[TM_PIPR] &&
> - xive_nsr_indicates_group_exception(alt_ring, nsr)) {
> - xive2_redistribute(xrtr, tctx, alt_ring);
> + if (priority < sig_regs[TM_PIPR] &&
> + xive_nsr_indicates_group_exception(ring, nsr)) {
> + xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> }
>
> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 0d6b11e818..a3c2f50ece 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -539,7 +539,7 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
> }
>
> /*
> - * XIVE Thread Interrupt Management Aera (TIMA)
> + * XIVE Thread Interrupt Management Area (TIMA)
> *
> * This region gives access to the registers of the thread interrupt
> * management context. It is four page wide, each page providing a
> @@ -551,6 +551,30 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
> #define XIVE_TM_OS_PAGE 0x2
> #define XIVE_TM_USER_PAGE 0x3
>
> +/*
> + * The TCTX (TIMA) has 4 rings (phys, pool, os, user), but only signals
> + * (raises an interrupt on) the CPU from 3 of them. Phys and pool both
> + * cause a hypervisor privileged interrupt so interrupts presented on
> + * those rings signal using the phys ring. This helper returns the signal
> + * regs from the given ring.
> + */
> +static inline uint8_t *xive_tctx_signal_regs(XiveTCTX *tctx, uint8_t ring)
> +{
> + /*
> + * This is a good point to add invariants to ensure nothing has tried to
> + * signal using the POOL ring.
> + */
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + if (ring == TM_QW2_HV_POOL) {
> + /* POOL and PHYS rings share the signal regs (PIPR, NSR, CPPR) */
> + ring = TM_QW3_HV_PHYS;
> + }
> + return &tctx->regs[ring];
> +}
> +
> void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size);
> uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented
2025-05-12 3:10 ` [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Nicholas Piggin
2025-05-15 15:16 ` Mike Kowal
@ 2025-05-15 16:04 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:04 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> The relationship between an interrupt signaled in the TIMA and the QEMU
> irq line to the processor should be 1:1, so they should be raised and
> lowered together, and "just in case" lowering should be avoided (it
> could mask bugs in irq line management).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 4e0c71d684..d5dbeab6bd 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -95,8 +95,6 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
> g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
>
> - qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
> -
> if (xive_nsr_indicates_exception(sig_ring, nsr)) {
> uint8_t cppr = sig_regs[TM_PIPR];
> uint8_t ring;
> @@ -117,6 +115,7 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
>
> /* Clear the exception from NSR */
> sig_regs[TM_NSR] = 0;
> + qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
>
> trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
> regs[TM_IPB], sig_regs[TM_PIPR],
* Re: [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function
2025-05-12 3:10 ` [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Nicholas Piggin
2025-05-15 15:18 ` Mike Kowal
@ 2025-05-15 16:05 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:05 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Have xive_tctx_notify() also set the new PIPR value and rename it to
> xive_tctx_pipr_set(). This can replace the last xive_tctx_pipr_update()
> caller because it does not need to update IPB (it already sets it).
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 39 +++++++++++----------------------------
> hw/intc/xive2.c | 16 +++++++---------
> include/hw/ppc/xive.h | 5 ++---
> 3 files changed, 20 insertions(+), 40 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index d5dbeab6bd..4659821d4a 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -125,12 +125,16 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
> return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
> }
>
> -void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> +/* Change PIPR and calculate NSR and irq based on PIPR, CPPR, group */
> +void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
> + uint8_t group_level)
> {
> uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> - if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
> + sig_regs[TM_PIPR] = pipr;
> +
> + if (pipr < sig_regs[TM_CPPR]) {
> switch (ring) {
> case TM_QW1_OS:
> sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
> @@ -145,7 +149,7 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
> g_assert_not_reached();
> }
> trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
> - regs[TM_IPB], sig_regs[TM_PIPR],
> + regs[TM_IPB], pipr,
> sig_regs[TM_CPPR], sig_regs[TM_NSR]);
> qemu_irq_raise(xive_tctx_output(tctx, ring));
> } else {
> @@ -213,29 +217,10 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
>
> - sig_regs[TM_PIPR] = pipr_min;
> -
> - /* CPPR has changed, check if we need to raise a pending exception */
> - xive_tctx_notify(tctx, ring_min, 0);
> + /* CPPR has changed, this may present or preclude a pending exception */
> + xive_tctx_pipr_set(tctx, ring_min, pipr_min, 0);
> }
>
> -void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> - uint8_t group_level)
> -{
> - uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> - uint8_t *regs = &tctx->regs[ring];
> -
> - if (group_level == 0) {
> - /* VP-specific */
> - regs[TM_IPB] |= xive_priority_to_ipb(priority);
> - sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> - } else {
> - /* VP-group */
> - sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
> - }
> - xive_tctx_notify(tctx, ring, group_level);
> - }
> -
> static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> {
> uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> @@ -244,8 +229,7 @@ static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
> /* Does not support a presented group interrupt */
> g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
>
> - sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
> - xive_tctx_notify(tctx, ring, 0);
> + xive_tctx_pipr_set(tctx, ring, xive_ipb_to_pipr(regs[TM_IPB]), 0);
> }
>
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> @@ -264,8 +248,7 @@ void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> }
> g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
> g_assert(pipr < sig_regs[TM_PIPR]);
> - sig_regs[TM_PIPR] = pipr;
> - xive_tctx_notify(tctx, ring, group_level);
> + xive_tctx_pipr_set(tctx, ring, pipr, group_level);
> }
>
> /*
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index b9ee8c9e9f..8c8dab3aa2 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -966,10 +966,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> /*
> - * Compute the PIPR based on the restored state.
> + * Set the PIPR/NSR based on the restored state.
> * It will raise the External interrupt signal if needed.
> */
> - xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> + xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> }
>
> /*
> @@ -1144,8 +1144,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
>
> /* interrupt is VP directed, pending in IPB */
> - sig_regs[TM_PIPR] = cppr;
> - xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
> + xive_tctx_pipr_set(tctx, ring, cppr, 0);
> return;
> } else {
> /* CPPR was lowered, but still above PIPR. No action needed. */
> @@ -1255,11 +1254,10 @@ again:
> pipr_min = backlog_prio;
> }
>
> - /* PIPR should not be set to a value greater than CPPR */
> - sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
> -
> - /* CPPR has changed, check if we need to raise a pending exception */
> - xive_tctx_notify(tctx, ring_min, group_level);
> + if (pipr_min > cppr) {
> + pipr_min = cppr;
> + }
> + xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
> }
>
> void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index a3c2f50ece..2372d1014b 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -584,12 +584,11 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
> Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
> void xive_tctx_reset(XiveTCTX *tctx);
> void xive_tctx_destroy(XiveTCTX *tctx);
> -void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> - uint8_t group_level);
> +void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> + uint8_t group_level);
> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> -void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
> uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
>
> /*
* Re: [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR
2025-05-12 3:10 ` [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR Nicholas Piggin
2025-05-15 15:24 ` Mike Kowal
@ 2025-05-15 16:06 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:06 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> The second part of the set CPPR operation is to process (or re-present)
> any pending interrupts after CPPR is adjusted.
>
> Split this presentation processing out into a standalone function that
> can be used in other places.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 137 +++++++++++++++++++++++++++---------------------
> 1 file changed, 76 insertions(+), 61 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 8c8dab3aa2..aa06bfda77 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1098,66 +1098,19 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
> }
>
> -/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> -static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> +/* Re-calculate and present pending interrupts */
> +static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> {
> - uint8_t *sig_regs = &tctx->regs[ring];
> + uint8_t *sig_regs = &tctx->regs[sig_ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t old_cppr, backlog_prio, first_group, group_level;
> + uint8_t backlog_prio, first_group, group_level;
> uint8_t pipr_min, lsmfb_min, ring_min;
> + uint8_t cppr = sig_regs[TM_CPPR];
> bool group_enabled;
> - uint8_t nvp_blk;
> - uint32_t nvp_idx;
> Xive2Nvp nvp;
> int rc;
> - uint8_t nsr = sig_regs[TM_NSR];
> -
> - g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
> -
> - g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> - g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> - g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> -
> - /* XXX: should show pool IPB for PHYS ring */
> - trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> - sig_regs[TM_IPB], sig_regs[TM_PIPR],
> - cppr, nsr);
> -
> - if (cppr > XIVE_PRIORITY_MAX) {
> - cppr = 0xff;
> - }
> -
> - old_cppr = sig_regs[TM_CPPR];
> - sig_regs[TM_CPPR] = cppr;
> -
> - /* Handle increased CPPR priority (lower value) */
> - if (cppr < old_cppr) {
> - if (cppr <= sig_regs[TM_PIPR]) {
> - /* CPPR lowered below PIPR, must un-present interrupt */
> - if (xive_nsr_indicates_exception(ring, nsr)) {
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - /* redistribute precluded active grp interrupt */
> - xive2_redistribute(xrtr, tctx,
> - xive_nsr_exception_ring(ring, nsr));
> - return;
> - }
> - }
>
> - /* interrupt is VP directed, pending in IPB */
> - xive_tctx_pipr_set(tctx, ring, cppr, 0);
> - return;
> - } else {
> - /* CPPR was lowered, but still above PIPR. No action needed. */
> - return;
> - }
> - }
> -
> - /* CPPR didn't change, nothing needs to be done */
> - if (cppr == old_cppr) {
> - return;
> - }
> -
> - /* CPPR priority decreased (higher value) */
> + g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
>
> /*
> * Recompute the PIPR based on local pending interrupts. It will
> @@ -1167,11 +1120,11 @@ again:
> pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
> group_enabled = !!sig_regs[TM_LGS];
> lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
> - ring_min = ring;
> + ring_min = sig_ring;
> group_level = 0;
>
> /* PHYS updates also depend on POOL values */
> - if (ring == TM_QW3_HV_PHYS) {
> + if (sig_ring == TM_QW3_HV_PHYS) {
> uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
>
> /* POOL values only matter if POOL ctx is valid */
> @@ -1201,20 +1154,25 @@ again:
> }
> }
>
> - rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> - if (rc) {
> - qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
> - return;
> - }
> -
> if (group_enabled &&
> lsmfb_min < cppr &&
> lsmfb_min < pipr_min) {
> +
> + uint8_t nvp_blk;
> + uint32_t nvp_idx;
> +
> /*
> * Thread has seen a group interrupt with a higher priority
> * than the new cppr or pending local interrupt. Check the
> * backlog
> */
> + rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> + if (rc) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid "
> + "context\n");
> + return;
> + }
> +
> if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
> nvp_blk, nvp_idx);
> @@ -1260,6 +1218,63 @@ again:
> xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
> }
>
> +/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> +static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
> +{
> + uint8_t *sig_regs = &tctx->regs[sig_ring];
> + Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> + uint8_t old_cppr;
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
> +
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
> + g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
> +
> + /* XXX: should show pool IPB for PHYS ring */
> + trace_xive_tctx_set_cppr(tctx->cs->cpu_index, sig_ring,
> + sig_regs[TM_IPB], sig_regs[TM_PIPR],
> + cppr, nsr);
> +
> + if (cppr > XIVE_PRIORITY_MAX) {
> + cppr = 0xff;
> + }
> +
> + old_cppr = sig_regs[TM_CPPR];
> + sig_regs[TM_CPPR] = cppr;
> +
> + /* Handle increased CPPR priority (lower value) */
> + if (cppr < old_cppr) {
> + if (cppr <= sig_regs[TM_PIPR]) {
> + /* CPPR lowered below PIPR, must un-present interrupt */
> + if (xive_nsr_indicates_exception(sig_ring, nsr)) {
> + if (xive_nsr_indicates_group_exception(sig_ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + xive2_redistribute(xrtr, tctx,
> + xive_nsr_exception_ring(sig_ring, nsr));
> + return;
> + }
> + }
> +
> + /* interrupt is VP directed, pending in IPB */
> + xive_tctx_pipr_set(tctx, sig_ring, cppr, 0);
> + return;
> + } else {
> + /* CPPR was lowered, but still above PIPR. No action needed. */
> + return;
> + }
> + }
> +
> + /* CPPR didn't change, nothing needs to be done */
> + if (cppr == old_cppr) {
> + return;
> + }
> +
> + /* CPPR priority decreased (higher value) */
> + xive2_tctx_process_pending(tctx, sig_ring);
> +}
> +
> void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size)
> {
* Re: [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push
2025-05-12 3:10 ` [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push Nicholas Piggin
2025-05-15 15:25 ` Mike Kowal
@ 2025-05-15 16:06 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:06 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> OS-push operation must re-present pending interrupts. Use the
> newly created xive2_tctx_process_pending() function instead of
> duplicating the logic.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 42 ++++++++++--------------------------------
> 1 file changed, 10 insertions(+), 32 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index aa06bfda77..0fdf6a4f20 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -903,18 +903,14 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> return cppr;
> }
>
> +static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
> +
> static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - XivePresenter *xptr = XIVE_PRESENTER(xrtr);
> - uint8_t ipb;
> - uint8_t backlog_level;
> - uint8_t group_level;
> - uint8_t first_group;
> - uint8_t backlog_prio;
> - uint8_t group_prio;
> uint8_t *regs = &tctx->regs[TM_QW1_OS];
> + uint8_t ipb;
> Xive2Nvp nvp;
>
> /*
> @@ -946,30 +942,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> - backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
> - backlog_level = 0;
> -
> - first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
> - if (first_group && regs[TM_LSMFB] < backlog_prio) {
> - group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
> - first_group, &group_level);
> - regs[TM_LSMFB] = group_prio;
> - if (regs[TM_LGS] && group_prio < backlog_prio &&
> - group_prio < regs[TM_CPPR]) {
> -
> - /* VP can take a group interrupt */
> - xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
> - group_prio, group_level);
> - backlog_prio = group_prio;
> - backlog_level = group_level;
> - }
> - }
>
> - /*
> - * Set the PIPR/NSR based on the restored state.
> - * It will raise the External interrupt signal if needed.
> - */
> - xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
> + xive2_tctx_process_pending(tctx, TM_QW1_OS);
> }
>
> /*
> @@ -1103,8 +1077,12 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> {
> uint8_t *sig_regs = &tctx->regs[sig_ring];
> Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
> - uint8_t backlog_prio, first_group, group_level;
> - uint8_t pipr_min, lsmfb_min, ring_min;
> + uint8_t backlog_prio;
> + uint8_t first_group;
> + uint8_t group_level;
> + uint8_t pipr_min;
> + uint8_t lsmfb_min;
> + uint8_t ring_min;
> uint8_t cppr = sig_regs[TM_CPPR];
> bool group_enabled;
> Xive2Nvp nvp;
* Re: [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set
2025-05-12 3:10 ` [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Nicholas Piggin
2025-05-15 15:26 ` Mike Kowal
@ 2025-05-15 16:07 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:07 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> When CPPR priority is decreased, pending interrupts do not need to be
> re-checked if one is already presented because by definition that will
> be the highest priority.
>
> This prevents a presented group interrupt from being lost.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 0fdf6a4f20..ace5871706 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1250,7 +1250,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
> }
>
> /* CPPR priority decreased (higher value) */
> - xive2_tctx_process_pending(tctx, sig_ring);
> + if (!xive_nsr_indicates_exception(sig_ring, nsr)) {
> + xive2_tctx_process_pending(tctx, sig_ring);
> + }
> }
>
> void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
* Re: [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed
2025-05-12 3:10 ` [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed Nicholas Piggin
2025-05-15 15:28 ` Mike Kowal
@ 2025-05-15 16:08 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:08 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Add some assertions to try to ensure presented group interrupts do
> not get lost without being redistributed, if they become precluded
> by CPPR or preempted by a higher priority interrupt.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 2 ++
> hw/intc/xive2.c | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 4659821d4a..81af59f0ec 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -132,6 +132,8 @@ void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
> uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> + g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
> +
> sig_regs[TM_PIPR] = pipr;
>
> if (pipr < sig_regs[TM_CPPR]) {
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index ace5871706..e3060810d3 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1089,6 +1089,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> int rc;
>
> g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
> + g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
>
> /*
> * Recompute the PIPR based on local pending interrupts. It will
* Re: [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring
2025-05-12 3:10 ` [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring Nicholas Piggin
2025-05-15 15:36 ` Mike Kowal
@ 2025-05-15 16:09 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:09 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> In preparation to implement POOL context push, add support for POOL
> NVP context save/restore.
>
> The NVP p bit is defined in the spec as follows:
>
> If TRUE, the CPPR of a Pool VP in the NVP is updated during store of
> the context with the CPPR of the Hard context it was running under.
>
> It's not clear whether non-pool VPs always or never get CPPR updated.
> Before this patch, OS contexts always save CPPR, so we will assume that
> is the behaviour.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 51 +++++++++++++++++++++++++------------
> include/hw/ppc/xive2_regs.h | 1 +
> 2 files changed, 36 insertions(+), 16 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index e3060810d3..d899c1fb14 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -512,12 +512,13 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
> */
>
> static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> - uint8_t nvp_blk, uint32_t nvp_idx,
> - uint8_t ring)
> + uint8_t ring,
> + uint8_t nvp_blk, uint32_t nvp_idx)
> {
> CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
> uint32_t pir = env->spr_cb[SPR_PIR].default_value;
> Xive2Nvp nvp;
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
>
> if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
> @@ -553,7 +554,14 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
> - nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
> +
> + if ((nvp.w0 & NVP2_W0_P) || ring != TM_QW2_HV_POOL) {
> + /*
> + * Non-pool contexts always save CPPR (ignore p bit). XXX: Clarify
> + * whether that is the correct behaviour.
> + */
> + nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, sig_regs[TM_CPPR]);
> + }
> if (nvp.w0 & NVP2_W0_L) {
> /*
> * Typically not used. If LSMFB is restored with 0, it will
> @@ -722,7 +730,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> - xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
> + xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
> }
>
> /*
> @@ -863,12 +871,15 @@ void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tm_pull_ctx_ol(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
> }
>
> -static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> - uint8_t nvp_blk, uint32_t nvp_idx,
> - Xive2Nvp *nvp)
> +static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> + uint8_t ring,
> + uint8_t nvp_blk, uint32_t nvp_idx,
> + Xive2Nvp *nvp)
> {
> CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
> uint32_t pir = env->spr_cb[SPR_PIR].default_value;
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t *regs = &tctx->regs[ring];
> uint8_t cppr;
>
> if (!xive2_nvp_is_hw(nvp)) {
> @@ -881,10 +892,10 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> nvp->w2 = xive_set_field32(NVP2_W2_CPPR, nvp->w2, 0);
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
>
> - tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
> - tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
> - tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
> - tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
> + sig_regs[TM_CPPR] = cppr;
> + regs[TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
> + regs[TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
> + regs[TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
>
> nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
> nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
> @@ -893,9 +904,18 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> /*
> * Checkout privilege: 0:OS, 1:Pool, 2:Hard
> *
> - * TODO: we only support OS push/pull
> + * TODO: we don't support hard push/pull
> */
> - nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
> + switch (ring) {
> + case TM_QW1_OS:
> + nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
> + break;
> + case TM_QW2_HV_POOL:
> + nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 1);
> + break;
> + default:
> + g_assert_not_reached();
> + }
>
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 1);
>
> @@ -930,9 +950,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> /* Automatically restore thread context registers */
> - if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE &&
> - do_restore) {
> - xive2_tctx_restore_os_ctx(xrtr, tctx, nvp_blk, nvp_idx, &nvp);
> + if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
> + xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
> }
>
> ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index f82054661b..2a3e60abad 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -158,6 +158,7 @@ typedef struct Xive2Nvp {
> #define NVP2_W0_L PPC_BIT32(8)
> #define NVP2_W0_G PPC_BIT32(9)
> #define NVP2_W0_T PPC_BIT32(10)
> +#define NVP2_W0_P PPC_BIT32(11)
> #define NVP2_W0_ESC_END PPC_BIT32(25) /* 'N' bit 0:ESB 1:END */
> #define NVP2_W0_PGOFIRST PPC_BITMASK32(26, 31)
> uint32_t w1;
^ permalink raw reply [flat|nested] 192+ messages in thread
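The checkout-privilege switch in the hunk above (0: OS, 1: Pool, 2: Hard) can be sketched as follows. This is a simplified illustration, not the QEMU code; the TIMA ring offsets follow the layout used in the patch, and hard push/pull stays unsupported, matching the patch's TODO.

```c
#include <stdint.h>
#include <stdlib.h>

#define TM_QW1_OS       0x10
#define TM_QW2_HV_POOL  0x20

/* Map the ring being restored to the NVP word-1 checkout privilege. */
static uint8_t co_priv_for_ring(uint8_t ring)
{
    switch (ring) {
    case TM_QW1_OS:
        return 0;       /* OS checkout */
    case TM_QW2_HV_POOL:
        return 1;       /* Pool checkout */
    default:
        abort();        /* Hard (2) push/pull not supported, as in the patch */
    }
}
```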
* Re: [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt
2025-05-12 3:10 ` [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Nicholas Piggin
2025-05-15 15:43 ` Mike Kowal
@ 2025-05-15 16:10 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:10 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> When the pool context is pulled, the shared pool/phys signal is
> reset, which loses the qemu irq if a phys interrupt was presented.
>
> Only reset the signal if a pool irq was presented.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 18 ++++++++++--------
> 1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index d899c1fb14..aeeb901b6a 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -727,20 +727,22 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_redistribute(xrtr, tctx, cur_ring);
> }
> }
> +
> + /*
> + * Lower external interrupt line of requested ring and below except for
> + * USER, which doesn't exist.
> + */
> + if (xive_nsr_indicates_exception(cur_ring, nsr)) {
> + if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
> + xive_tctx_reset_signal(tctx, cur_ring);
> + }
> + }
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
> }
>
> - /*
> - * Lower external interrupt line of requested ring and below except for
> - * USER, which doesn't exist.
> - */
> - for (cur_ring = TM_QW1_OS; cur_ring <= ring;
> - cur_ring += XIVE_TM_RING_SIZE) {
> - xive_tctx_reset_signal(tctx, cur_ring);
> - }
> return target_ringw2;
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
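The fix above hinges on decoding which ring an exception in the shared pool/phys signal registers actually targets. A simplified model of that decode follows; the bit layout (OS NSR bit 0x80, and the HE field in NSR bits 7:6 of the PHYS ring, encoding 0 = none, 1 = pool, 2 = phys, 3 = LSI) follows the XIVE TIMA conventions used in the series, but this is an illustrative reimplementation, not the QEMU helpers.

```c
#include <stdint.h>
#include <stdbool.h>

#define TM_QW1_OS       0x10
#define TM_QW2_HV_POOL  0x20
#define TM_QW3_HV_PHYS  0x30

/* OS ring: NSR bit 0x80 flags a pending exception.
 * PHYS ring (signals for both POOL and PHYS): HE field in bits 7:6. */
static bool nsr_indicates_exception(uint8_t ring, uint8_t nsr)
{
    if (ring == TM_QW1_OS) {
        return !!(nsr & 0x80);
    }
    return (nsr >> 6) != 0;
}

/* Which ring the pending exception belongs to. */
static uint8_t nsr_exception_ring(uint8_t ring, uint8_t nsr)
{
    if (ring == TM_QW1_OS) {
        return TM_QW1_OS;
    }
    return ((nsr >> 6) & 3) == 1 ? TM_QW2_HV_POOL : TM_QW3_HV_PHYS;
}
```

With a phys exception pending (HE = 2), a pool pull sees `nsr_exception_ring() == TM_QW3_HV_PHYS != TM_QW2_HV_POOL` and leaves the signal alone, which is the lost-irq case the patch fixes.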
* Re: [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context
2025-05-12 3:10 ` [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context Nicholas Piggin
2025-05-15 15:46 ` Mike Kowal
@ 2025-05-15 16:11 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:11 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> After pulling the pool context, if a pool irq had been presented and
> was cleared in the process, there could be a pending irq in phys that
> should be presented. Process the phys irq ring after pulling the pool
> ring to catch this case and avoid losing irqs.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 3 +++
> hw/intc/xive2.c | 16 ++++++++++++++--
> 2 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 81af59f0ec..aeca66e56e 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -320,6 +320,9 @@ static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> xive_tctx_reset_signal(tctx, TM_QW1_OS);
> xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
> + /* Re-check phys for interrupts if pool was disabled */
> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW3_HV_PHYS);
> +
> return qw2w2;
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index aeeb901b6a..917ecbaae4 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -683,6 +683,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> xive_tctx_reset_signal(tctx, ring);
> }
>
> +static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
> +
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -739,6 +741,18 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> }
> }
>
> + if (ring == TM_QW2_HV_POOL) {
> + /* Re-check phys for interrupts if pool was disabled */
> + nsr = tctx->regs[TM_QW3_HV_PHYS + TM_NSR];
> + if (xive_nsr_indicates_exception(TM_QW3_HV_PHYS, nsr)) {
> + /* Ring must be PHYS because POOL would have been redistributed */
> + g_assert(xive_nsr_exception_ring(TM_QW3_HV_PHYS, nsr) ==
> + TM_QW3_HV_PHYS);
> + } else {
> + xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
> + }
> + }
> +
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
> }
> @@ -925,8 +939,6 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> return cppr;
> }
>
> -static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
> -
> static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
* Re: [PATCH 43/50] ppc/xive: Check TIMA operations validity
2025-05-12 3:10 ` [PATCH 43/50] ppc/xive: Check TIMA operations validity Nicholas Piggin
2025-05-15 15:47 ` Mike Kowal
@ 2025-05-15 16:12 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:12 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Certain TIMA operations should only be performed when a ring is valid,
> others when the ring is invalid, and they are considered undefined if
> used incorrectly. Add checks for this condition.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 196 +++++++++++++++++++++++++-----------------
> include/hw/ppc/xive.h | 1 +
> 2 files changed, 116 insertions(+), 81 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index aeca66e56e..d5bbd8f4c6 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -25,6 +25,19 @@
> /*
> * XIVE Thread Interrupt Management context
> */
> +bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring)
> +{
> + uint8_t cur_ring;
> +
> + for (cur_ring = ring; cur_ring <= TM_QW3_HV_PHYS;
> + cur_ring += XIVE_TM_RING_SIZE) {
> + if (!(tctx->regs[cur_ring + TM_WORD2] & 0x80)) {
> + return false;
> + }
> + }
> + return true;
> +}
> +
> bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
> {
> switch (ring) {
> @@ -663,6 +676,8 @@ typedef struct XiveTmOp {
> uint8_t page_offset;
> uint32_t op_offset;
> unsigned size;
> + bool hw_ok;
> + bool sw_ok;
> void (*write_handler)(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset,
> uint64_t value, unsigned size);
> @@ -675,34 +690,34 @@ static const XiveTmOp xive_tm_operations[] = {
> * MMIOs below 2K : raw values and special operations without side
> * effects
> */
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive_tm_set_os_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive_tm_push_os_ctx,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive_tm_set_hv_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
> - xive_tm_vt_poll },
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, true, true,
> + xive_tm_set_os_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, true, true,
> + xive_tm_push_os_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> + xive_tm_set_hv_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
> + xive_tm_vt_push, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> + NULL, xive_tm_vt_poll },
>
> /* MMIOs above 2K : special operations with side effects */
> - { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
> - xive_tm_ack_os_reg },
> - { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, NULL,
> - xive_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, NULL,
> - xive_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
> - xive_tm_ack_hv_reg },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
> - xive_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
> - xive_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
> - xive_tm_pull_phys_ctx },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
> + NULL, xive_tm_ack_os_reg },
> + { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, true, false,
> + xive_tm_set_os_pending, NULL },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
> + NULL, xive_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, true, false,
> + NULL, xive_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, true, false,
> + NULL, xive_tm_ack_hv_reg },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, true, false,
> + NULL, xive_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, true, false,
> + NULL, xive_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, true, false,
> + NULL, xive_tm_pull_phys_ctx },
> };
>
> static const XiveTmOp xive2_tm_operations[] = {
> @@ -710,52 +725,48 @@ static const XiveTmOp xive2_tm_operations[] = {
> * MMIOs below 2K : raw values and special operations without side
> * effects
> */
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, xive2_tm_set_os_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, xive2_tm_push_os_ctx,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 8, xive2_tm_push_os_ctx,
> - NULL },
> - { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, xive_tm_set_os_lgs,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, xive2_tm_set_hv_cppr,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
> - xive_tm_vt_poll },
> - { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, xive2_tm_set_hv_target,
> - NULL },
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR, 1, true, true,
> + xive2_tm_set_os_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 4, true, true,
> + xive2_tm_push_os_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2, 8, true, true,
> + xive2_tm_push_os_ctx, NULL },
> + { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, true, true,
> + xive_tm_set_os_lgs, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> + xive2_tm_set_hv_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> + NULL, xive_tm_vt_poll },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, true, true,
> + xive2_tm_set_hv_target, NULL },
>
> /* MMIOs above 2K : special operations with side effects */
> - { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, NULL,
> - xive_tm_ack_os_reg },
> - { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, xive_tm_set_os_pending,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, NULL,
> - xive2_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, NULL,
> - xive2_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, NULL,
> - xive2_tm_pull_os_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, NULL,
> - xive_tm_ack_hv_reg },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, NULL,
> - xive2_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, NULL,
> - xive2_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, NULL,
> - xive2_tm_pull_pool_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, xive2_tm_pull_os_ctx_ol,
> - NULL },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, NULL,
> - xive2_tm_pull_phys_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, NULL,
> - xive2_tm_pull_phys_ctx },
> - { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
> - NULL },
> - { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
> - NULL },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
> + NULL, xive_tm_ack_os_reg },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, true, false,
> + NULL, xive2_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
> + NULL, xive2_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 8, true, false,
> + NULL, xive2_tm_pull_os_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG, 2, true, false,
> + NULL, xive_tm_ack_hv_reg },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2, 4, true, false,
> + NULL, xive2_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 4, true, false,
> + NULL, xive2_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX, 8, true, false,
> + NULL, xive2_tm_pull_pool_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL, 1, true, false,
> + xive2_tm_pull_os_ctx_ol, NULL },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2, 4, true, false,
> + NULL, xive2_tm_pull_phys_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX, 1, true, false,
> + NULL, xive2_tm_pull_phys_ctx },
> + { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, true, false,
> + xive2_tm_pull_phys_ctx_ol, NULL },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, true, false,
> + xive2_tm_ack_os_el, NULL },
> };
>
> static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
> @@ -797,18 +808,28 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size)
> {
> const XiveTmOp *xto;
> + uint8_t ring = offset & TM_RING_OFFSET;
> + bool is_valid = xive_ring_valid(tctx, ring);
> + bool hw_owned = is_valid;
>
> trace_xive_tctx_tm_write(tctx->cs->cpu_index, offset, size, value);
>
> - /*
> - * TODO: check V bit in Q[0-3]W2
> - */
> -
> /*
> * First, check for special operations in the 2K region
> */
> + xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> + if (xto) {
> + if (hw_owned && !xto->hw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + if (!hw_owned && !xto->sw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to SW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + }
> +
> if (offset & TM_SPECIAL_OP) {
> - xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
> "@%"HWADDR_PRIx" size %d\n", offset, size);
> @@ -821,7 +842,6 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> /*
> * Then, for special operations in the region below 2K.
> */
> - xto = xive_tm_find_op(tctx->xptr, offset, size, true);
> if (xto) {
> xto->write_handler(xptr, tctx, offset, value, size);
> return;
> @@ -830,6 +850,11 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> /*
> * Finish with raw access to the register values
> */
> + if (hw_owned) {
> + /* Store context operations are dangerous when context is valid */
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> xive_tm_raw_write(tctx, offset, value, size);
> }
>
> @@ -837,17 +862,27 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> unsigned size)
> {
> const XiveTmOp *xto;
> + uint8_t ring = offset & TM_RING_OFFSET;
> + bool is_valid = xive_ring_valid(tctx, ring);
> + bool hw_owned = is_valid;
> uint64_t ret;
>
> - /*
> - * TODO: check V bit in Q[0-3]W2
> - */
> + xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> + if (xto) {
> + if (hw_owned && !xto->hw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to HW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + if (!hw_owned && !xto->sw_ok) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to SW TIMA "
> + "@%"HWADDR_PRIx" size %d\n", offset, size);
> + }
> + }
>
> /*
> * First, check for special operations in the 2K region
> */
> if (offset & TM_SPECIAL_OP) {
> - xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (!xto) {
> qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
> "@%"HWADDR_PRIx" size %d\n", offset, size);
> @@ -860,7 +895,6 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> /*
> * Then, for special operations in the region below 2K.
> */
> - xto = xive_tm_find_op(tctx->xptr, offset, size, false);
> if (xto) {
> ret = xto->read_handler(xptr, tctx, offset, size);
> goto out;
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 2372d1014b..b7ca8544e4 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -365,6 +365,7 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
> return *((uint32_t *) &ring[TM_WORD2]);
> }
>
> +bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring);
> bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
> bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
> uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
^ permalink raw reply [flat|nested] 192+ messages in thread
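The validity check that the patch above introduces (xive_ring_valid) can be modeled as below. This is a standalone sketch, not the QEMU code: the ring offsets, TM_WORD2 offset, and the 0x80 valid bit mirror the TIMA layout in the hunk, and `regs[]` stands in for `XiveTCTX::regs`.

```c
#include <stdint.h>
#include <stdbool.h>

#define TM_QW1_OS         0x10
#define TM_QW2_HV_POOL    0x20
#define TM_QW3_HV_PHYS    0x30
#define TM_WORD2          0x8
#define XIVE_TM_RING_SIZE 0x10

/* A ring is usable only if it and every higher-privilege ring up to
 * PHYS have the valid bit (0x80) set in their WORD2 byte. */
static bool ring_valid(const uint8_t *regs, uint8_t ring)
{
    for (uint8_t cur = ring; cur <= TM_QW3_HV_PHYS;
         cur += XIVE_TM_RING_SIZE) {
        if (!(regs[cur + TM_WORD2] & 0x80)) {
            return false;
        }
    }
    return true;
}
```

The TIMA write/read paths then use this to flag "undefined" operations: an op marked `hw_ok = false` warns when the ring is valid (HW-owned), and `sw_ok = false` warns when it is not.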
* Re: [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op
2025-05-12 3:10 ` [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op Nicholas Piggin
2025-05-15 15:48 ` Mike Kowal
@ 2025-05-15 16:13 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:13 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Implement pool context push TIMA op.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 4 ++++
> hw/intc/xive2.c | 50 ++++++++++++++++++++++++++++--------------
> include/hw/ppc/xive2.h | 2 ++
> 3 files changed, 39 insertions(+), 17 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index d5bbd8f4c6..979031a587 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -733,6 +733,10 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive2_tm_push_os_ctx, NULL },
> { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS, 1, true, true,
> xive_tm_set_os_lgs, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 4, true, true,
> + xive2_tm_push_pool_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
> + xive2_tm_push_pool_ctx, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> xive2_tm_set_hv_cppr, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 917ecbaae4..21cd07df68 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -583,6 +583,7 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 1);
> }
>
> +/* POOL cam is the same as OS cam encoding */
> static void xive2_cam_decode(uint32_t cam, uint8_t *nvp_blk,
> uint32_t *nvp_idx, bool *valid, bool *hw)
> {
> @@ -940,10 +941,11 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> }
>
> static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> + uint8_t ring,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - uint8_t *regs = &tctx->regs[TM_QW1_OS];
> + uint8_t *regs = &tctx->regs[ring];
> uint8_t ipb;
> Xive2Nvp nvp;
>
> @@ -965,7 +967,7 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
>
> /* Automatically restore thread context registers */
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
> - xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
> + xive2_tctx_restore_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx, &nvp);
> }
>
> ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
> @@ -976,48 +978,62 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
>
> - xive2_tctx_process_pending(tctx, TM_QW1_OS);
> + xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> + TM_QW3_HV_PHYS : ring);
> }
>
> /*
> - * Updating the OS CAM line can trigger a resend of interrupt
> + * Updating the ring CAM line can trigger a resend of interrupt
> */
> -void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> - hwaddr offset, uint64_t value, unsigned size)
> +static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size,
> + uint8_t ring)
> {
> uint32_t cam;
> - uint32_t qw1w2;
> - uint64_t qw1dw1;
> + uint32_t w2;
> + uint64_t dw1;
> uint8_t nvp_blk;
> uint32_t nvp_idx;
> - bool vo;
> + bool v;
> bool do_restore;
>
> /* First update the thread context */
> switch (size) {
> case 4:
> cam = value;
> - qw1w2 = cpu_to_be32(cam);
> - memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1w2, 4);
> + w2 = cpu_to_be32(cam);
> + memcpy(&tctx->regs[ring + TM_WORD2], &w2, 4);
> break;
> case 8:
> cam = value >> 32;
> - qw1dw1 = cpu_to_be64(value);
> - memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1dw1, 8);
> + dw1 = cpu_to_be64(value);
> + memcpy(&tctx->regs[ring + TM_WORD2], &dw1, 8);
> break;
> default:
> g_assert_not_reached();
> }
>
> - xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &vo, &do_restore);
> + xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &v, &do_restore);
>
> /* Check the interrupt pending bits */
> - if (vo) {
> - xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, nvp_blk, nvp_idx,
> - do_restore);
> + if (v) {
> + xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
> + nvp_blk, nvp_idx, do_restore);
> }
> }
>
> +void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW1_OS);
> +}
> +
> +void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
> +}
> +
> /* returns -1 if ring is invalid, but still populates block and index */
> static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> uint8_t *nvp_blk, uint32_t *nvp_idx)
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index a91b99057c..c1ab06a55a 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -140,6 +140,8 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
> void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
^ permalink raw reply [flat|nested] 192+ messages in thread
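The CAM-line decode performed on a context push (xive2_cam_decode above, shared by the OS and POOL rings) can be sketched as follows. Bit 31 is the Valid (V) bit and bit 30 the save/restore (H) bit, consistent with the `(value & 0xc0) << 24` "V and H bits" mask used later in the series for the 1-byte PHYS push; the 4-bit block / 19-bit index split shown here is an assumption for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

/* Decode a 32-bit CAM word into NVP block/index plus the V and H flags.
 * Field widths for blk/idx are assumed for this sketch. */
static void cam_decode(uint32_t cam, uint8_t *blk, uint32_t *idx,
                       bool *valid, bool *hw)
{
    *valid = !!(cam & 0x80000000u);   /* V: context valid */
    *hw    = !!(cam & 0x40000000u);   /* H: HW save/restore */
    *blk   = (cam >> 19) & 0xf;       /* assumed block field */
    *idx   = cam & 0x7ffff;           /* assumed 19-bit index */
}
```

On a push, only when V is set does the router go on to check the NVP for backlogged interrupts (xive2_tctx_need_resend).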
* Re: [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push
2025-05-12 3:10 ` [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push Nicholas Piggin
2025-05-15 15:44 ` Mike Kowal
@ 2025-05-15 16:13 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:13 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> When pushing a context, any presented group interrupt should be
> redistributed before pending interrupts are processed, so that the
> highest-priority interrupt is presented.
>
> This can occur when pushing the POOL ring when the valid PHYS
> ring has a group interrupt presented, because they share signal
> registers.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 21cd07df68..392ac6077e 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -945,8 +945,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
> - uint8_t ipb;
> + uint8_t ipb, nsr = sig_regs[TM_NSR];
> Xive2Nvp nvp;
>
> /*
> @@ -978,6 +979,11 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
>
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
> + xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> + }
> xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> TM_QW3_HV_PHYS : ring);
> }
* Re: [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op
2025-05-12 3:10 ` [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op Nicholas Piggin
2025-05-15 15:49 ` Mike Kowal
@ 2025-05-15 16:14 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:14 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> xive2 must take into account redistribution of group interrupts if
> the VP directed priority exceeds the group interrupt priority after
> this operation. The xive1 code is not group-aware, so implement this
> for xive2.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 2 ++
> hw/intc/xive2.c | 28 ++++++++++++++++++++++++++++
> include/hw/ppc/xive2.h | 2 ++
> 3 files changed, 32 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 979031a587..dc64edf13d 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> /* MMIOs above 2K : special operations with side effects */
> { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG, 2, true, false,
> NULL, xive_tm_ack_os_reg },
> + { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING, 1, true, false,
> + xive2_tm_set_os_pending, NULL },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2, 4, true, false,
> NULL, xive2_tm_pull_os_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX, 4, true, false,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 392ac6077e..de1ccad685 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1323,6 +1323,34 @@ void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
> }
>
> +/*
> + * Adjust the IPB to allow a CPU to process event queues of other
> + * priorities during one physical interrupt cycle.
> + */
> +void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint8_t ring = TM_QW1_OS;
> + uint8_t *regs = &tctx->regs[ring];
> + uint8_t priority = value & 0xff;
> +
> + /*
> + * XXX: should this simply set a bit in IPB and wait for it to be picked
> + * up next cycle, or is it supposed to present it now? We implement the
> + * latter here.
> + */
> + regs[TM_IPB] |= xive_priority_to_ipb(priority);
> + if (xive_ipb_to_pipr(regs[TM_IPB]) >= regs[TM_PIPR]) {
> + return;
> + }
> + if (xive_nsr_indicates_group_exception(ring, regs[TM_NSR])) {
> + xive2_redistribute(xrtr, tctx, ring);
> + }
> +
> + xive_tctx_pipr_present(tctx, ring, priority, 0);
> +}
> +
> static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
> {
> uint8_t *regs = &tctx->regs[ring];
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index c1ab06a55a..45266c2a8b 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -130,6 +130,8 @@ void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> +void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
> uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
^ permalink raw reply [flat|nested] 192+ messages in thread
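The IPB/PIPR comparison in the set_os_pending hunk above relies on the priority-to-IPB and IPB-to-PIPR conversions. A hedged sketch of that relationship (simplified reimplementations mirroring the helper names, not the QEMU code):

```c
#include <stdint.h>

/* One bit per priority: priority 0 is the MSB (0x80), priority 7 the LSB. */
static uint8_t xive_priority_to_ipb(uint8_t priority)
{
    return priority > 7 ? 0 : 1 << (7 - priority);
}

/* PIPR is the best (lowest-numbered) pending priority recorded in the
 * IPB, or 0xff when nothing is pending. */
static uint8_t xive_ipb_to_pipr(uint8_t ipb)
{
    return ipb ? __builtin_clz((uint32_t)ipb << 24) : 0xff;
}
```

So after OR-ing the new priority into TM_IPB, `xive_ipb_to_pipr(regs[TM_IPB]) >= regs[TM_PIPR]` means the already-presented interrupt is at least as favored, and the op returns without re-presenting.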
* Re: [PATCH 47/50] ppc/xive2: Implement POOL LGS push TIMA op
2025-05-12 3:10 ` [PATCH 47/50] ppc/xive2: Implement POOL LGS push " Nicholas Piggin
2025-05-15 15:50 ` Mike Kowal
@ 2025-05-15 16:15 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:15 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Implement set LGS for the POOL ring.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index dc64edf13d..807a1c1c34 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -532,6 +532,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> xive_tctx_set_lgs(tctx, TM_QW1_OS, value & 0xff);
> }
>
> +static void xive_tm_set_pool_lgs(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive_tctx_set_lgs(tctx, TM_QW2_HV_POOL, value & 0xff);
> +}
> +
> /*
> * Adjust the PIPR to allow a CPU to process event queues of other
> * priorities during one physical interrupt cycle.
> @@ -737,6 +743,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive2_tm_push_pool_ctx, NULL },
> { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
> xive2_tm_push_pool_ctx, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_LGS, 1, true, true,
> + xive_tm_set_pool_lgs, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> xive2_tm_set_hv_cppr, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
* Re: [PATCH 48/50] ppc/xive2: Implement PHYS ring VP push TIMA op
2025-05-12 3:10 ` [PATCH 48/50] ppc/xive2: Implement PHYS ring VP " Nicholas Piggin
2025-05-15 15:50 ` Mike Kowal
@ 2025-05-15 16:16 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:16 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> Implement the phys (aka hard) VP push. PowerVM uses this operation.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 2 ++
> hw/intc/xive2.c | 11 +++++++++++
> include/hw/ppc/xive2.h | 2 ++
> 3 files changed, 15 insertions(+)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 807a1c1c34..69118999e6 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive_tm_set_pool_lgs, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR, 1, true, true,
> xive2_tm_set_hv_cppr, NULL },
> + { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
> + xive2_tm_push_phys_ctx, NULL },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
> NULL, xive_tm_vt_poll },
> { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T, 1, true, true,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index de1ccad685..a9b188b909 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1005,6 +1005,11 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> /* First update the thread context */
> switch (size) {
> + case 1:
> + tctx->regs[ring + TM_WORD2] = value & 0xff;
> + cam = xive2_tctx_hw_cam_line(xptr, tctx);
> + cam |= ((value & 0xc0) << 24); /* V and H bits */
> + break;
> case 4:
> cam = value;
> w2 = cpu_to_be32(cam);
> @@ -1040,6 +1045,12 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
> }
>
> +void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
> +}
> +
> /* returns -1 if ring is invalid, but still populates block and index */
> static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> uint8_t *nvp_blk, uint32_t *nvp_idx)
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 45266c2a8b..f4437e2c79 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -146,6 +146,8 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> +void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp
2025-05-12 3:10 ` [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp Nicholas Piggin
2025-05-15 15:57 ` Mike Kowal
@ 2025-05-15 16:16 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:16 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> This is needed by the next patch which will re-send on all lower
> rings when pushing a context.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive.c | 24 ++++++++++++------------
> hw/intc/xive2.c | 28 ++++++++++++++++------------
> 2 files changed, 28 insertions(+), 24 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 69118999e6..9ade9ec6c1 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -606,7 +606,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> return qw1w2;
> }
>
> -static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> +static void xive_tctx_restore_nvp(XiveRouter *xrtr, XiveTCTX *tctx,
> uint8_t nvt_blk, uint32_t nvt_idx)
> {
> XiveNVT nvt;
> @@ -632,16 +632,6 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
> uint8_t *regs = &tctx->regs[TM_QW1_OS];
> regs[TM_IPB] |= ipb;
> }
> -
> - /*
> - * Always call xive_tctx_recompute_from_ipb(). Even if there were no
> - * escalation triggered, there could be a pending interrupt which
> - * was saved when the context was pulled and that we need to take
> - * into account by recalculating the PIPR (which is not
> - * saved/restored).
> - * It will also raise the External interrupt signal if needed.
> - */
> - xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
> }
>
> /*
> @@ -663,7 +653,17 @@ static void xive_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> /* Check the interrupt pending bits */
> if (vo) {
> - xive_tctx_need_resend(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
> + xive_tctx_restore_nvp(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
> +
> + /*
> + * Always call xive_tctx_recompute_from_ipb(). Even if there were no
> + * escalation triggered, there could be a pending interrupt which
> + * was saved when the context was pulled and that we need to take
> + * into account by recalculating the PIPR (which is not
> + * saved/restored).
> + * It will also raise the External interrupt signal if needed.
> + */
> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
> }
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index a9b188b909..53e90b8178 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -940,14 +940,14 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
> return cppr;
> }
>
> -static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> +/* Restore TIMA VP context from NVP backlog */
> +static void xive2_tctx_restore_nvp(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t ring,
> uint8_t nvp_blk, uint32_t nvp_idx,
> bool do_restore)
> {
> - uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> uint8_t *regs = &tctx->regs[ring];
> - uint8_t ipb, nsr = sig_regs[TM_NSR];
> + uint8_t ipb;
> Xive2Nvp nvp;
>
> /*
> @@ -978,14 +978,6 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
> }
> /* IPB bits in the backlog are merged with the TIMA IPB bits */
> regs[TM_IPB] |= ipb;
> -
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - /* redistribute precluded active grp interrupt */
> - g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
> - xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> - }
> - xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> - TM_QW3_HV_PHYS : ring);
> }
>
> /*
> @@ -1028,8 +1020,20 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
>
> /* Check the interrupt pending bits */
> if (v) {
> - xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + xive2_tctx_restore_nvp(xrtr, tctx, ring,
> nvp_blk, nvp_idx, do_restore);
> +
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
> + xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> + }
> + xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> + TM_QW3_HV_PHYS : ring);
> }
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push
2025-05-12 3:10 ` [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push Nicholas Piggin
2025-05-15 15:54 ` Mike Kowal
@ 2025-05-15 16:17 ` Miles Glenn
1 sibling, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-15 16:17 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
On Mon, 2025-05-12 at 13:10 +1000, Nicholas Piggin wrote:
> When pushing a context, the lower-level context becomes valid if it
> had V=1, and so on. Iterate lower level contexts and send them
> pending interrupts if they become enabled.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 36 ++++++++++++++++++++++++++++--------
> 1 file changed, 28 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 53e90b8178..ded003fa87 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -995,6 +995,12 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> bool v;
> bool do_restore;
>
> + if (xive_ring_valid(tctx, ring)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Attempt to push VP to enabled"
> + " ring 0x%02x\n", ring);
> + return;
> + }
> +
> /* First update the thread context */
> switch (size) {
> case 1:
> @@ -1021,19 +1027,32 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> /* Check the interrupt pending bits */
> if (v) {
> Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> - uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
> - uint8_t nsr = sig_regs[TM_NSR];
> + uint8_t cur_ring;
>
> xive2_tctx_restore_nvp(xrtr, tctx, ring,
> nvp_blk, nvp_idx, do_restore);
>
> - if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - /* redistribute precluded active grp interrupt */
> - g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
> - xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
> + for (cur_ring = TM_QW1_OS; cur_ring <= ring;
> + cur_ring += XIVE_TM_RING_SIZE) {
> + uint8_t *sig_regs = xive_tctx_signal_regs(tctx, cur_ring);
> + uint8_t nsr = sig_regs[TM_NSR];
> +
> + if (!xive_ring_valid(tctx, cur_ring)) {
> + continue;
> + }
> +
> + if (cur_ring == TM_QW2_HV_POOL) {
> + if (xive_nsr_indicates_exception(cur_ring, nsr)) {
> + g_assert(xive_nsr_exception_ring(cur_ring, nsr) ==
> + TM_QW3_HV_PHYS);
> + xive2_redistribute(xrtr, tctx,
> + xive_nsr_exception_ring(ring, nsr));
> + }
> + xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
> + break;
> + }
> + xive2_tctx_process_pending(tctx, cur_ring);
> }
> - xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
> - TM_QW3_HV_PHYS : ring);
> }
> }
>
> @@ -1159,6 +1178,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
> int rc;
>
> g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
> + g_assert(sig_regs[TM_WORD2] & 0x80);
> g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
>
> /*
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting
2025-05-14 19:07 ` Mike Kowal
@ 2025-05-15 23:31 ` Nicholas Piggin
0 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-15 23:31 UTC (permalink / raw)
To: Mike Kowal, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On Thu May 15, 2025 at 5:07 AM AEST, Mike Kowal wrote:
>
> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>> Have xive_tctx_accept clear NSR in one shot rather than masking out bits
>> as they are tested, which makes it clear it's reset to 0, and does not
>> have a partial NSR value in the register.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> hw/intc/xive.c | 6 ++----
>> 1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index 6293ea4361..bb40a69c5b 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -68,13 +68,11 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
>> * If the interrupt was for a specific VP, reset the pending
>> * buffer bit, otherwise clear the logical server indicator
>> */
>> - if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
>> - regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
>> - } else {
>> + if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
>
>
> Any reason why you didn't just use the else? Regardless I am fine
> either way.
IIRC it was because the 'if' side goes away entirely, ends up
working better this way I think.
>
> Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
>
> Thanks MAK
Thanks,
Nick
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present
2025-05-14 19:54 ` Mike Kowal
@ 2025-05-15 23:40 ` Nicholas Piggin
0 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-15 23:40 UTC (permalink / raw)
To: Mike Kowal, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On Thu May 15, 2025 at 5:54 AM AEST, Mike Kowal wrote:
>
> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>> Have the match_nvt method only perform a TCTX match but not present
>> the interrupt; the caller presents it. This has no functional change, but
>> allows for more complicated presentation logic after matching.
>
>
> I always found the count meaningless since we do not support the XIVE
> Histogram...
Right, nothing gets done with it at the moment which is confusing.
We could remove it.
Histogram looks like an LRU-type selection as opposed to this
round-robin we do?
Thanks,
Nick
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
2025-05-14 20:10 ` Mike Kowal
2025-05-15 15:21 ` Mike Kowal
@ 2025-05-15 23:43 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-15 23:43 UTC (permalink / raw)
To: Mike Kowal, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On Thu May 15, 2025 at 6:10 AM AEST, Mike Kowal wrote:
>
> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>> xive_tctx_pipr_update() is used for multiple things. In an effort
>> to make things simpler and less overloaded, split out the function
>> that is used to present a new interrupt to the tctx.
>
>
> Why is this a separate commit from 30? The change here does not do
> anything different.
I think you meant 31.
You're right this one doesn't change any function and they could
be squashed. I added the API here, then made the fix to it in the
next patch, but it is a small enough change that it could have
easily been in one patch.
> Regardless, taken this patch set as a whole, it's good by me.
>
> Reviewed-by: Michael Kowal<kowal@linux.ibm.com>
Thanks,
Nick
>
> Thanks, MAK
>
>
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> hw/intc/xive.c | 8 +++++++-
>> hw/intc/xive2.c | 2 +-
>> include/hw/ppc/xive.h | 2 ++
>> 3 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index 27b5a21371..bf4c0634ca 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -225,6 +225,12 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> xive_tctx_notify(tctx, ring, group_level);
>> }
>>
>> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> + uint8_t group_level)
>> +{
>> + xive_tctx_pipr_update(tctx, ring, priority, group_level);
>> +}
>> +
>> /*
>> * XIVE Thread Interrupt Management Area (TIMA)
>> */
>> @@ -2040,7 +2046,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
>> xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
>> &match)) {
>> trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
>> - xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
>> + xive_tctx_pipr_present(match.tctx, match.ring, priority, 0);
>> return;
>> }
>>
>> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
>> index cae4092198..f91109b84a 100644
>> --- a/hw/intc/xive2.c
>> +++ b/hw/intc/xive2.c
>> @@ -1652,7 +1652,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
>>
>> group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
>> trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
>> - xive_tctx_pipr_update(tctx, ring, priority, group_level);
>> + xive_tctx_pipr_present(tctx, ring, priority, group_level);
>> return;
>> }
>>
>> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
>> index 8152a9df3d..0d6b11e818 100644
>> --- a/include/hw/ppc/xive.h
>> +++ b/include/hw/ppc/xive.h
>> @@ -562,6 +562,8 @@ void xive_tctx_reset(XiveTCTX *tctx);
>> void xive_tctx_destroy(XiveTCTX *tctx);
>> void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> uint8_t group_level);
>> +void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> + uint8_t group_level);
>> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
>> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
>> uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 32/50] ppc/xive: Split xive recompute from IPB function
2025-05-14 20:42 ` Mike Kowal
@ 2025-05-15 23:46 ` Nicholas Piggin
0 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-15 23:46 UTC (permalink / raw)
To: Mike Kowal, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On Thu May 15, 2025 at 6:42 AM AEST, Mike Kowal wrote:
>
> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>> Further split xive_tctx_pipr_update() by splitting out a new function
>> that is used to re-compute the PIPR from IPB. This is generally only
>> used with XIVE1, because group interrupts require more logic.
>
>
> Previous upstreaming was focused only on XIVE2 as not to impact users of
> XIVE1.
Yeah it's a balancing act. I didn't want xive1 to diverge too much in
basic APIs like the PIPR updating.
So long as powernv9 gets some basic OPAL testing, hopefully it should be
okay.
>
> But I assume this does not hurt anything.
>
> Reviewed-by: Michael Kowal<kowal@linux.ibm.com>
Thanks,
Nick
>
> Thanks, MAK
>
>
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> hw/intc/xive.c | 25 ++++++++++++++++++++++---
>> 1 file changed, 22 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
>> index 25f6c69c44..5ff1b8f024 100644
>> --- a/hw/intc/xive.c
>> +++ b/hw/intc/xive.c
>> @@ -225,6 +225,20 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> xive_tctx_notify(tctx, ring, group_level);
>> }
>>
>> +static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
>> +{
>> + /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
>> + uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
>> + uint8_t *aregs = &tctx->regs[alt_ring];
>> + uint8_t *regs = &tctx->regs[ring];
>> +
>> + /* Does not support a presented group interrupt */
>> + g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
>> +
>> + aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
>> + xive_tctx_notify(tctx, ring, 0);
>> +}
>> +
>> void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
>> uint8_t group_level)
>> {
>> @@ -517,7 +531,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
>> static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
>> hwaddr offset, uint64_t value, unsigned size)
>> {
>> - xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
>> + uint8_t ring = TM_QW1_OS;
>> + uint8_t *regs = &tctx->regs[ring];
>> +
>> + /* XXX: how should this work exactly? */
>> + regs[TM_IPB] |= xive_priority_to_ipb(value & 0xff);
>> + xive_tctx_pipr_recompute_from_ipb(tctx, ring);
>> }
>>
>> static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
>> @@ -601,14 +620,14 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
>> }
>>
>> /*
>> - * Always call xive_tctx_pipr_update(). Even if there were no
>> + * Always call xive_tctx_recompute_from_ipb(). Even if there were no
>> * escalation triggered, there could be a pending interrupt which
>> * was saved when the context was pulled and that we need to take
>> * into account by recalculating the PIPR (which is not
>> * saved/restored).
>> * It will also raise the External interrupt signal if needed.
>> */
>> - xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
>> + xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
>> }
>>
>> /*
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented
2025-05-15 15:16 ` Mike Kowal
@ 2025-05-15 23:50 ` Nicholas Piggin
0 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-15 23:50 UTC (permalink / raw)
To: Mike Kowal, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On Fri May 16, 2025 at 1:16 AM AEST, Mike Kowal wrote:
>
> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>> The relationship between an interrupt signaled in the TIMA and the QEMU
>> irq line to the processor to be 1:1, so they should be raised and
>
> ...needs to be...
>
>
>> lowered together and "just in case" lowering should be avoided (it could
>> mask
>
> I think you missed the rest of the line...
Thanks, good catch. I think it's supposed to be "could mask a bug in
the logic elsewhere".
I'll fix.
Thanks,
Nick
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
2025-05-15 15:21 ` Mike Kowal
@ 2025-05-15 23:51 ` Nicholas Piggin
0 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-15 23:51 UTC (permalink / raw)
To: Mike Kowal, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On Fri May 16, 2025 at 1:21 AM AEST, Mike Kowal wrote:
>
> On 5/14/2025 3:10 PM, Mike Kowal wrote:
>>
>> On 5/11/2025 10:10 PM, Nicholas Piggin wrote:
>>> xive_tctx_pipr_update() is used for multiple things. In an effort
>>> to make things simpler and less overloaded, split out the function
>>> that is used to present a new interrupt to the tctx.
>>
>>
>> Why is this a separate commit from 30? The change here does not do
>> anything different.
>> Regardless, taken this patch set as a whole, it's good by me.
>
>
> Okay, I see the rest of this is done in patch set 35...
Yeah, I split up the old API in several steps...
Thanks,
Nick
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 17/50] pnv/xive2: Support ESB Escalation
2025-05-12 3:10 ` [PATCH 17/50] pnv/xive2: Support ESB Escalation Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:00 ` Mike Kowal
@ 2025-05-16 0:05 ` Nicholas Piggin
2025-05-16 15:44 ` Miles Glenn
2 siblings, 1 reply; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:05 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin, Glenn Miles
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.vnet.ibm.com>
>
> Add support for XIVE ESB Interrupt Escalation.
>
> Suggested-by: Michael Kowal <kowal@linux.ibm.com>
> [This change was taken from a patch provided by Michael Kowal.]
> Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> ---
> hw/intc/xive2.c | 62 ++++++++++++++++++++++++++++++-------
> include/hw/ppc/xive2.h | 1 +
> include/hw/ppc/xive2_regs.h | 13 +++++---
> 3 files changed, 59 insertions(+), 17 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index de139dcfbf..0993e792cc 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1552,18 +1552,39 @@ do_escalation:
> }
> }
>
> - /*
> - * The END trigger becomes an Escalation trigger
> - */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + if (xive2_end_is_escalate_end(&end)) {
> + /*
> + * Perform END Adaptive escalation processing
> + * The END trigger becomes an Escalation trigger
> + */
> + xive2_router_end_notify(xrtr,
> + xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + } /* end END adaptive escalation */
> +
> + else {
> + uint32_t lisn; /* Logical Interrupt Source Number */
> +
> + /*
> + * Perform ESB escalation processing
> + * E[N] == 1 --> N
> + * Req[Block] <- E[ESB_Block]
> + * Req[Index] <- E[ESB_Index]
> + * Req[Offset] <- 0x000
> + * Execute <ESB Store> Req command
> + */
> + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
> +
> + xive2_notify(xrtr, lisn, true /* pq_checked */);
Sorry I forgot to squash in a fix for the issues here. These should be
_ESB_ constants not _END_, and we believe pq_checked should be false
here so the ESB state machine is run.
https://lore.kernel.org/qemu-devel/D8CFK7Z5AJF8.ALT8MMH6EYYT@gmail.com/
I think we took discussion offline after that but that was the
conclusion. I will sqash that fix in here. With that,
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
I also wonder if Mike should be author of this patch since
that's what the note indicates? Or co-author? Better give your
Signed-off-by too, if so.
Thanks,
Nick
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes
2025-05-12 3:10 ` [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:45 ` Mike Kowal
@ 2025-05-16 0:06 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:06 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The queue size of an Event Notification Descriptor (END)
> is determined by the 'cl' and QsZ fields of the END.
> If the cl field is 1, then the queue size (in bytes) will
> be the size of a cache line 128B * 2^QsZ and QsZ is limited
> to 4. Otherwise, it will be 4096B * 2^QsZ with QsZ limited
> to 12.
>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> Fixes: f8a233dedf2 ("ppc/xive2: Introduce a XIVE2 core framework")
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 25 +++++++++++++++++++------
> include/hw/ppc/xive2_regs.h | 1 +
> 2 files changed, 20 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 7d584dfafa..790152a2a6 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -188,12 +188,27 @@ void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
> (uint32_t) xive_get_field64(EAS2_END_DATA, eas->w));
> }
>
> +#define XIVE2_QSIZE_CHUNK_CL 128
> +#define XIVE2_QSIZE_CHUNK_4k 4096
> +/* Calculate max number of queue entries for an END */
> +static uint32_t xive2_end_get_qentries(Xive2End *end)
> +{
> + uint32_t w3 = end->w3;
> + uint32_t qsize = xive_get_field32(END2_W3_QSIZE, w3);
> + if (xive_get_field32(END2_W3_CL, w3)) {
> + g_assert(qsize <= 4);
> + return (XIVE2_QSIZE_CHUNK_CL << qsize) / sizeof(uint32_t);
> + } else {
> + g_assert(qsize <= 12);
> + return (XIVE2_QSIZE_CHUNK_4k << qsize) / sizeof(uint32_t);
> + }
> +}
> +
> void xive2_end_queue_pic_print_info(Xive2End *end, uint32_t width, GString *buf)
> {
> uint64_t qaddr_base = xive2_end_qaddr(end);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
> int i;
>
> /*
> @@ -223,8 +238,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
> uint64_t qaddr_base = xive2_end_qaddr(end);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
>
> uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
> uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
> @@ -341,13 +355,12 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, GString *buf)
> static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> {
> uint64_t qaddr_base = xive2_end_qaddr(end);
> - uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
> uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
> uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
>
> uint64_t qaddr = qaddr_base + (qindex << 2);
> uint32_t qdata = cpu_to_be32((qgen << 31) | (data & 0x7fffffff));
> - uint32_t qentries = 1 << (qsize + 10);
> + uint32_t qentries = xive2_end_get_qentries(end);
>
> if (dma_memory_write(&address_space_memory, qaddr, &qdata, sizeof(qdata),
> MEMTXATTRS_UNSPECIFIED)) {
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index b11395c563..3c28de8a30 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -87,6 +87,7 @@ typedef struct Xive2End {
> #define END2_W2_EQ_ADDR_HI PPC_BITMASK32(8, 31)
> uint32_t w3;
> #define END2_W3_EQ_ADDR_LO PPC_BITMASK32(0, 24)
> +#define END2_W3_CL PPC_BIT32(27)
> #define END2_W3_QSIZE PPC_BITMASK32(28, 31)
> uint32_t w4;
> #define END2_W4_END_BLOCK PPC_BITMASK32(4, 7)
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
` (2 preceding siblings ...)
2025-05-15 15:34 ` Miles Glenn
@ 2025-05-16 0:08 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:08 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> In a multi-chip environment there will be remote/forwarded VSDs. The check
> to find a matching INT controller (XIVE) for the remote block number was
> checking the INT's chip number, but block numbers are not tied to a chip
> number. The matching remote INT is the one whose MMIO BAR for the VSD's
> type matches the forwarded VSD address.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 25 +++++++++++++++++--------
> 1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d1713b406c..30b4ab2efe 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -102,12 +102,10 @@ static uint32_t pnv_xive2_block_id(PnvXive2 *xive)
> }
>
> /*
> - * Remote access to controllers. HW uses MMIOs. For now, a simple scan
> - * of the chips is good enough.
> - *
> - * TODO: Block scope support
> + * Remote access to INT controllers. HW uses MMIOs(?). For now, a simple
> + * scan of all the chips INT controller is good enough.
> */
> -static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> +static PnvXive2 *pnv_xive2_get_remote(uint32_t vsd_type, hwaddr fwd_addr)
> {
> PnvMachineState *pnv = PNV_MACHINE(qdev_get_machine());
> int i;
> @@ -116,10 +114,22 @@ static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
> Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
> PnvXive2 *xive = &chip10->xive;
>
> - if (pnv_xive2_block_id(xive) == blk) {
> + /*
> + * Is this the XIVE whose MMIO BAR for this VSD type matches the
> + * forwarded VSD address?
> + */
> + if ((vsd_type == VST_ESB && fwd_addr == xive->esb_base) ||
> + (vsd_type == VST_END && fwd_addr == xive->end_base) ||
> + ((vsd_type == VST_NVP ||
> + vsd_type == VST_NVG) && fwd_addr == xive->nvpg_base) ||
> + (vsd_type == VST_NVC && fwd_addr == xive->nvc_base)) {
> return xive;
> }
> }
> +
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "XIVE: >>>>> pnv_xive2_get_remote() vsd_type %u fwd_addr 0x%lX NOT FOUND\n",
> + vsd_type, fwd_addr);
> return NULL;
> }
>
> @@ -252,8 +262,7 @@ static uint64_t pnv_xive2_vst_addr(PnvXive2 *xive, uint32_t type, uint8_t blk,
>
> /* Remote VST access */
> if (GETFIELD(VSD_MODE, vsd) == VSD_MODE_FORWARD) {
> - xive = pnv_xive2_get_remote(blk);
> -
> + xive = pnv_xive2_get_remote(type, (vsd & VSD_ADDRESS_MASK));
> return xive ? pnv_xive2_vst_addr(xive, type, blk, idx) : 0;
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
` (2 preceding siblings ...)
2025-05-15 15:41 ` Miles Glenn
@ 2025-05-16 0:09 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:09 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> When the END Event Queue wraps, the END EQ Generation bit is flipped and
> the Generation Flipped bit is set to one. On an END cache watch read
> operation, the Generation Flipped bit needs to be reset.
>
> While debugging an error, the "END not valid" error messages were
> modified to include the method name, since they were all the same.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 3 ++-
> hw/intc/xive2.c | 4 ++--
> 2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 30b4ab2efe..72cdf0f20c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1325,10 +1325,11 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
> case VC_ENDC_WATCH3_DATA0:
> /*
> * Load DATA registers from cache with data requested by the
> - * SPEC register
> + * SPEC register. Clear gen_flipped bit in word 1.
> */
> watch_engine = (offset - VC_ENDC_WATCH0_DATA0) >> 6;
> pnv_xive2_end_cache_load(xive, watch_engine);
> + xive->vc_regs[reg] &= ~(uint64_t)END2_W1_GEN_FLIPPED;
> val = xive->vc_regs[reg];
> break;
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 4dd04a0398..453fe37f18 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -374,8 +374,8 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
> qgen ^= 1;
> end->w1 = xive_set_field32(END2_W1_GENERATION, end->w1, qgen);
>
> - /* TODO(PowerNV): reset GF bit on a cache watch operation */
> - end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, qgen);
> + /* Set gen flipped to 1, it gets reset on a cache watch operation */
> + end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, 1);
> }
> end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
> }
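The generation-flip behaviour in the patch above can be reduced to a toy event queue (a sketch with made-up names and sizes, not the QEMU data structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_EQ_SIZE 4

/* Toy END event queue: entries carry a generation bit that flips each
 * time the queue wraps; a "generation flipped" flag records the wrap
 * until a cache-watch read clears it (illustrative only). */
struct toy_eq {
    uint32_t entries[TOY_EQ_SIZE];
    unsigned index;    /* next slot to write */
    unsigned gen;      /* current generation, 0 or 1 */
    bool gen_flipped;  /* set on wrap, cleared by a cache-watch read */
};

static void toy_eq_enqueue(struct toy_eq *eq, uint32_t data)
{
    /* Tag the entry with the current generation in the top bit */
    eq->entries[eq->index] = ((uint32_t)eq->gen << 31) | data;
    eq->index = (eq->index + 1) % TOY_EQ_SIZE;
    if (eq->index == 0) {
        /* Queue wrapped: flip the generation and remember the flip */
        eq->gen ^= 1;
        eq->gen_flipped = true;
    }
}

/* A cache-watch read reports whether a wrap happened and resets the
 * indicator, mirroring the clearing of END2_W1_GEN_FLIPPED above. */
static bool toy_eq_cache_watch(struct toy_eq *eq)
{
    bool flipped = eq->gen_flipped;
    eq->gen_flipped = false;
    return flipped;
}
```

The point of the flag is that software (or here, the cache-watch reader) can detect a wrap it might otherwise have missed, then acknowledge it by reading.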
* Re: [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
` (2 preceding siblings ...)
2025-05-15 15:42 ` Miles Glenn
@ 2025-05-16 0:12 ` Nicholas Piggin
2025-05-16 16:22 ` Mike Kowal
3 siblings, 1 reply; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:12 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> The current xive algorithm for finding a matching group vCPU
> target always uses the first vCPU found. And, since it always
> starts the search with thread 0 of a core, thread 0 is almost
> always used to handle group interrupts. This can lead to additional
> interrupt latency and poor performance for interrupt intensive
> work loads.
>
> Changing this to use a simple round-robin algorithm for deciding which
> thread number to use when starting a search, which leads to a more
> distributed use of threads for handling group interrupts.
>
Does hardware always do the "histogram" distribution? I wonder if
there would be any performance benefit to do something like send
to an idle thread/core with preference. I guess the xive controller
might have a difficult time querying the state of a bunch of cores
before sending so it's probably not practical for real hardware.
In any case this is a nice improvement for group delivery.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> [npiggin: Also round-robin among threads, not just cores]
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 72cdf0f20c..d7ca97ecbb 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> int i, j;
> bool gen1_tima_os =
> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
> + static int next_start_core;
> + static int next_start_thread;
> + int start_core = next_start_core;
> + int start_thread = next_start_thread;
>
> for (i = 0; i < chip->nr_cores; i++) {
> - PnvCore *pc = chip->cores[i];
> + PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
> CPUCore *cc = CPU_CORE(pc);
>
> for (j = 0; j < cc->nr_threads; j++) {
> - PowerPCCPU *cpu = pc->threads[j];
> + /* Start search for match with different thread each call */
> + PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
> XiveTCTX *tctx;
> int ring;
>
> @@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
> if (!match->tctx) {
> match->ring = ring;
> match->tctx = tctx;
> +
> + next_start_thread = j + start_thread + 1;
> + if (next_start_thread >= cc->nr_threads) {
> + next_start_thread = 0;
> + next_start_core = i + start_core + 1;
> + if (next_start_core >= chip->nr_cores) {
> + next_start_core = 0;
> + }
> + }
> }
> count++;
> }
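The rotating start index used in the patch above can be sketched in isolation (NTHREADS and the eligibility array are invented for illustration; the real code rotates across both cores and threads):

```c
#include <assert.h>

#define NTHREADS 4

/* Persists across calls, like the static next_start_* in the patch */
static int next_start;

/* Scan threads starting one past the previous match, so interrupts
 * spread across threads instead of always landing on thread 0.
 * eligible[] marks threads that can take the interrupt; returns the
 * chosen thread or -1 (illustrative only). */
static int toy_pick_thread(const int eligible[NTHREADS])
{
    int start = next_start;
    for (int i = 0; i < NTHREADS; i++) {
        int t = (i + start) % NTHREADS;
        if (eligible[t]) {
            next_start = (t + 1) % NTHREADS; /* rotate for next search */
            return t;
        }
    }
    return -1; /* no match: next_start is left unchanged */
}
```

Because the rotation only advances on a match, an unsuccessful scan does not perturb the distribution.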
* Re: [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq
2025-05-12 3:10 ` [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
2025-05-14 18:52 ` Mike Kowal
@ 2025-05-16 0:12 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:12 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> A problem was seen where UART interrupts would be lost, resulting in the
> console hanging. Traces showed that a lower-priority interrupt was
> preempting a higher-priority interrupt, which would result in the
> higher-priority interrupt never being handled.
>
> The new interrupt's priority was being compared against the CPPR
> (Current Processor Priority Register) instead of the PIPR (Post
> Interrupt Priority Register), as was required by the XIVE spec.
> This allowed for a window between raising an interrupt and ACK'ing
> the interrupt where a lower-priority interrupt could slip in.
>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> Fixes: 26c55b99418 ("ppc/xive2: Process group backlog when updating the CPPR")
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 453fe37f18..2b4d0f51be 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1283,7 +1283,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
> * priority to know if the thread can take the interrupt now or if
> * it is precluded.
> */
> - if (priority < alt_regs[TM_CPPR]) {
> + if (priority < alt_regs[TM_PIPR]) {
> return false;
> }
> return true;
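The window described in the commit message can be demonstrated with a toy preclusion check (lower value means higher priority, as in XIVE; the function names are illustrative, not the QEMU API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Old check: compares the new interrupt only against CPPR, ignoring
 * a higher-priority interrupt already signalled but not yet ACKed. */
static bool precluded_old(uint8_t prio, uint8_t cppr, uint8_t pipr)
{
    (void)pipr;
    return prio >= cppr;
}

/* Fixed check: compares against PIPR, which tracks the priority of
 * the pending (post-interrupt) work, so a lower-priority interrupt
 * cannot slip in between raise and ACK. */
static bool precluded_new(uint8_t prio, uint8_t cppr, uint8_t pipr)
{
    (void)cppr;
    return prio >= pipr;
}
```

With CPPR wide open and a high-priority interrupt pending, the old check wrongly lets a lower-priority interrupt through.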
* Re: [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update
2025-05-12 3:10 ` [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
2025-05-14 18:53 ` Mike Kowal
@ 2025-05-16 0:15 ` Nicholas Piggin
2 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:15 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> According to the XIVE spec, updating the CPPR should also update the
> PIPR. The final value of the PIPR depends on other factors, but it
> should never be set to a value that is above the CPPR.
>
> Also added support for redistributing an active group interrupt when it
> is precluded as a result of changing the CPPR value.
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
The second paragraph in there I believe was my fault when splitting
the patch into this one and the previous. I will remove the second
paragraph from this changelog and with that,
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> hw/intc/xive2.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 2b4d0f51be..1971c05fa1 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -995,7 +995,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> }
> }
> }
> - regs[TM_PIPR] = pipr_min;
> +
> + /* PIPR should not be set to a value greater than CPPR */
> + regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
>
> rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
> if (rc) {
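The clamp added above amounts to taking the minimum of the computed PIPR and the new CPPR (a one-line sketch of the assignment; the helper name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* PIPR must never be set above CPPR (lower value = higher priority,
 * so "above" here means numerically greater): take the minimum,
 * mirroring the assignment in xive2_tctx_set_cppr() above. */
static uint8_t toy_clamp_pipr(uint8_t pipr_min, uint8_t cppr)
{
    return (pipr_min > cppr) ? cppr : pipr_min;
}
```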
* Re: [PATCH 18/50] pnv/xive2: Print value in invalid register write logging
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
` (2 preceding siblings ...)
2025-05-15 15:50 ` Miles Glenn
@ 2025-05-16 0:15 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:15 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> This can make it easier to see what the target system is trying to
> do.
>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> [npiggin: split from larger patch]
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 24 ++++++++++++++++--------
> 1 file changed, 16 insertions(+), 8 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index d7ca97ecbb..fcf5b2e75c 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1197,7 +1197,8 @@ static void pnv_xive2_ic_cq_write(void *opaque, hwaddr offset,
> case CQ_FIRMASK_OR: /* FIR error reporting */
> break;
> default:
> - xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx, offset);
> + xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1495,7 +1496,8 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "VC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "VC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1703,7 +1705,8 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "PC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "PC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
> @@ -1790,7 +1793,8 @@ static void pnv_xive2_ic_tctxt_write(void *opaque, hwaddr offset,
> xive->tctxt_regs[reg] = val;
> break;
> default:
> - xive2_error(xive, "TCTXT: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "TCTXT: invalid write @0x%"HWADDR_PRIx
> + " data 0x%"PRIx64, offset, val);
> return;
> }
> }
> @@ -1861,7 +1865,8 @@ static void pnv_xive2_xscom_write(void *opaque, hwaddr offset,
> pnv_xive2_ic_tctxt_write(opaque, mmio_offset, val, size);
> break;
> default:
> - xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1929,7 +1934,8 @@ static void pnv_xive2_ic_notify_write(void *opaque, hwaddr offset,
> break;
>
> default:
> - xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx
> + " value 0x%"PRIx64, offset, val);
> }
> }
>
> @@ -1971,7 +1977,8 @@ static void pnv_xive2_ic_lsi_write(void *opaque, hwaddr offset,
> {
> PnvXive2 *xive = PNV_XIVE2(opaque);
>
> - xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> }
>
> static const MemoryRegionOps pnv_xive2_ic_lsi_ops = {
> @@ -2074,7 +2081,8 @@ static void pnv_xive2_ic_sync_write(void *opaque, hwaddr offset,
> inject_type = PNV_XIVE2_QUEUE_NXC_ST_RMT_CI;
> break;
> default:
> - xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx, offset);
> + xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
> + offset, val);
> return;
> }
>
* Re: [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
` (2 preceding siblings ...)
2025-05-15 15:52 ` Miles Glenn
@ 2025-05-16 0:18 ` Nicholas Piggin
3 siblings, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:18 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Michael Kowal <kowal@linux.ibm.com>
>
> Writes to the Flush Control registers were logged as invalid even
> though they are allowed. Clearing the unsupported want_cache_disable
> feature is supported, so don't log an error in that case.
I guess there are no other fields in here that should be warned about
attempting to set.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 36 ++++++++++++++++++++++++++++++++----
> 1 file changed, 32 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index 3c26cd6b77..c9374f0eee 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -1411,7 +1411,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * ESB cache updates (not modeled)
> */
> - /* case VC_ESBC_FLUSH_CTRL: */
> + case VC_ESBC_FLUSH_CTRL:
> + if (val & VC_ESBC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ESBC_FLUSH_POLL:
> xive->vc_regs[VC_ESBC_FLUSH_CTRL >> 3] |= VC_ESBC_FLUSH_CTRL_POLL_VALID;
> /* ESB update */
> @@ -1427,7 +1434,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> /*
> * EAS cache updates (not modeled)
> */
> - /* case VC_EASC_FLUSH_CTRL: */
> + case VC_EASC_FLUSH_CTRL:
> + if (val & VC_EASC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_EASC_FLUSH_POLL:
> xive->vc_regs[VC_EASC_FLUSH_CTRL >> 3] |= VC_EASC_FLUSH_CTRL_POLL_VALID;
> /* EAS update */
> @@ -1466,7 +1480,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
> break;
>
>
> - /* case VC_ENDC_FLUSH_CTRL: */
> + case VC_ENDC_FLUSH_CTRL:
> + if (val & VC_ENDC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> + xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case VC_ENDC_FLUSH_POLL:
> xive->vc_regs[VC_ENDC_FLUSH_CTRL >> 3] |= VC_ENDC_FLUSH_CTRL_POLL_VALID;
> break;
> @@ -1687,7 +1708,14 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
> pnv_xive2_nxc_update(xive, watch_engine);
> break;
>
> - /* case PC_NXC_FLUSH_CTRL: */
> + case PC_NXC_FLUSH_CTRL:
> + if (val & PC_NXC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
> +        xive2_error(xive, "PC: unsupported write @0x%"HWADDR_PRIx
> + " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
> + offset, val);
> + return;
> + }
> + break;
> case PC_NXC_FLUSH_POLL:
> xive->pc_regs[PC_NXC_FLUSH_CTRL >> 3] |= PC_NXC_FLUSH_CTRL_POLL_VALID;
> break;
* Re: [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags
2025-05-12 3:10 ` [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags Nicholas Piggin
2025-05-14 19:41 ` Mike Kowal
@ 2025-05-16 0:18 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:18 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Adds support for extracting additional configuration flags from
> the XIVE configuration register that are needed for redistribution
> of group interrupts.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/pnv_xive2.c | 16 ++++++++++++----
> hw/intc/pnv_xive2_regs.h | 1 +
> include/hw/ppc/xive2.h | 8 +++++---
> 3 files changed, 18 insertions(+), 7 deletions(-)
>
> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
> index c9374f0eee..96b8851b7e 100644
> --- a/hw/intc/pnv_xive2.c
> +++ b/hw/intc/pnv_xive2.c
> @@ -605,20 +605,28 @@ static uint32_t pnv_xive2_get_config(Xive2Router *xrtr)
> {
> PnvXive2 *xive = PNV_XIVE2(xrtr);
> uint32_t cfg = 0;
> + uint64_t reg = xive->cq_regs[CQ_XIVE_CFG >> 3];
>
> - if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS) {
> + if (reg & CQ_XIVE_CFG_GEN1_TIMA_OS) {
> cfg |= XIVE2_GEN1_TIMA_OS;
> }
>
> - if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
> + if (reg & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
> cfg |= XIVE2_VP_SAVE_RESTORE;
> }
>
> - if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE,
> - xive->cq_regs[CQ_XIVE_CFG >> 3]) == CQ_XIVE_CFG_THREADID_8BITS) {
> + if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE, reg) ==
> + CQ_XIVE_CFG_THREADID_8BITS) {
> cfg |= XIVE2_THREADID_8BITS;
> }
>
> + if (reg & CQ_XIVE_CFG_EN_VP_GRP_PRIORITY) {
> + cfg |= XIVE2_EN_VP_GRP_PRIORITY;
> + }
> +
> + cfg = SETFIELD(XIVE2_VP_INT_PRIO, cfg,
> + GETFIELD(CQ_XIVE_CFG_VP_INT_PRIO, reg));
> +
> return cfg;
> }
>
> diff --git a/hw/intc/pnv_xive2_regs.h b/hw/intc/pnv_xive2_regs.h
> index e8b87b3d2c..d53300f709 100644
> --- a/hw/intc/pnv_xive2_regs.h
> +++ b/hw/intc/pnv_xive2_regs.h
> @@ -66,6 +66,7 @@
> #define CQ_XIVE_CFG_GEN1_TIMA_HYP_BLK0 PPC_BIT(26) /* 0 if bit[25]=0 */
> #define CQ_XIVE_CFG_GEN1_TIMA_CROWD_DIS PPC_BIT(27) /* 0 if bit[25]=0 */
> #define CQ_XIVE_CFG_GEN1_END_ESX PPC_BIT(28)
> +#define CQ_XIVE_CFG_EN_VP_GRP_PRIORITY PPC_BIT(32) /* 0 if bit[25]=1 */
> #define CQ_XIVE_CFG_EN_VP_SAVE_RESTORE PPC_BIT(38) /* 0 if bit[25]=1 */
> #define CQ_XIVE_CFG_EN_VP_SAVE_REST_STRICT PPC_BIT(39) /* 0 if bit[25]=1 */
>
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 2436ddb5e5..760b94a962 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -29,9 +29,11 @@ OBJECT_DECLARE_TYPE(Xive2Router, Xive2RouterClass, XIVE2_ROUTER);
> * Configuration flags
> */
>
> -#define XIVE2_GEN1_TIMA_OS 0x00000001
> -#define XIVE2_VP_SAVE_RESTORE 0x00000002
> -#define XIVE2_THREADID_8BITS 0x00000004
> +#define XIVE2_GEN1_TIMA_OS 0x00000001
> +#define XIVE2_VP_SAVE_RESTORE 0x00000002
> +#define XIVE2_THREADID_8BITS 0x00000004
> +#define XIVE2_EN_VP_GRP_PRIORITY 0x00000008
> +#define XIVE2_VP_INT_PRIO 0x00000030
>
> typedef struct Xive2RouterClass {
> SysBusDeviceClass parent;
* Re: [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts
2025-05-12 3:10 ` [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts Nicholas Piggin
2025-05-14 19:42 ` Mike Kowal
@ 2025-05-16 0:19 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:19 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> When an XIVE context is pulled while it has an active, unacknowledged
> group interrupt, XIVE will check to see if a context on another thread
> can handle the interrupt and, if so, notify that context. If there
> are no contexts that can handle the interrupt, then the interrupt is
> added to a backlog and XIVE will attempt to escalate the interrupt,
> if configured to do so, allowing the higher privileged handler to
> activate a context that can handle the original interrupt.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 84 +++++++++++++++++++++++++++++++++++--
> include/hw/ppc/xive2_regs.h | 3 ++
> 2 files changed, 83 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 0993e792cc..34fc561c9c 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -19,6 +19,10 @@
> #include "hw/ppc/xive2_regs.h"
> #include "trace.h"
>
> +static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> + uint32_t end_idx, uint32_t end_data,
> + bool redistribute);
> +
> uint32_t xive2_router_get_config(Xive2Router *xrtr)
> {
> Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
> @@ -597,6 +601,68 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
> }
>
> +static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> + uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
> +{
> + uint8_t nsr = tctx->regs[ring + TM_NSR];
> + uint8_t crowd = NVx_CROWD_LVL(nsr);
> + uint8_t group = NVx_GROUP_LVL(nsr);
> + uint8_t nvgc_blk;
> + uint8_t nvgc_idx;
> + uint8_t end_blk;
> + uint32_t end_idx;
> + uint8_t pipr = tctx->regs[ring + TM_PIPR];
> + Xive2Nvgc nvgc;
> + uint8_t prio_limit;
> + uint32_t cfg;
> +
> + /* convert crowd/group to blk/idx */
> + if (group > 0) {
> + nvgc_idx = (nvp_idx & (0xffffffff << group)) |
> + ((1 << (group - 1)) - 1);
> + } else {
> + nvgc_idx = nvp_idx;
> + }
> +
> + if (crowd > 0) {
> + crowd = (crowd == 3) ? 4 : crowd;
> + nvgc_blk = (nvp_blk & (0xffffffff << crowd)) |
> + ((1 << (crowd - 1)) - 1);
> + } else {
> + nvgc_blk = nvp_blk;
> + }
> +
> + /* Use blk/idx to retrieve the NVGC */
> + if (xive2_router_get_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, &nvgc)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
> + crowd ? "NVC" : "NVG", nvgc_blk, nvgc_idx);
> + return;
> + }
> +
> + /* retrieve the END blk/idx from the NVGC */
> + end_blk = xive_get_field32(NVGC2_W1_END_BLK, nvgc.w1);
> + end_idx = xive_get_field32(NVGC2_W1_END_IDX, nvgc.w1);
> +
> + /* determine number of priorities being used */
> + cfg = xive2_router_get_config(xrtr);
> + if (cfg & XIVE2_EN_VP_GRP_PRIORITY) {
> + prio_limit = 1 << GETFIELD(NVGC2_W1_PSIZE, nvgc.w1);
> + } else {
> + prio_limit = 1 << GETFIELD(XIVE2_VP_INT_PRIO, cfg);
> + }
> +
> + /* add priority offset to end index */
> + end_idx += pipr % prio_limit;
> +
> + /* trigger the group END */
> + xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
> +
> + /* clear interrupt indication for the context */
> + tctx->regs[ring + TM_NSR] = 0;
> + tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
> + xive_tctx_reset_signal(tctx, ring);
> +}
> +
> static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, unsigned size, uint8_t ring)
> {
> @@ -608,6 +674,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> uint8_t cur_ring;
> bool valid;
> bool do_save;
> + uint8_t nsr;
>
> xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
>
> @@ -624,6 +691,12 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
> }
>
> + /* Active group/crowd interrupts need to be redistributed */
> + nsr = tctx->regs[ring + TM_NSR];
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
> + }
> +
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
> }
> @@ -1352,7 +1425,8 @@ static bool xive2_router_end_es_notify(Xive2Router *xrtr, uint8_t end_blk,
> * message has the same parameters than in the function below.
> */
> static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> - uint32_t end_idx, uint32_t end_data)
> + uint32_t end_idx, uint32_t end_data,
> + bool redistribute)
> {
> Xive2End end;
> uint8_t priority;
> @@ -1380,7 +1454,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> return;
> }
>
> - if (xive2_end_is_enqueue(&end)) {
> + if (!redistribute && xive2_end_is_enqueue(&end)) {
> xive2_end_enqueue(&end, end_data);
> /* Enqueuing event data modifies the EQ toggle and index */
> xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
> @@ -1560,7 +1634,8 @@ do_escalation:
> xive2_router_end_notify(xrtr,
> xive_get_field32(END2_W4_END_BLOCK, end.w4),
> xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> + xive_get_field32(END2_W5_ESC_END_DATA, end.w5),
> + false);
> } /* end END adaptive escalation */
>
> else {
> @@ -1641,7 +1716,8 @@ void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
> xive2_router_end_notify(xrtr,
> xive_get_field64(EAS2_END_BLOCK, eas.w),
> xive_get_field64(EAS2_END_INDEX, eas.w),
> - xive_get_field64(EAS2_END_DATA, eas.w));
> + xive_get_field64(EAS2_END_DATA, eas.w),
> + false);
> return;
> }
>
> diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
> index 2c535ec0d0..e222038143 100644
> --- a/include/hw/ppc/xive2_regs.h
> +++ b/include/hw/ppc/xive2_regs.h
> @@ -224,6 +224,9 @@ typedef struct Xive2Nvgc {
> #define NVGC2_W0_VALID PPC_BIT32(0)
> #define NVGC2_W0_PGONEXT PPC_BITMASK32(26, 31)
> uint32_t w1;
> +#define NVGC2_W1_PSIZE PPC_BITMASK32(0, 1)
> +#define NVGC2_W1_END_BLK PPC_BITMASK32(4, 7)
> +#define NVGC2_W1_END_IDX PPC_BITMASK32(8, 31)
> uint32_t w2;
> uint32_t w3;
> uint32_t w4;
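The crowd/group to block/index conversion in xive2_redistribute() above clears the low `group` bits of the NVP index and then sets the low `group - 1` bits. A standalone sketch of the same arithmetic (helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Derive the NVGC index from an NVP index and a group level, as in
 * the patch above: clear the low `group` bits of the index, then
 * fill in (group - 1) low one-bits (sketch only). */
static uint32_t toy_nvgc_idx(uint32_t nvp_idx, uint8_t group)
{
    if (group == 0) {
        return nvp_idx; /* no grouping: index passes through */
    }
    return (nvp_idx & (0xffffffffu << group)) | ((1u << (group - 1)) - 1);
}
```

The crowd-to-block conversion in the patch follows the same shape, with the extra quirk that a crowd level of 3 is treated as 4.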
* Re: [PATCH 23/50] ppc/xive: Add more interrupt notification tracing
2025-05-12 3:10 ` [PATCH 23/50] ppc/xive: Add more interrupt notification tracing Nicholas Piggin
2025-05-14 19:46 ` Mike Kowal
@ 2025-05-16 0:19 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:19 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Add more tracing around notification, redistribution, and escalation.
>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/trace-events | 6 ++++++
> hw/intc/xive.c | 3 +++
> hw/intc/xive2.c | 13 ++++++++-----
> 3 files changed, 17 insertions(+), 5 deletions(-)
>
> diff --git a/hw/intc/trace-events b/hw/intc/trace-events
> index f77f9733c9..9eca0925b6 100644
> --- a/hw/intc/trace-events
> +++ b/hw/intc/trace-events
> @@ -279,6 +279,8 @@ xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_
> xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
> xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
> +xive_source_notify(uint32_t srcno) "Processing notification for queued IRQ 0x%x"
> +xive_source_blocked(uint32_t srcno) "No action needed for IRQ 0x%x currently"
> xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
> xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
> xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
> @@ -289,6 +291,10 @@ xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x
> # xive2.c
> xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
> xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
> +xive_redistribute(uint32_t index, uint8_t ring, uint8_t end_blk, uint32_t end_idx) "Redistribute from target=%d ring=0x%x NVP 0x%x/0x%x"
> +xive_end_enqueue(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "Queue event for END 0x%x/0x%x data=0x%x"
> +xive_escalate_end(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t esc_data) "Escalate from END 0x%x/0x%x to END 0x%x/0x%x data=0x%x"
> +xive_escalate_esb(uint8_t end_blk, uint32_t end_idx, uint32_t lisn) "Escalate from END 0x%x/0x%x to LISN=0x%x"
>
> # pnv_xive.c
> pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 1a94642c62..7461dbecb8 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -1276,6 +1276,7 @@ static uint64_t xive_source_esb_read(void *opaque, hwaddr addr, unsigned size)
>
> /* Forward the source event notification for routing */
> if (ret) {
> + trace_xive_source_notify(srcno);
> xive_source_notify(xsrc, srcno);
> }
> break;
> @@ -1371,6 +1372,8 @@ out:
> /* Forward the source event notification for routing */
> if (notify) {
> xive_source_notify(xsrc, srcno);
> + } else {
> + trace_xive_source_blocked(srcno);
> }
> }
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 34fc561c9c..968b698677 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -616,6 +616,7 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> uint8_t prio_limit;
> uint32_t cfg;
>
> + trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
> /* convert crowd/group to blk/idx */
> if (group > 0) {
> nvgc_idx = (nvp_idx & (0xffffffff << group)) |
> @@ -1455,6 +1456,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
> }
>
> if (!redistribute && xive2_end_is_enqueue(&end)) {
> + trace_xive_end_enqueue(end_blk, end_idx, end_data);
> xive2_end_enqueue(&end, end_data);
> /* Enqueuing event data modifies the EQ toggle and index */
> xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
> @@ -1631,11 +1633,11 @@ do_escalation:
> * Perform END Adaptive escalation processing
> * The END trigger becomes an Escalation trigger
> */
> - xive2_router_end_notify(xrtr,
> - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> - xive_get_field32(END2_W5_ESC_END_DATA, end.w5),
> - false);
> + uint8_t esc_blk = xive_get_field32(END2_W4_END_BLOCK, end.w4);
> + uint32_t esc_idx = xive_get_field32(END2_W4_ESC_END_INDEX, end.w4);
> + uint32_t esc_data = xive_get_field32(END2_W5_ESC_END_DATA, end.w5);
> + trace_xive_escalate_end(end_blk, end_idx, esc_blk, esc_idx, esc_data);
> + xive2_router_end_notify(xrtr, esc_blk, esc_idx, esc_data, false);
> } /* end END adaptive escalation */
>
> else {
> @@ -1652,6 +1654,7 @@ do_escalation:
> lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
>
> + trace_xive_escalate_esb(end_blk, end_idx, lisn);
> xive2_notify(xrtr, lisn, true /* pq_checked */);
> }
>
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 24/50] ppc/xive2: Improve pool regs variable name
2025-05-12 3:10 ` [PATCH 24/50] ppc/xive2: Improve pool regs variable name Nicholas Piggin
2025-05-14 19:47 ` Mike Kowal
@ 2025-05-16 0:19 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:19 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Change pregs to pool_regs, for clarity.
>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
> [npiggin: split from larger patch]
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 11 +++++------
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 968b698677..ec4b9320b4 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1044,13 +1044,12 @@ again:
>
> /* PHYS updates also depend on POOL values */
> if (ring == TM_QW3_HV_PHYS) {
> - uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
> + uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
>
> /* POOL values only matter if POOL ctx is valid */
> - if (pregs[TM_WORD2] & 0x80) {
> -
> - uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
> - uint8_t pool_lsmfb = pregs[TM_LSMFB];
> + if (pool_regs[TM_WORD2] & 0x80) {
> + uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
> + uint8_t pool_lsmfb = pool_regs[TM_LSMFB];
>
> /*
> * Determine highest priority interrupt and
> @@ -1064,7 +1063,7 @@ again:
> }
>
> /* Values needed for group priority calculation */
> - if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
> + if (pool_regs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
> group_enabled = true;
> lsmfb_min = pool_lsmfb;
> if (lsmfb_min < pipr_min) {
* Re: [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
2025-05-12 3:10 ` [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Nicholas Piggin
2025-05-14 19:48 ` Mike Kowal
@ 2025-05-16 0:20 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:20 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Booting AIX in a PowerVM partition requires the use of the "Acknowledge
> O/S Interrupt to even O/S reporting line" special operation provided by
> the IBM XIVE interrupt controller. This operation is invoked by writing
> a byte (data is irrelevant) to offset 0xC10 of the Thread Interrupt
> Management Area (TIMA). It can be used by software to notify the XIVE
> logic that the interrupt was received.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive.c | 8 ++++---
> hw/intc/xive2.c | 50 ++++++++++++++++++++++++++++++++++++++++++
> include/hw/ppc/xive.h | 1 +
> include/hw/ppc/xive2.h | 3 ++-
> 4 files changed, 58 insertions(+), 4 deletions(-)
>
> diff --git a/hw/intc/xive.c b/hw/intc/xive.c
> index 7461dbecb8..9ec1193dfc 100644
> --- a/hw/intc/xive.c
> +++ b/hw/intc/xive.c
> @@ -80,7 +80,7 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
> }
> }
>
> -static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> +uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
> {
> uint8_t *regs = &tctx->regs[ring];
> uint8_t nsr = regs[TM_NSR];
> @@ -340,14 +340,14 @@ static uint64_t xive_tm_vt_poll(XivePresenter *xptr, XiveTCTX *tctx,
>
> static const uint8_t xive_tm_hw_view[] = {
> 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-0 User */
> - 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-1 OS */
> + 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 3, /* QW-1 OS */
> 0, 0, 3, 3, 0, 3, 3, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-2 POOL */
> 3, 3, 3, 3, 0, 3, 0, 2, 3, 0, 0, 3, 3, 3, 3, 0, /* QW-3 PHYS */
> };
>
> static const uint8_t xive_tm_hv_view[] = {
> 3, 0, 0, 0, 0, 0, 0, 0, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-0 User */
> - 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 0, /* QW-1 OS */
> + 3, 3, 3, 3, 3, 3, 0, 2, 3, 3, 3, 3, 0, 0, 0, 3, /* QW-1 OS */
> 0, 0, 3, 3, 0, 3, 3, 0, 0, 3, 3, 3, 0, 0, 0, 0, /* QW-2 POOL */
> 3, 3, 3, 3, 0, 3, 0, 2, 3, 0, 0, 3, 0, 0, 0, 0, /* QW-3 PHYS */
> };
> @@ -718,6 +718,8 @@ static const XiveTmOp xive2_tm_operations[] = {
> xive_tm_pull_phys_ctx },
> { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL, 1, xive2_tm_pull_phys_ctx_ol,
> NULL },
> + { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL, 1, xive2_tm_ack_os_el,
> + NULL },
> };
>
> static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index ec4b9320b4..68be138335 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -1009,6 +1009,56 @@ static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
> return 0;
> }
>
> +static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
> + uint8_t ring, uint8_t cl_ring)
> +{
> + uint64_t rd;
> + Xive2Router *xrtr = XIVE2_ROUTER(xptr);
> + uint32_t nvp_blk, nvp_idx, xive2_cfg;
> + Xive2Nvp nvp;
> + uint64_t phys_addr;
> + uint8_t OGen = 0;
> +
> + xive2_tctx_get_nvp_indexes(tctx, cl_ring, &nvp_blk, &nvp_idx);
> +
> + if (xive2_router_get_nvp(xrtr, (uint8_t)nvp_blk, nvp_idx, &nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> + if (!xive2_nvp_is_valid(&nvp)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
> + nvp_blk, nvp_idx);
> + return;
> + }
> +
> +
> + rd = xive_tctx_accept(tctx, ring);
> +
> + if (ring == TM_QW1_OS) {
> + OGen = tctx->regs[ring + TM_OGEN];
> + }
> + xive2_cfg = xive2_router_get_config(xrtr);
> + phys_addr = xive2_nvp_reporting_addr(&nvp);
> + uint8_t report_data[REPORT_LINE_GEN1_SIZE];
> + memset(report_data, 0xff, sizeof(report_data));
> + if ((OGen == 1) || (xive2_cfg & XIVE2_GEN1_TIMA_OS)) {
> + report_data[8] = (rd >> 8) & 0xff;
> + report_data[9] = rd & 0xff;
> + } else {
> + report_data[0] = (rd >> 8) & 0xff;
> + report_data[1] = rd & 0xff;
> + }
> + cpu_physical_memory_write(phys_addr, report_data, REPORT_LINE_GEN1_SIZE);
> +}
> +
> +void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size)
> +{
> + xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
> +}
> +
> static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
> index 28f0f1b79a..46d05d74fb 100644
> --- a/include/hw/ppc/xive.h
> +++ b/include/hw/ppc/xive.h
> @@ -561,6 +561,7 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
> uint8_t group_level);
> void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
> void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
> +uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
>
> /*
> * KVM XIVE device helpers
> diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
> index 760b94a962..ff02ce2549 100644
> --- a/include/hw/ppc/xive2.h
> +++ b/include/hw/ppc/xive2.h
> @@ -142,5 +142,6 @@ void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
> hwaddr offset, uint64_t value, unsigned size);
> -
> +void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> + hwaddr offset, uint64_t value, unsigned size);
> #endif /* PPC_XIVE2_H */
* Re: [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update
2025-05-12 3:10 ` [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Nicholas Piggin
2025-05-14 19:48 ` Mike Kowal
@ 2025-05-16 0:20 ` Nicholas Piggin
1 sibling, 0 replies; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 0:20 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> From: Glenn Miles <milesg@linux.ibm.com>
>
> Add support for redistributing a presented group interrupt if it
> is precluded as a result of changing the CPPR value. Without this,
> group interrupts can be lost.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
> ---
> hw/intc/xive2.c | 82 ++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 60 insertions(+), 22 deletions(-)
>
> diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> index 68be138335..92dbbad8d4 100644
> --- a/hw/intc/xive2.c
> +++ b/hw/intc/xive2.c
> @@ -601,20 +601,37 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
> return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
> }
>
> -static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> - uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
> +static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
> {
> - uint8_t nsr = tctx->regs[ring + TM_NSR];
> + uint8_t *regs = &tctx->regs[ring];
> + uint8_t nsr = regs[TM_NSR];
> + uint8_t pipr = regs[TM_PIPR];
> uint8_t crowd = NVx_CROWD_LVL(nsr);
> uint8_t group = NVx_GROUP_LVL(nsr);
> - uint8_t nvgc_blk;
> - uint8_t nvgc_idx;
> - uint8_t end_blk;
> - uint32_t end_idx;
> - uint8_t pipr = tctx->regs[ring + TM_PIPR];
> + uint8_t nvgc_blk, end_blk, nvp_blk;
> + uint32_t nvgc_idx, end_idx, nvp_idx;
> Xive2Nvgc nvgc;
> uint8_t prio_limit;
> uint32_t cfg;
> + uint8_t alt_ring;
> + uint32_t target_ringw2;
> + uint32_t cam;
> + bool valid;
> + bool hw;
> +
> + /* redistribution is only for group/crowd interrupts */
> + if (!xive_nsr_indicates_group_exception(ring, nsr)) {
> + return;
> + }
> +
> + alt_ring = xive_nsr_exception_ring(ring, nsr);
> + target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
> + cam = be32_to_cpu(target_ringw2);
> +
> + /* extract nvp block and index from targeted ring's cam */
> + xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
> +
> + trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
>
> trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
> /* convert crowd/group to blk/idx */
> @@ -659,8 +676,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
> xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
>
> /* clear interrupt indication for the context */
> - tctx->regs[ring + TM_NSR] = 0;
> - tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
> + regs[TM_NSR] = 0;
> + regs[TM_PIPR] = regs[TM_CPPR];
> xive_tctx_reset_signal(tctx, ring);
> }
>
> @@ -695,7 +712,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
> /* Active group/crowd interrupts need to be redistributed */
> nsr = tctx->regs[ring + TM_NSR];
> if (xive_nsr_indicates_group_exception(ring, nsr)) {
> - xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
> + xive2_redistribute(xrtr, tctx, ring);
> }
>
> if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
> @@ -1059,6 +1076,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
> xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
> }
>
> +/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
> static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> {
> uint8_t *regs = &tctx->regs[ring];
> @@ -1069,10 +1087,11 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> uint32_t nvp_blk, nvp_idx;
> Xive2Nvp nvp;
> int rc;
> + uint8_t nsr = regs[TM_NSR];
>
> trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
> regs[TM_IPB], regs[TM_PIPR],
> - cppr, regs[TM_NSR]);
> + cppr, nsr);
>
> if (cppr > XIVE_PRIORITY_MAX) {
> cppr = 0xff;
> @@ -1081,6 +1100,35 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
> old_cppr = regs[TM_CPPR];
> regs[TM_CPPR] = cppr;
>
> + /* Handle increased CPPR priority (lower value) */
> + if (cppr < old_cppr) {
> + if (cppr <= regs[TM_PIPR]) {
> + /* CPPR lowered below PIPR, must un-present interrupt */
> + if (xive_nsr_indicates_exception(ring, nsr)) {
> + if (xive_nsr_indicates_group_exception(ring, nsr)) {
> + /* redistribute precluded active grp interrupt */
> + xive2_redistribute(xrtr, tctx, ring);
> + return;
> + }
> + }
> +
> + /* interrupt is VP directed, pending in IPB */
> + regs[TM_PIPR] = cppr;
> + xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
> + return;
> + } else {
> + /* CPPR was lowered, but still above PIPR. No action needed. */
> + return;
> + }
> + }
> +
> + /* CPPR didn't change, nothing needs to be done */
> + if (cppr == old_cppr) {
> + return;
> + }
> +
> + /* CPPR priority decreased (higher value) */
> +
> /*
> * Recompute the PIPR based on local pending interrupts. It will
> * be adjusted below if needed in case of pending group interrupts.
> @@ -1129,16 +1177,6 @@ again:
> return;
> }
>
> - if (cppr < old_cppr) {
> - /*
> - * FIXME: check if there's a group interrupt being presented
> - * and if the new cppr prevents it. If so, then the group
> - * interrupt needs to be re-added to the backlog and
> - * re-triggered (see re-trigger END info in the NVGC
> - * structure)
> - */
> - }
> -
> if (group_enabled &&
> lsmfb_min < cppr &&
> lsmfb_min < pipr_min) {
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-05-15 15:36 ` [PATCH 00/50] ppc/xive: updates for PowerVM Cédric Le Goater
@ 2025-05-16 1:29 ` Nicholas Piggin
2025-07-20 21:26 ` Cédric Le Goater
0 siblings, 1 reply; 192+ messages in thread
From: Nicholas Piggin @ 2025-05-16 1:29 UTC (permalink / raw)
To: Cédric Le Goater, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On Fri May 16, 2025 at 1:36 AM AEST, Cédric Le Goater wrote:
> On 5/12/25 05:10, Nicholas Piggin wrote:
>> These changes get the powernv xive2 to the point it is able to run
>> PowerVM with good stability.
>>
>> * Various bug fixes around lost interrupts particularly.
>> * Major group interrupt work, in particular around redistributing
>> interrupts. Upstream group support is not in a complete or usable
>> state as it is.
>> * Significant context push/pull improvements, particularly pool and
>> phys context handling was quite incomplete beyond trivial OPAL
>> case that pushes at boot.
>> * Improved tracing and checking for unimp and guest error situations.
>> * Various other missing feature support.
>>
>> The ordering and grouping of patches in the series is not perfect,
>> because it had been an ongoing development, and PowerVM only started
>> to become stable toward the end. I did try to rearrange and improve
>> things, but some were not worth the rebasing cost (e.g., some of the
>> pool/phys pull redistribution patches should have ideally been squashed
>> or moved together), so please bear that in mind. Suggestions for
>> further rearranging the series are fine, but I might just find they are
>> too much effort to be worthwhile.
>>
>> Thanks,
>> Nick
>>
>> Glenn Miles (12):
>> ppc/xive2: Fix calculation of END queue sizes
>> ppc/xive2: Use fair irq target search algorithm
>> ppc/xive2: Fix irq preempted by lower priority group irq
>> ppc/xive2: Fix treatment of PIPR in CPPR update
>> pnv/xive2: Support ESB Escalation
>> ppc/xive2: add interrupt priority configuration flags
>> ppc/xive2: Support redistribution of group interrupts
>> ppc/xive: Add more interrupt notification tracing
>> ppc/xive2: Improve pool regs variable name
>> ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
>> ppc/xive2: Redistribute group interrupt precluded by CPPR update
>> ppc/xive2: redistribute irqs for pool and phys ctx pull
>>
>> Michael Kowal (4):
>> ppc/xive2: Remote VSDs need to match on forwarding address
>> ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>> pnv/xive2: Print value in invalid register write logging
>> pnv/xive2: Permit valid writes to VC/PC Flush Control registers
>>
>> Nicholas Piggin (34):
>> ppc/xive: Fix xive trace event output
>> ppc/xive: Report access size in XIVE TM operation error logs
>> ppc/xive2: fix context push calculation of IPB priority
>> ppc/xive: Fix PHYS NSR ring matching
>> ppc/xive2: Do not present group interrupt on OS-push if precluded by
>> CPPR
>> ppc/xive2: Set CPPR delivery should account for group priority
>> ppc/xive: tctx_notify should clear the precluded interrupt
>> ppc/xive: Explicitly zero NSR after accepting
>> ppc/xive: Move NSR decoding into helper functions
>> ppc/xive: Fix pulling pool and phys contexts
>> pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
>> ppc/xive: Change presenter .match_nvt to match not present
>> ppc/xive2: Redistribute group interrupt preempted by higher priority
>> interrupt
>> ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
>> ppc/xive: Fix high prio group interrupt being preempted by low prio VP
>> ppc/xive: Split xive recompute from IPB function
>> ppc/xive: tctx signaling registers rework
>> ppc/xive: tctx_accept only lower irq line if an interrupt was
>> presented
>> ppc/xive: Add xive_tctx_pipr_set() helper function
>> ppc/xive2: split tctx presentation processing from set CPPR
>> ppc/xive2: Consolidate presentation processing in context push
>> ppc/xive2: Avoid needless interrupt re-check on CPPR set
>> ppc/xive: Assert group interrupts were redistributed
>> ppc/xive2: implement NVP context save restore for POOL ring
>> ppc/xive2: Prevent pulling of pool context losing phys interrupt
>> ppc/xive: Redistribute phys after pulling of pool context
>> ppc/xive: Check TIMA operations validity
>> ppc/xive2: Implement pool context push TIMA op
>> ppc/xive2: redistribute group interrupts on context push
>> ppc/xive2: Implement set_os_pending TIMA op
>> ppc/xive2: Implement POOL LGS push TIMA op
>> ppc/xive2: Implement PHYS ring VP push TIMA op
>> ppc/xive: Split need_resend into restore_nvp
>> ppc/xive2: Enable lower level contexts on VP push
>>
>> hw/intc/pnv_xive.c | 16 +-
>> hw/intc/pnv_xive2.c | 139 +++++--
>> hw/intc/pnv_xive2_regs.h | 1 +
>> hw/intc/spapr_xive.c | 18 +-
>> hw/intc/trace-events | 12 +-
>> hw/intc/xive.c | 555 ++++++++++++++++++----------
>> hw/intc/xive2.c | 717 +++++++++++++++++++++++++++---------
>> hw/ppc/pnv.c | 48 +--
>> hw/ppc/spapr.c | 21 +-
>> include/hw/ppc/xive.h | 66 +++-
>> include/hw/ppc/xive2.h | 22 +-
>> include/hw/ppc/xive2_regs.h | 22 +-
>> 12 files changed, 1145 insertions(+), 492 deletions(-)
>>
>
> I am impressed :) and glad that you are still taking care of XIVE.
>
> I suggest adding new names under the XIVE entry in the MAINTAINERS file.
Yeah, it's good to see. They are building a lot more cool stuff with
powernv at the moment; hopefully almost all of it will get upstreamed
eventually.
I will try to convince them to add MAINTAINER entries :)
Thanks,
Nick
* Re: [PATCH 17/50] pnv/xive2: Support ESB Escalation
2025-05-16 0:05 ` Nicholas Piggin
@ 2025-05-16 15:44 ` Miles Glenn
0 siblings, 0 replies; 192+ messages in thread
From: Miles Glenn @ 2025-05-16 15:44 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin, Glenn Miles
On Fri, 2025-05-16 at 10:05 +1000, Nicholas Piggin wrote:
> On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
> > From: Glenn Miles <milesg@linux.vnet.ibm.com>
> >
> > Add support for XIVE ESB Interrupt Escalation.
> >
> > Suggested-by: Michael Kowal <kowal@linux.ibm.com>
> > [This change was taken from a patch provided by Michael Kowal.]
> > Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
> > ---
> > hw/intc/xive2.c | 62 ++++++++++++++++++++++++++++++-------
> > include/hw/ppc/xive2.h | 1 +
> > include/hw/ppc/xive2_regs.h | 13 +++++---
> > 3 files changed, 59 insertions(+), 17 deletions(-)
> >
> > diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
> > index de139dcfbf..0993e792cc 100644
> > --- a/hw/intc/xive2.c
> > +++ b/hw/intc/xive2.c
> > @@ -1552,18 +1552,39 @@ do_escalation:
> > }
> > }
> >
> > - /*
> > - * The END trigger becomes an Escalation trigger
> > - */
> > - xive2_router_end_notify(xrtr,
> > - xive_get_field32(END2_W4_END_BLOCK, end.w4),
> > - xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> > - xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> > + if (xive2_end_is_escalate_end(&end)) {
> > + /*
> > + * Perform END Adaptive escalation processing
> > + * The END trigger becomes an Escalation trigger
> > + */
> > + xive2_router_end_notify(xrtr,
> > + xive_get_field32(END2_W4_END_BLOCK, end.w4),
> > + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
> > + xive_get_field32(END2_W5_ESC_END_DATA, end.w5));
> > + } /* end END adaptive escalation */
> > +
> > + else {
> > + uint32_t lisn; /* Logical Interrupt Source Number */
> > +
> > + /*
> > + * Perform ESB escalation processing
> > + * E[N] == 1 --> N
> > + * Req[Block] <- E[ESB_Block]
> > + * Req[Index] <- E[ESB_Index]
> > + * Req[Offset] <- 0x000
> > + * Execute <ESB Store> Req command
> > + */
> > + lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK, end.w4),
> > + xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
> > +
> > + xive2_notify(xrtr, lisn, true /* pq_checked */);
>
> Sorry I forgot to squash in a fix for the issues here. These should be
> _ESB_ constants not _END_, and we believe pq_checked should be false
> here so the ESB state machine is run.
>
> https://lore.kernel.org/qemu-devel/D8CFK7Z5AJF8.ALT8MMH6EYYT@gmail.com/
>
> I think we took discussion offline after that but that was the
> conclusion. I will sqash that fix in here. With that,
>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
>
> I also wonder if Mike should be author of this patch since
> that's what the note indicates? Or co-author? Better give your
> Signed-off-by too, if so.
>
> Thanks,
> Nick
Yes, this commit was taken verbatim from a diff that Mike Kowal
provided me. I think he certainly deserves the credit. I wasn't sure
how to do that.
Glenn
* Re: [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm
2025-05-16 0:12 ` Nicholas Piggin
@ 2025-05-16 16:22 ` Mike Kowal
0 siblings, 0 replies; 192+ messages in thread
From: Mike Kowal @ 2025-05-16 16:22 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles,
Caleb Schlossin
On 5/15/2025 7:12 PM, Nicholas Piggin wrote:
> On Mon May 12, 2025 at 1:10 PM AEST, Nicholas Piggin wrote:
>> From: Glenn Miles <milesg@linux.ibm.com>
>>
>> The current xive algorithm for finding a matching group vCPU
>> target always uses the first vCPU found. And, since it always
>> starts the search with thread 0 of a core, thread 0 is almost
>> always used to handle group interrupts. This can lead to additional
>> interrupt latency and poor performance for interrupt intensive
>> work loads.
>>
>> Change this to use a simple round-robin algorithm to decide which
>> thread number to use when starting a search; this leads to a more
>> distributed use of threads for handling group interrupts.
>>
> Does hardware always do the "histogram" distribution? I wonder if
> there would be any performance benefit to do something like send
> to an idle thread/core with preference. I guess the xive controller
> might have a difficult time querying the state of a bunch of cores
> before sending so it's probably not practical for real hardware.
>
> In any case this is a nice improvement for group delivery.
>
> Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Yes. The hardware does a histogram to determine the best 'match' when
there are multiple matches. When we implemented this in Simics, Florian
and FW said it wasn't important to model, so we just came up with the
round-robin starting point when looking for a match.
MAK
>> [npiggin: Also round-robin among threads, not just cores]
>> Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
>> ---
>> hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
>> 1 file changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
>> index 72cdf0f20c..d7ca97ecbb 100644
>> --- a/hw/intc/pnv_xive2.c
>> +++ b/hw/intc/pnv_xive2.c
>> @@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
>> int i, j;
>> bool gen1_tima_os =
>> xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
>> + static int next_start_core;
>> + static int next_start_thread;
>> + int start_core = next_start_core;
>> + int start_thread = next_start_thread;
>>
>> for (i = 0; i < chip->nr_cores; i++) {
>> - PnvCore *pc = chip->cores[i];
>> + PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
>> CPUCore *cc = CPU_CORE(pc);
>>
>> for (j = 0; j < cc->nr_threads; j++) {
>> - PowerPCCPU *cpu = pc->threads[j];
>> + /* Start search for match with different thread each call */
>> + PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
>> XiveTCTX *tctx;
>> int ring;
>>
>> @@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
>> if (!match->tctx) {
>> match->ring = ring;
>> match->tctx = tctx;
>> +
>> + next_start_thread = j + start_thread + 1;
>> + if (next_start_thread >= cc->nr_threads) {
>> + next_start_thread = 0;
>> + next_start_core = i + start_core + 1;
>> + if (next_start_core >= chip->nr_cores) {
>> + next_start_core = 0;
>> + }
>> + }
>> }
>> count++;
>> }
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
` (50 preceding siblings ...)
2025-05-15 15:36 ` [PATCH 00/50] ppc/xive: updates for PowerVM Cédric Le Goater
@ 2025-07-03 9:37 ` Gautam Menghani
51 siblings, 0 replies; 192+ messages in thread
From: Gautam Menghani @ 2025-07-03 9:37 UTC (permalink / raw)
To: Nicholas Piggin
Cc: qemu-ppc, qemu-devel, Frédéric Barrat, Glenn Miles,
Michael Kowal, Caleb Schlossin
Hi Nick,
I did some sanity testing of this series on KVM on LPAR (P10) with the
help of the avocado test suites. LGTM.
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-05-16 1:29 ` Nicholas Piggin
@ 2025-07-20 21:26 ` Cédric Le Goater
2025-08-04 17:37 ` Miles Glenn
0 siblings, 1 reply; 192+ messages in thread
From: Cédric Le Goater @ 2025-07-20 21:26 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Glenn Miles, Michael Kowal,
Caleb Schlossin
On 5/16/25 03:29, Nicholas Piggin wrote:
> On Fri May 16, 2025 at 1:36 AM AEST, Cédric Le Goater wrote:
>> On 5/12/25 05:10, Nicholas Piggin wrote:
>>> These changes get the powernv xive2 to the point it is able to run
>>> PowerVM with good stability.
>>>
>>> * Various bug fixes around lost interrupts particularly.
>>> * Major group interrupt work, in particular around redistributing
>>> interrupts. Upstream group support is not in a complete or usable
>>> state as it is.
>>> * Significant context push/pull improvements, particularly pool and
>>> phys context handling was quite incomplete beyond trivial OPAL
>>> case that pushes at boot.
>>> * Improved tracing and checking for unimp and guest error situations.
>>> * Various other missing feature support.
>>>
>>> The ordering and grouping of patches in the series is not perfect,
>>> because it had been an ongoing development, and PowerVM only started
>>> to become stable toward the end. I did try to rearrange and improve
>>> things, but some were not worth the rebasing cost (e.g., some of the
>>> pool/phys pull redistribution patches should have ideally been squashed
>>> or moved together), so please bear that in mind. Suggestions for
>>> further rearranging the series are fine, but I might just find they are
>>> too much effort to be worthwhile.
>>>
>>> Thanks,
>>> Nick
>>>
>>> Glenn Miles (12):
>>> ppc/xive2: Fix calculation of END queue sizes
>>> ppc/xive2: Use fair irq target search algorithm
>>> ppc/xive2: Fix irq preempted by lower priority group irq
>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>> pnv/xive2: Support ESB Escalation
>>> ppc/xive2: add interrupt priority configuration flags
>>> ppc/xive2: Support redistribution of group interrupts
>>> ppc/xive: Add more interrupt notification tracing
>>> ppc/xive2: Improve pool regs variable name
>>> ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
>>> ppc/xive2: Redistribute group interrupt precluded by CPPR update
>>> ppc/xive2: redistribute irqs for pool and phys ctx pull
>>>
>>> Michael Kowal (4):
>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>> ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>>> pnv/xive2: Print value in invalid register write logging
>>> pnv/xive2: Permit valid writes to VC/PC Flush Control registers
>>>
>>> Nicholas Piggin (34):
>>> ppc/xive: Fix xive trace event output
>>> ppc/xive: Report access size in XIVE TM operation error logs
>>> ppc/xive2: fix context push calculation of IPB priority
>>> ppc/xive: Fix PHYS NSR ring matching
>>> ppc/xive2: Do not present group interrupt on OS-push if precluded by
>>> CPPR
>>> ppc/xive2: Set CPPR delivery should account for group priority
>>> ppc/xive: tctx_notify should clear the precluded interrupt
>>> ppc/xive: Explicitly zero NSR after accepting
>>> ppc/xive: Move NSR decoding into helper functions
>>> ppc/xive: Fix pulling pool and phys contexts
>>> pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
>>> ppc/xive: Change presenter .match_nvt to match not present
>>> ppc/xive2: Redistribute group interrupt preempted by higher priority
>>> interrupt
>>> ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
>>> ppc/xive: Fix high prio group interrupt being preempted by low prio VP
>>> ppc/xive: Split xive recompute from IPB function
>>> ppc/xive: tctx signaling registers rework
>>> ppc/xive: tctx_accept only lower irq line if an interrupt was
>>> presented
>>> ppc/xive: Add xive_tctx_pipr_set() helper function
>>> ppc/xive2: split tctx presentation processing from set CPPR
>>> ppc/xive2: Consolidate presentation processing in context push
>>> ppc/xive2: Avoid needless interrupt re-check on CPPR set
>>> ppc/xive: Assert group interrupts were redistributed
>>> ppc/xive2: implement NVP context save restore for POOL ring
>>> ppc/xive2: Prevent pulling of pool context losing phys interrupt
>>> ppc/xive: Redistribute phys after pulling of pool context
>>> ppc/xive: Check TIMA operations validity
>>> ppc/xive2: Implement pool context push TIMA op
>>> ppc/xive2: redistribute group interrupts on context push
>>> ppc/xive2: Implement set_os_pending TIMA op
>>> ppc/xive2: Implement POOL LGS push TIMA op
>>> ppc/xive2: Implement PHYS ring VP push TIMA op
>>> ppc/xive: Split need_resend into restore_nvp
>>> ppc/xive2: Enable lower level contexts on VP push
>>>
>>> hw/intc/pnv_xive.c | 16 +-
>>> hw/intc/pnv_xive2.c | 139 +++++--
>>> hw/intc/pnv_xive2_regs.h | 1 +
>>> hw/intc/spapr_xive.c | 18 +-
>>> hw/intc/trace-events | 12 +-
>>> hw/intc/xive.c | 555 ++++++++++++++++++----------
>>> hw/intc/xive2.c | 717 +++++++++++++++++++++++++++---------
>>> hw/ppc/pnv.c | 48 +--
>>> hw/ppc/spapr.c | 21 +-
>>> include/hw/ppc/xive.h | 66 +++-
>>> include/hw/ppc/xive2.h | 22 +-
>>> include/hw/ppc/xive2_regs.h | 22 +-
>>> 12 files changed, 1145 insertions(+), 492 deletions(-)
>>>
>>
>> I am impressed :) and glad that you are still taking care of XIVE.
>>
>> I suggest adding new names under the XIVE entry in the MAINTAINERS file.
>
> Yeah it's good to see. They are building a lot more cool stuff with
> powernv at the moment, hopefully almost all should get upstreamed
> eventually.
>
> I will try to convince them to add MAINTAINER entries :)
>
> Thanks,
> Nick
>
This is a major update for XIVE and, since I am not sure anyone
is going to send a PR for QEMU 10.1, I am volunteering to do
it again on Monday, once, and only for these fixes.
We should clarify in the next cycle who is in charge of ppc. IMO,
if we don't have maintainers, we should orphan all non-pseries
PPC components. I can send a maintainer update on this as soon
as the QEMU 10.2 cycle opens.
Thanks,
C.
^ permalink raw reply [flat|nested] 192+ messages in thread
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-07-20 21:26 ` Cédric Le Goater
@ 2025-08-04 17:37 ` Miles Glenn
2025-08-05 5:09 ` Cédric Le Goater
0 siblings, 1 reply; 192+ messages in thread
From: Miles Glenn @ 2025-08-04 17:37 UTC (permalink / raw)
To: Cédric Le Goater, Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin
On Sun, 2025-07-20 at 23:26 +0200, Cédric Le Goater wrote:
> This is a major update for XIVE and, since I am not sure anyone
> is going to send a PR for QEMU 10.1, I am volunteering to do
> it again on Monday, once, and only for these fixes.
>
> We should clarify in the next cycle who is in charge of ppc. IMO,
> if we don't have maintainers, we should orphan all non-pseries
> PPC components. I can send a maintainer update on this as soon
> as the QEMU 10.2 cycle opens.
>
>
> Thanks,
>
> C.
>
Cédric,
Thanks for doing the PR for these XIVE changes! It sounds like, if we
want to continue having our XIVE changes upstreamed, we will need
someone on our IBM QEMU development team to volunteer as a maintainer.
Does becoming a maintainer still require physically attending a key
signing party at KVM Forum?
Thanks,
Glenn
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-08-04 17:37 ` Miles Glenn
@ 2025-08-05 5:09 ` Cédric Le Goater
2025-08-05 15:52 ` Miles Glenn
0 siblings, 1 reply; 192+ messages in thread
From: Cédric Le Goater @ 2025-08-05 5:09 UTC (permalink / raw)
To: milesg, Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin, Harsh Prateek Bora
Hello Glenn,
+Harsh
On 8/4/25 19:37, Miles Glenn wrote:
> Cédric,
>
> Thanks for doing the PR for these XIVE changes! It sounds like if we
> want to continue having our XIVE changes upstreamed we will need
> someone on our IBM QEMU development team to volunteer as a maintainer.
We did some updates recently:
https://lore.kernel.org/qemu-devel/20250724133126.1695824-1-clg@redhat.com/
Given your knowledge of IBM Power servers, your relationships with
the hardware team, and the quality of your work within QEMU, you
should add yourself as a Reviewer of PowerNV and XIVE (Needs a
Maintainer also). I can merge that for QEMU 10.1.
> Does becoming a maintainer still require physically attending a key
> signing party at KVM Forum?
To be able to send PRs, it is strongly recommended to have your
key signed by the people pulling in your changes. Being physically
present is always better to verify the identity of a person.
But that's not all: it's a chain of trust and community involvement
in all areas. It takes time.
Btw, in series [1], there are several patches tagged as Fixes;
could you please reply to Michael [2] regarding which could be
backported to the stable branches?
Thanks,
C.
[1] https://lore.kernel.org/qemu-devel/20250512031100.439842-1-npiggin@gmail.com/
[2] https://lore.kernel.org/qemu-devel/10177005-d549-41bc-b0eb-c98b7e475f97@tls.msk.ru/
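As an aside, the Fixes-tagged patches in a series can be picked out mechanically with `git log --grep`. A minimal, self-contained sketch; the demo repository and commit messages below are invented for illustration:

```shell
#!/bin/sh
# Sketch: list commits carrying a "Fixes:" tag, as candidates for
# stable backports. The repository and messages are made up for demo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m "ppc/xive: Fix xive trace event output" \
    -m 'Fixes: 1234abcd ("ppc/xive: some earlier commit")'
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m "ppc/xive2: Improve pool regs variable name"
# --grep matches lines of the commit message; ^ anchors to line start,
# so only the commit whose body has a Fixes: trailer is printed.
git log --format='%h %s' --grep='^Fixes:'
```

On a real series one would run the last command over a commit range instead (e.g. `git log --grep='^Fixes:' base..series-tip`, range names hypothetical).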
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-08-05 5:09 ` Cédric Le Goater
@ 2025-08-05 15:52 ` Miles Glenn
2025-08-05 20:09 ` Cédric Le Goater
0 siblings, 1 reply; 192+ messages in thread
From: Miles Glenn @ 2025-08-05 15:52 UTC (permalink / raw)
To: Cédric Le Goater, Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin, Harsh Prateek Bora
On Tue, 2025-08-05 at 07:09 +0200, Cédric Le Goater wrote:
> Hello Glenn,
>
> +Harsh
>
> >
> > Cédric,
> >
> > Thanks for doing the PR for these XIVE changes! It sounds like if we
> > want to continue having our XIVE changes upstreamed we will need
> > someone on our IBM QEMU development team to volunteer as a maintainer.
>
> We did some updates recently:
>
> https://lore.kernel.org/qemu-devel/20250724133126.1695824-1-clg@redhat.com/
>
> Given your knowledge of IBM Power servers, your relationships with
> the hardware team, and the quality of your work within QEMU, you
> should add yourself as a Reviewer of PowerNV and XIVE (Needs a
> Maintainer also). I can merge that for QEMU 10.1.
>
> > Does becoming a maintainer still require physically attending a key
> > signing party at KVM Forum?
>
> To be able to send PRs, it is strongly recommended to have your
> key signed by the people pulling in your changes. Being physically
> present is always better to verify the identity of a person.
>
> But that's not all: it's a chain of trust and community involvement
> in all areas. It takes time.
>
> Btw, in series [1], there are several patches tagged as Fixes;
> could you please reply to Michael [2] regarding which could be
> backported to the stable branches?
>
>
> Thanks,
>
> C.
>
>
> [1] https://lore.kernel.org/qemu-devel/20250512031100.439842-1-npiggin@gmail.com/
> [2] https://lore.kernel.org/qemu-devel/10177005-d549-41bc-b0eb-c98b7e475f97@tls.msk.ru/
>
Thanks Cédric,
I'll go ahead and add my name as a reviewer for powernv and xive. As
for a maintainer for the XIVE code, I would like to nominate Mike Kowal
for that role. And, yes, I will respond to Michael Tokarev's question
regarding backporting fixes.
Thanks,
Glenn
* Re: [PATCH 00/50] ppc/xive: updates for PowerVM
2025-08-05 15:52 ` Miles Glenn
@ 2025-08-05 20:09 ` Cédric Le Goater
0 siblings, 0 replies; 192+ messages in thread
From: Cédric Le Goater @ 2025-08-05 20:09 UTC (permalink / raw)
To: milesg, Nicholas Piggin, qemu-ppc
Cc: qemu-devel, Frédéric Barrat, Michael Kowal,
Caleb Schlossin, Harsh Prateek Bora
On 8/5/25 17:52, Miles Glenn wrote:
> On Tue, 2025-08-05 at 07:09 +0200, Cédric Le Goater wrote:
>> Hello Glenn,
>>
>> +Harsh
>>
>> On 8/4/25 19:37, Miles Glenn wrote:
>>> On Sun, 2025-07-20 at 23:26 +0200, Cédric Le Goater wrote:
>>>> On 5/16/25 03:29, Nicholas Piggin wrote:
>>>>> On Fri May 16, 2025 at 1:36 AM AEST, Cédric Le Goater wrote:
>>>>>> On 5/12/25 05:10, Nicholas Piggin wrote:
>>>>>>> These changes gets the powernv xive2 to the point it is able to run
>>>>>>> PowerVM with good stability.
>>>>>>>
>>>>>>> * Various bug fixes around lost interrupts particularly.
>>>>>>> * Major group interrupt work, in particular around redistributing
>>>>>>> interrupts. Upstream group support is not in a complete or usable
>>>>>>> state as it is.
>>>>>>> * Significant context push/pull improvements, particularly pool and
>>>>>>> phys context handling was quite incomplete beyond trivial OPAL
>>>>>>> case that pushes at boot.
>>>>>>> * Improved tracing and checking for unimp and guest error situations.
>>>>>>> * Various other missing feature support.
>>>>>>>
>>>>>>> The ordering and grouping of patches in the series is not perfect,
>>>>>>> because it had been an ongoing development, and PowerVM only started
>>>>>>> to become stable toward the end. I did try to rearrange and improve
>>>>>>> things, but some were not worth rebasing cost (e.g., some of the
>>>>>>> pool/phys pull redistribution patches should have ideally been squashed
>>>>>>> or moved together), so please bear that in mind. Suggestions for
>>>>>>> further rearranging the series are fine, but I might just find they are
>>>>>>> too much effort to be worthwhile.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Nick
>>>>>>>
>>>>>>> Glenn Miles (12):
>>>>>>> ppc/xive2: Fix calculation of END queue sizes
>>>>>>> ppc/xive2: Use fair irq target search algorithm
>>>>>>> ppc/xive2: Fix irq preempted by lower priority group irq
>>>>>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>>>>>> pnv/xive2: Support ESB Escalation
>>>>>>> ppc/xive2: add interrupt priority configuration flags
>>>>>>> ppc/xive2: Support redistribution of group interrupts
>>>>>>> ppc/xive: Add more interrupt notification tracing
>>>>>>> ppc/xive2: Improve pool regs variable name
>>>>>>> ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
>>>>>>> ppc/xive2: Redistribute group interrupt precluded by CPPR update
>>>>>>> ppc/xive2: redistribute irqs for pool and phys ctx pull
>>>>>>>
>>>>>>> Michael Kowal (4):
>>>>>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>>>>>> ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>>>>>>> pnv/xive2: Print value in invalid register write logging
>>>>>>> pnv/xive2: Permit valid writes to VC/PC Flush Control registers
>>>>>>>
>>>>>>> Nicholas Piggin (34):
>>>>>>> ppc/xive: Fix xive trace event output
>>>>>>> ppc/xive: Report access size in XIVE TM operation error logs
>>>>>>> ppc/xive2: fix context push calculation of IPB priority
>>>>>>> ppc/xive: Fix PHYS NSR ring matching
>>>>>>> ppc/xive2: Do not present group interrupt on OS-push if precluded by
>>>>>>> CPPR
>>>>>>> ppc/xive2: Set CPPR delivery should account for group priority
>>>>>>> ppc/xive: tctx_notify should clear the precluded interrupt
>>>>>>> ppc/xive: Explicitly zero NSR after accepting
>>>>>>> ppc/xive: Move NSR decoding into helper functions
>>>>>>> ppc/xive: Fix pulling pool and phys contexts
>>>>>>> pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
>>>>>>> ppc/xive: Change presenter .match_nvt to match not present
>>>>>>> ppc/xive2: Redistribute group interrupt preempted by higher priority
>>>>>>> interrupt
>>>>>>> ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
>>>>>>> ppc/xive: Fix high prio group interrupt being preempted by low prio VP
>>>>>>> ppc/xive: Split xive recompute from IPB function
>>>>>>> ppc/xive: tctx signaling registers rework
>>>>>>> ppc/xive: tctx_accept only lower irq line if an interrupt was
>>>>>>> presented
>>>>>>> ppc/xive: Add xive_tctx_pipr_set() helper function
>>>>>>> ppc/xive2: split tctx presentation processing from set CPPR
>>>>>>> ppc/xive2: Consolidate presentation processing in context push
>>>>>>> ppc/xive2: Avoid needless interrupt re-check on CPPR set
>>>>>>> ppc/xive: Assert group interrupts were redistributed
>>>>>>> ppc/xive2: implement NVP context save restore for POOL ring
>>>>>>> ppc/xive2: Prevent pulling of pool context losing phys interrupt
>>>>>>> ppc/xive: Redistribute phys after pulling of pool context
>>>>>>> ppc/xive: Check TIMA operations validity
>>>>>>> ppc/xive2: Implement pool context push TIMA op
>>>>>>> ppc/xive2: redistribute group interrupts on context push
>>>>>>> ppc/xive2: Implement set_os_pending TIMA op
>>>>>>> ppc/xive2: Implement POOL LGS push TIMA op
>>>>>>> ppc/xive2: Implement PHYS ring VP push TIMA op
>>>>>>> ppc/xive: Split need_resend into restore_nvp
>>>>>>> ppc/xive2: Enable lower level contexts on VP push
>>>>>>>
>>>>>>> hw/intc/pnv_xive.c | 16 +-
>>>>>>> hw/intc/pnv_xive2.c | 139 +++++--
>>>>>>> hw/intc/pnv_xive2_regs.h | 1 +
>>>>>>> hw/intc/spapr_xive.c | 18 +-
>>>>>>> hw/intc/trace-events | 12 +-
>>>>>>> hw/intc/xive.c | 555 ++++++++++++++++++----------
>>>>>>> hw/intc/xive2.c | 717 +++++++++++++++++++++++++++---------
>>>>>>> hw/ppc/pnv.c | 48 +--
>>>>>>> hw/ppc/spapr.c | 21 +-
>>>>>>> include/hw/ppc/xive.h | 66 +++-
>>>>>>> include/hw/ppc/xive2.h | 22 +-
>>>>>>> include/hw/ppc/xive2_regs.h | 22 +-
>>>>>>> 12 files changed, 1145 insertions(+), 492 deletions(-)
>>>>>>>
>>>>>>
>>>>>> I am impressed :) and glad that you are still taking care of XIVE.
>>>>>>
>>>>>> I suggest adding new names under the XIVE entry in the MAINTAINERS file.
>>>>>
>>>>> Yeah it's good to see. They are building a lot more cool stuff with
>>>>> powernv at the moment, hopefully almost all should get upstreamed
>>>>> eventually.
>>>>>
>>>>> I will try to convince them to add MAINTAINER entries :)
>>>>>
>>>>> Thanks,
>>>>> Nick
>>>>>
>>>>
>>>> This is a major update for XIVE and, since I am not sure anyone
>>>> is going to send a PR for QEMU 10.1, I am volunteering to do
>>>> it again on monday, once and only for these fixes.
>>>>
>>>> We should clarify in the next cycle who is charge of ppc. IMO,
>>>> If we don't have maintainers, we should orphan all non-pseries
>>>> PPC components. I can send a maintainer update on this as soon
>>>> as the QEMU 10.2 cycle opens.
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> C.
>>>>
>>>
>>> Cédric,
>>>
>>> Thanks for doing the PR for these XIVE changes! It sounds like if we
>>> want to continue having our XIVE changes upstreamed we will need
>>> someone on our IBM QEMU development team to volunteer as a maintainer.
>>
>> We did some updates recently :
>>
>> https://lore.kernel.org/qemu-devel/20250724133126.1695824-1-clg@redhat.com/
>>
>> Given your knowledge of IBM Power servers, your relationships with
>> the hardware team, and the quality of your work within QEMU, you
>> should add yourself as a Reviewer of PowerNV and XIVE (which also
>> needs a Maintainer). I can merge that for QEMU 10.1.
>>
>>> Does becoming a maintainer still require physically attending a key
>>> signing party at KVM Forum?
>>
>> To be able to send PRs, it is strongly recommended to have your
>> key signed by the people pulling in your changes. Being physically
>> present is always better to verify the identity of a person.
>>
>> But that's not all: it's a chain of trust and community involvement
>> in all areas. It takes time.
>>
>> Btw, in series [1], there are several patches tagged as Fixes,
>> could you please reply to Michael [2] regarding which could be
>> backported to the stable branches ?
>>
>>
>> Thanks,
>>
>> C.
>>
>>
>> [1] https://lore.kernel.org/qemu-devel/20250512031100.439842-1-npiggin@gmail.com/
>> [2] https://lore.kernel.org/qemu-devel/10177005-d549-41bc-b0eb-c98b7e475f97@tls.msk.ru/
>>
>
> Thanks Cédric,
>
> I'll go ahead and add my name as a reviewer for powernv and xive.
Please send a patch!
> As
> for a maintainer for the XIVE code, I would like to nominate Mike Kowal
> for that role.
Ah! I tried that in the past. Mike is indeed the right person.
Please send a patch :)
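
For anyone unfamiliar with the format, such a patch just edits the
relevant entry in the top-level MAINTAINERS file. A minimal sketch of
what the XIVE entry could look like afterwards (the names, addresses,
and file globs below are placeholders, not the actual patch):

  XIVE
  M: Maintainer Name <maintainer@example.com>
  R: Reviewer Name <reviewer@example.com>
  L: qemu-ppc@nongnu.org
  S: Odd Fixes
  F: hw/*/*xive*
  F: include/hw/*/*xive.h

M: lines are maintainers, R: lines are designated reviewers who get
CC'd on patches, and F: globs list the covered files.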
> And, yes, I will respond to Michael Tokarev's question
> regarding backporting fixes.
Thanks,
C.
Thread overview: 192+ messages
2025-05-12 3:10 [PATCH 00/50] ppc/xive: updates for PowerVM Nicholas Piggin
2025-05-12 3:10 ` [PATCH 01/50] ppc/xive: Fix xive trace event output Nicholas Piggin
2025-05-14 14:26 ` Caleb Schlossin
2025-05-14 18:41 ` Mike Kowal
2025-05-15 15:30 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 02/50] ppc/xive: Report access size in XIVE TM operation error logs Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:42 ` Mike Kowal
2025-05-15 15:31 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 03/50] ppc/xive2: Fix calculation of END queue sizes Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:45 ` Mike Kowal
2025-05-16 0:06 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Nicholas Piggin
2025-05-14 14:27 ` Caleb Schlossin
2025-05-14 18:46 ` Mike Kowal
2025-05-15 15:34 ` Miles Glenn
2025-05-16 0:08 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 05/50] ppc/xive2: fix context push calculation of IPB priority Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:48 ` Mike Kowal
2025-05-15 15:36 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 06/50] ppc/xive: Fix PHYS NSR ring matching Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:49 ` Mike Kowal
2025-05-15 15:39 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Nicholas Piggin
2025-05-14 14:30 ` Caleb Schlossin
2025-05-14 18:50 ` Mike Kowal
2025-05-15 15:41 ` Miles Glenn
2025-05-16 0:09 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 08/50] ppc/xive2: Use fair irq target search algorithm Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
2025-05-14 18:51 ` Mike Kowal
2025-05-15 15:42 ` Miles Glenn
2025-05-16 0:12 ` Nicholas Piggin
2025-05-16 16:22 ` Mike Kowal
2025-05-12 3:10 ` [PATCH 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Nicholas Piggin
2025-05-14 14:31 ` Caleb Schlossin
2025-05-14 18:52 ` Mike Kowal
2025-05-16 0:12 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
2025-05-14 18:53 ` Mike Kowal
2025-05-16 0:15 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Nicholas Piggin
2025-05-14 14:32 ` Caleb Schlossin
2025-05-14 18:54 ` Mike Kowal
2025-05-15 15:43 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 12/50] ppc/xive2: Set CPPR delivery should account for group priority Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
2025-05-14 18:57 ` Mike Kowal
2025-05-15 15:45 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Nicholas Piggin
2025-05-14 14:33 ` Caleb Schlossin
2025-05-14 18:58 ` Mike Kowal
2025-05-15 15:46 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 14/50] ppc/xive: Explicitly zero NSR after accepting Nicholas Piggin
2025-05-14 14:34 ` Caleb Schlossin
2025-05-14 19:07 ` Mike Kowal
2025-05-15 23:31 ` Nicholas Piggin
2025-05-15 15:47 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 15/50] ppc/xive: Move NSR decoding into helper functions Nicholas Piggin
2025-05-14 14:35 ` Caleb Schlossin
2025-05-14 19:04 ` Mike Kowal
2025-05-15 15:48 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 16/50] ppc/xive: Fix pulling pool and phys contexts Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:01 ` Mike Kowal
2025-05-15 15:49 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 17/50] pnv/xive2: Support ESB Escalation Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:00 ` Mike Kowal
2025-05-16 0:05 ` Nicholas Piggin
2025-05-16 15:44 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 18/50] pnv/xive2: Print value in invalid register write logging Nicholas Piggin
2025-05-14 14:36 ` Caleb Schlossin
2025-05-14 19:09 ` Mike Kowal
2025-05-15 15:50 ` Miles Glenn
2025-05-16 0:15 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
2025-05-14 19:10 ` Mike Kowal
2025-05-15 15:51 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Nicholas Piggin
2025-05-14 14:37 ` Caleb Schlossin
2025-05-14 19:11 ` Mike Kowal
2025-05-15 15:52 ` Miles Glenn
2025-05-16 0:18 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 21/50] ppc/xive2: add interrupt priority configuration flags Nicholas Piggin
2025-05-14 19:41 ` Mike Kowal
2025-05-16 0:18 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 22/50] ppc/xive2: Support redistribution of group interrupts Nicholas Piggin
2025-05-14 19:42 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 23/50] ppc/xive: Add more interrupt notification tracing Nicholas Piggin
2025-05-14 19:46 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 24/50] ppc/xive2: Improve pool regs variable name Nicholas Piggin
2025-05-14 19:47 ` Mike Kowal
2025-05-16 0:19 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Nicholas Piggin
2025-05-14 19:48 ` Mike Kowal
2025-05-16 0:20 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Nicholas Piggin
2025-05-14 19:48 ` Mike Kowal
2025-05-16 0:20 ` Nicholas Piggin
2025-05-12 3:10 ` [PATCH 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Nicholas Piggin
2025-05-14 19:51 ` Mike Kowal
2025-05-12 3:10 ` [PATCH 28/50] ppc/xive: Change presenter .match_nvt to match not present Nicholas Piggin
2025-05-14 19:54 ` Mike Kowal
2025-05-15 23:40 ` Nicholas Piggin
2025-05-15 15:53 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Nicholas Piggin
2025-05-14 19:55 ` Mike Kowal
2025-05-15 15:54 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Nicholas Piggin
2025-05-14 20:10 ` Mike Kowal
2025-05-15 15:21 ` Mike Kowal
2025-05-15 23:51 ` Nicholas Piggin
2025-05-15 23:43 ` Nicholas Piggin
2025-05-15 15:55 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Nicholas Piggin
2025-05-15 15:21 ` Mike Kowal
2025-05-15 15:55 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 32/50] ppc/xive: Split xive recompute from IPB function Nicholas Piggin
2025-05-14 20:42 ` Mike Kowal
2025-05-15 23:46 ` Nicholas Piggin
2025-05-15 15:56 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 33/50] ppc/xive: tctx signaling registers rework Nicholas Piggin
2025-05-14 20:49 ` Mike Kowal
2025-05-15 15:58 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Nicholas Piggin
2025-05-15 15:16 ` Mike Kowal
2025-05-15 23:50 ` Nicholas Piggin
2025-05-15 16:04 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Nicholas Piggin
2025-05-15 15:18 ` Mike Kowal
2025-05-15 16:05 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 36/50] ppc/xive2: split tctx presentation processing from set CPPR Nicholas Piggin
2025-05-15 15:24 ` Mike Kowal
2025-05-15 16:06 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 37/50] ppc/xive2: Consolidate presentation processing in context push Nicholas Piggin
2025-05-15 15:25 ` Mike Kowal
2025-05-15 16:06 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Nicholas Piggin
2025-05-15 15:26 ` Mike Kowal
2025-05-15 16:07 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 39/50] ppc/xive: Assert group interrupts were redistributed Nicholas Piggin
2025-05-15 15:28 ` Mike Kowal
2025-05-15 16:08 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 40/50] ppc/xive2: implement NVP context save restore for POOL ring Nicholas Piggin
2025-05-15 15:36 ` Mike Kowal
2025-05-15 16:09 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Nicholas Piggin
2025-05-15 15:43 ` Mike Kowal
2025-05-15 16:10 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 42/50] ppc/xive: Redistribute phys after pulling of pool context Nicholas Piggin
2025-05-15 15:46 ` Mike Kowal
2025-05-15 16:11 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 43/50] ppc/xive: Check TIMA operations validity Nicholas Piggin
2025-05-15 15:47 ` Mike Kowal
2025-05-15 16:12 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 44/50] ppc/xive2: Implement pool context push TIMA op Nicholas Piggin
2025-05-15 15:48 ` Mike Kowal
2025-05-15 16:13 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 45/50] ppc/xive2: redistribute group interrupts on context push Nicholas Piggin
2025-05-15 15:44 ` Mike Kowal
2025-05-15 16:13 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 46/50] ppc/xive2: Implement set_os_pending TIMA op Nicholas Piggin
2025-05-15 15:49 ` Mike Kowal
2025-05-15 16:14 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 47/50] ppc/xive2: Implement POOL LGS push " Nicholas Piggin
2025-05-15 15:50 ` Mike Kowal
2025-05-15 16:15 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 48/50] ppc/xive2: Implement PHYS ring VP " Nicholas Piggin
2025-05-15 15:50 ` Mike Kowal
2025-05-15 16:16 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 49/50] ppc/xive: Split need_resend into restore_nvp Nicholas Piggin
2025-05-15 15:57 ` Mike Kowal
2025-05-15 16:16 ` Miles Glenn
2025-05-12 3:10 ` [PATCH 50/50] ppc/xive2: Enable lower level contexts on VP push Nicholas Piggin
2025-05-15 15:54 ` Mike Kowal
2025-05-15 16:17 ` Miles Glenn
2025-05-15 15:36 ` [PATCH 00/50] ppc/xive: updates for PowerVM Cédric Le Goater
2025-05-16 1:29 ` Nicholas Piggin
2025-07-20 21:26 ` Cédric Le Goater
2025-08-04 17:37 ` Miles Glenn
2025-08-05 5:09 ` Cédric Le Goater
2025-08-05 15:52 ` Miles Glenn
2025-08-05 20:09 ` Cédric Le Goater
2025-07-03 9:37 ` Gautam Menghani