* [PULL 00/50] ppc queue
@ 2025-07-21 16:21 Cédric Le Goater
  2025-07-21 16:21 ` [PULL 01/50] ppc/xive: Fix xive trace event output Cédric Le Goater
                   ` (51 more replies)
  0 siblings, 52 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Cédric Le Goater

The following changes since commit e82989544e38062beeeaad88c175afbeed0400f8:

  Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging (2025-07-18 14:10:02 -0400)

are available in the Git repository at:

  https://github.com/legoater/qemu/ tags/pull-ppc-20250721

for you to fetch changes up to df3614b7983e0629b0d422259968985ca0117bfa:

  ppc/xive2: Enable lower level contexts on VP push (2025-07-21 08:03:53 +0200)

----------------------------------------------------------------
ppc/xive queue:

* Various bug fixes, particularly around lost interrupts.
* Major group interrupt work, in particular around redistributing
  interrupts. Upstream group support is not in a complete or usable
  state as it stands.
* Significant context push/pull improvements; pool and phys context
  handling in particular was quite incomplete beyond the trivial OPAL
  case that pushes at boot.
* Improved tracing and checking for unimplemented and guest error
  situations.
* Various other missing feature support.

----------------------------------------------------------------
Glenn Miles (12):
      ppc/xive2: Fix calculation of END queue sizes
      ppc/xive2: Use fair irq target search algorithm
      ppc/xive2: Fix irq preempted by lower priority group irq
      ppc/xive2: Fix treatment of PIPR in CPPR update
      pnv/xive2: Support ESB Escalation
      ppc/xive2: add interrupt priority configuration flags
      ppc/xive2: Support redistribution of group interrupts
      ppc/xive: Add more interrupt notification tracing
      ppc/xive2: Improve pool regs variable name
      ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
      ppc/xive2: Redistribute group interrupt precluded by CPPR update
      ppc/xive2: redistribute irqs for pool and phys ctx pull

Michael Kowal (4):
      ppc/xive2: Remote VSDs need to match on forwarding address
      ppc/xive2: Reset Generation Flipped bit on END Cache Watch
      pnv/xive2: Print value in invalid register write logging
      pnv/xive2: Permit valid writes to VC/PC Flush Control registers

Nicholas Piggin (34):
      ppc/xive: Fix xive trace event output
      ppc/xive: Report access size in XIVE TM operation error logs
      ppc/xive2: fix context push calculation of IPB priority
      ppc/xive: Fix PHYS NSR ring matching
      ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
      ppc/xive2: Set CPPR delivery should account for group priority
      ppc/xive: tctx_notify should clear the precluded interrupt
      ppc/xive: Explicitly zero NSR after accepting
      ppc/xive: Move NSR decoding into helper functions
      ppc/xive: Fix pulling pool and phys contexts
      pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
      ppc/xive: Change presenter .match_nvt to match not present
      ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
      ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
      ppc/xive: Fix high prio group interrupt being preempted by low prio VP
      ppc/xive: Split xive recompute from IPB function
      ppc/xive: tctx signaling registers rework
      ppc/xive: tctx_accept only lower irq line if an interrupt was presented
      ppc/xive: Add xive_tctx_pipr_set() helper function
      ppc/xive2: split tctx presentation processing from set CPPR
      ppc/xive2: Consolidate presentation processing in context push
      ppc/xive2: Avoid needless interrupt re-check on CPPR set
      ppc/xive: Assert group interrupts were redistributed
      ppc/xive2: implement NVP context save restore for POOL ring
      ppc/xive2: Prevent pulling of pool context losing phys interrupt
      ppc/xive: Redistribute phys after pulling of pool context
      ppc/xive: Check TIMA operations validity
      ppc/xive2: Implement pool context push TIMA op
      ppc/xive2: redistribute group interrupts on context push
      ppc/xive2: Implement set_os_pending TIMA op
      ppc/xive2: Implement POOL LGS push TIMA op
      ppc/xive2: Implement PHYS ring VP push TIMA op
      ppc/xive: Split need_resend into restore_nvp
      ppc/xive2: Enable lower level contexts on VP push

 hw/intc/pnv_xive2_regs.h    |   1 +
 include/hw/ppc/xive.h       |  66 +++-
 include/hw/ppc/xive2.h      |  22 +-
 include/hw/ppc/xive2_regs.h |  22 +-
 hw/intc/pnv_xive.c          |  16 +-
 hw/intc/pnv_xive2.c         | 140 ++++++---
 hw/intc/spapr_xive.c        |  18 +-
 hw/intc/xive.c              | 555 ++++++++++++++++++++++------------
 hw/intc/xive2.c             | 717 +++++++++++++++++++++++++++++++++-----------
 hw/ppc/pnv.c                |  48 +--
 hw/ppc/spapr.c              |  21 +-
 hw/intc/trace-events        |  12 +-
 12 files changed, 1146 insertions(+), 492 deletions(-)




* [PULL 01/50] ppc/xive: Fix xive trace event output
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 02/50] ppc/xive: Report access size in XIVE TM operation error logs Cédric Le Goater
                   ` (50 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Typo, IBP should be IPB.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-2-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/trace-events | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 334aa6a97bad..9ed2616e58fe 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -274,9 +274,9 @@ kvm_xive_cpu_connect(uint32_t id) "connect CPU%d to KVM device"
 kvm_xive_source_reset(uint32_t srcno) "IRQ 0x%x"
 
 # xive.c
-xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
-xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
-xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IBP=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
+xive_tctx_accept(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x ACK"
+xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x CPPR=0x%02x NSR=0x%02x raise !"
+xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
 xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
 xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
 xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
-- 
2.50.1




* [PULL 02/50] ppc/xive: Report access size in XIVE TM operation error logs
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
  2025-07-21 16:21 ` [PULL 01/50] ppc/xive: Fix xive trace event output Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 03/50] ppc/xive2: Fix calculation of END queue sizes Cédric Le Goater
                   ` (49 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Report access size in XIVE TM operation error logs.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-3-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 27b473e4d762..120376fb6b6d 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -326,7 +326,7 @@ static void xive_tm_raw_write(XiveTCTX *tctx, hwaddr offset, uint64_t value,
      */
     if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
         qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA @%"
-                      HWADDR_PRIx"\n", offset);
+                      HWADDR_PRIx" size %d\n", offset, size);
         return;
     }
 
@@ -357,7 +357,7 @@ static uint64_t xive_tm_raw_read(XiveTCTX *tctx, hwaddr offset, unsigned size)
      */
     if (size < 4 || !mask || ring_offset == TM_QW0_USER) {
         qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access at TIMA @%"
-                      HWADDR_PRIx"\n", offset);
+                      HWADDR_PRIx" size %d\n", offset, size);
         return -1;
     }
 
@@ -688,7 +688,7 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
         xto = xive_tm_find_op(tctx->xptr, offset, size, true);
         if (!xto) {
             qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
-                          "@%"HWADDR_PRIx"\n", offset);
+                          "@%"HWADDR_PRIx" size %d\n", offset, size);
         } else {
             xto->write_handler(xptr, tctx, offset, value, size);
         }
@@ -727,7 +727,7 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
         xto = xive_tm_find_op(tctx->xptr, offset, size, false);
         if (!xto) {
             qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
-                          "@%"HWADDR_PRIx"\n", offset);
+                          "@%"HWADDR_PRIx" size %d\n", offset, size);
             return -1;
         }
         ret = xto->read_handler(xptr, tctx, offset, size);
-- 
2.50.1




* [PULL 03/50] ppc/xive2: Fix calculation of END queue sizes
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
  2025-07-21 16:21 ` [PULL 01/50] ppc/xive: Fix xive trace event output Cédric Le Goater
  2025-07-21 16:21 ` [PULL 02/50] ppc/xive: Report access size in XIVE TM operation error logs Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Cédric Le Goater
                   ` (48 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

The queue size of an Event Notification Descriptor (END)
is determined by the 'cl' and QsZ fields of the END.
If the cl field is 1, then the queue size (in bytes) is
a cache line (128 B) * 2^QsZ, with QsZ limited to 4.
Otherwise, it is 4096 B * 2^QsZ, with QsZ limited to 12.
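
As a worked example (illustrative numbers, not taken from the patch):
with QsZ = 3, a cache-line queue (cl = 1) is 128 B * 2^3 = 1 KiB, i.e.
256 4-byte entries, while a page-based queue (cl = 0) is 4096 B * 2^3 =
32 KiB, i.e. 8192 entries; the old "1 << (qsize + 10)" entry count only
modeled the 4 KiB case. A minimal standalone sketch of the rule (the
helper name is made up, it is not the QEMU function):

    #include <stdio.h>

    /* Queue size in bytes from the END 'cl' and QsZ fields */
    static unsigned end_queue_bytes(int cl, unsigned qsz)
    {
        return cl ? (128u << qsz) : (4096u << qsz);
    }

    int main(void)
    {
        printf("cl=1 QsZ=3: %u bytes, %u entries\n",
               end_queue_bytes(1, 3), end_queue_bytes(1, 3) / 4);
        printf("cl=0 QsZ=3: %u bytes, %u entries\n",
               end_queue_bytes(0, 3), end_queue_bytes(0, 3) / 4);
        return 0;
    }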

Fixes: f8a233dedf2 ("ppc/xive2: Introduce a XIVE2 core framework")
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-4-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2_regs.h |  1 +
 hw/intc/xive2.c             | 25 +++++++++++++++++++------
 2 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index b11395c56350..3c28de8a304d 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -87,6 +87,7 @@ typedef struct Xive2End {
 #define END2_W2_EQ_ADDR_HI         PPC_BITMASK32(8, 31)
         uint32_t       w3;
 #define END2_W3_EQ_ADDR_LO         PPC_BITMASK32(0, 24)
+#define END2_W3_CL                 PPC_BIT32(27)
 #define END2_W3_QSIZE              PPC_BITMASK32(28, 31)
         uint32_t       w4;
 #define END2_W4_END_BLOCK          PPC_BITMASK32(4, 7)
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index a08cf906d0e6..cb75ca879853 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -188,12 +188,27 @@ void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf)
                            (uint32_t) xive_get_field64(EAS2_END_DATA, eas->w));
 }
 
+#define XIVE2_QSIZE_CHUNK_CL    128
+#define XIVE2_QSIZE_CHUNK_4k   4096
+/* Calculate max number of queue entries for an END */
+static uint32_t xive2_end_get_qentries(Xive2End *end)
+{
+    uint32_t w3 = end->w3;
+    uint32_t qsize = xive_get_field32(END2_W3_QSIZE, w3);
+    if (xive_get_field32(END2_W3_CL, w3)) {
+        g_assert(qsize <= 4);
+        return (XIVE2_QSIZE_CHUNK_CL << qsize) / sizeof(uint32_t);
+    } else {
+        g_assert(qsize <= 12);
+        return (XIVE2_QSIZE_CHUNK_4k << qsize) / sizeof(uint32_t);
+    }
+}
+
 void xive2_end_queue_pic_print_info(Xive2End *end, uint32_t width, GString *buf)
 {
     uint64_t qaddr_base = xive2_end_qaddr(end);
-    uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
     uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
-    uint32_t qentries = 1 << (qsize + 10);
+    uint32_t qentries = xive2_end_get_qentries(end);
     int i;
 
     /*
@@ -223,8 +238,7 @@ void xive2_end_pic_print_info(Xive2End *end, uint32_t end_idx, GString *buf)
     uint64_t qaddr_base = xive2_end_qaddr(end);
     uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
     uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
-    uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
-    uint32_t qentries = 1 << (qsize + 10);
+    uint32_t qentries = xive2_end_get_qentries(end);
 
     uint32_t nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end->w6);
     uint32_t nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end->w6);
@@ -341,13 +355,12 @@ void xive2_nvgc_pic_print_info(Xive2Nvgc *nvgc, uint32_t nvgc_idx, GString *buf)
 static void xive2_end_enqueue(Xive2End *end, uint32_t data)
 {
     uint64_t qaddr_base = xive2_end_qaddr(end);
-    uint32_t qsize = xive_get_field32(END2_W3_QSIZE, end->w3);
     uint32_t qindex = xive_get_field32(END2_W1_PAGE_OFF, end->w1);
     uint32_t qgen = xive_get_field32(END2_W1_GENERATION, end->w1);
 
     uint64_t qaddr = qaddr_base + (qindex << 2);
     uint32_t qdata = cpu_to_be32((qgen << 31) | (data & 0x7fffffff));
-    uint32_t qentries = 1 << (qsize + 10);
+    uint32_t qentries = xive2_end_get_qentries(end);
 
     if (dma_memory_write(&address_space_memory, qaddr, &qdata, sizeof(qdata),
                          MEMTXATTRS_UNSPECIFIED)) {
-- 
2.50.1




* [PULL 04/50] ppc/xive2: Remote VSDs need to match on forwarding address
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (2 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 03/50] ppc/xive2: Fix calculation of END queue sizes Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 05/50] ppc/xive2: fix context push calculation of IPB priority Cédric Le Goater
                   ` (47 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Glenn Miles, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Michael Kowal <kowal@linux.ibm.com>

In a multi-chip environment there will be remote/forwarded VSDs.  The check
to find the matching INT controller (XIVE) for a remote block number was
comparing against the INT controller's chip number, but block numbers are not
tied to a chip number.  The matching remote INT controller is the one whose
MMIO BAR for the VSD type matches the forwarded VSD address.

Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-5-npiggin@gmail.com
[ clg: Fixed log format in pnv_xive2_get_remote() ]
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index ec8b0c68f1a4..6b724fe762f6 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -101,12 +101,10 @@ static uint32_t pnv_xive2_block_id(PnvXive2 *xive)
 }
 
 /*
- * Remote access to controllers. HW uses MMIOs. For now, a simple scan
- * of the chips is good enough.
- *
- * TODO: Block scope support
+ * Remote access to INT controllers. HW uses MMIOs(?). For now, a simple
+ * scan of all the chips INT controller is good enough.
  */
-static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
+static PnvXive2 *pnv_xive2_get_remote(uint32_t vsd_type, hwaddr fwd_addr)
 {
     PnvMachineState *pnv = PNV_MACHINE(qdev_get_machine());
     int i;
@@ -115,10 +113,23 @@ static PnvXive2 *pnv_xive2_get_remote(uint8_t blk)
         Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
         PnvXive2 *xive = &chip10->xive;
 
-        if (pnv_xive2_block_id(xive) == blk) {
+        /*
+         * Is this the XIVE matching the forwarded VSD address is for this
+         * VSD type
+         */
+        if ((vsd_type == VST_ESB   && fwd_addr == xive->esb_base) ||
+            (vsd_type == VST_END   && fwd_addr == xive->end_base)  ||
+            ((vsd_type == VST_NVP ||
+              vsd_type == VST_NVG) && fwd_addr == xive->nvpg_base) ||
+            (vsd_type == VST_NVC   && fwd_addr == xive->nvc_base)) {
             return xive;
         }
     }
+
+    qemu_log_mask(LOG_GUEST_ERROR,
+                 "XIVE: >>>>> %s vsd_type %u  fwd_addr 0x%"HWADDR_PRIx
+                  " NOT FOUND\n",
+                  __func__, vsd_type, fwd_addr);
     return NULL;
 }
 
@@ -251,8 +262,7 @@ static uint64_t pnv_xive2_vst_addr(PnvXive2 *xive, uint32_t type, uint8_t blk,
 
     /* Remote VST access */
     if (GETFIELD(VSD_MODE, vsd) == VSD_MODE_FORWARD) {
-        xive = pnv_xive2_get_remote(blk);
-
+        xive = pnv_xive2_get_remote(type, (vsd & VSD_ADDRESS_MASK));
         return xive ? pnv_xive2_vst_addr(xive, type, blk, idx) : 0;
     }
 
-- 
2.50.1




* [PULL 05/50] ppc/xive2: fix context push calculation of IPB priority
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (3 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 06/50] ppc/xive: Fix PHYS NSR ring matching Cédric Le Goater
                   ` (46 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Caleb Schlossin, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Pushing a context and loading IPB from NVP is defined to merge ('or')
that IPB into the TIMA IPB register. PIPR should therefore be calculated
based on the final IPB value, not just the NVP value.
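
As an illustration (priorities invented for the example): if the TIMA
IPB already has priority 2 pending and the NVP being pushed carries
priority 5 in its IPB, the merged IPB contains both, so PIPR must be
computed as 2 (the more favored priority), not 5 as the NVP value alone
would suggest.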

Fixes: 9d2b6058c5b ("ppc/xive2: Add grouping level to notification")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-6-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index cb75ca879853..01cf96a2af65 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -835,8 +835,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
         nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, 0);
         xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 2);
     }
+    /* IPB bits in the backlog are merged with the TIMA IPB bits */
     regs[TM_IPB] |= ipb;
-    backlog_prio = xive_ipb_to_pipr(ipb);
+    backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
     backlog_level = 0;
 
     first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
-- 
2.50.1




* [PULL 06/50] ppc/xive: Fix PHYS NSR ring matching
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (4 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 05/50] ppc/xive2: fix context push calculation of IPB priority Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Cédric Le Goater
                   ` (45 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Test that the NSR exception bit field is equal to the pool ring value,
rather than testing for any common bits being set, which is more correct
(although there is no practical bug because the LSI NSR type is not
implemented and POOL/PHYS NSR are encoded with exclusive bits).

Fixes: 4c3ccac636 ("pnv/xive: Add special handling for pool targets")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-7-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 120376fb6b6d..bc829bebe9d0 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -54,7 +54,8 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
         uint8_t *alt_regs;
 
         /* POOL interrupt uses IPB in QW2, POOL ring */
-        if ((ring == TM_QW3_HV_PHYS) && (nsr & (TM_QW3_NSR_HE_POOL << 6))) {
+        if ((ring == TM_QW3_HV_PHYS) &&
+            ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
             alt_ring = TM_QW2_HV_POOL;
         } else {
             alt_ring = ring;
-- 
2.50.1




* [PULL 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (5 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 06/50] ppc/xive: Fix PHYS NSR ring matching Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 08/50] ppc/xive2: Use fair irq target search algorithm Cédric Le Goater
                   ` (44 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Glenn Miles, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Michael Kowal <kowal@linux.ibm.com>

When the END Event Queue wraps, the END EQ Generation bit is flipped and the
Generation Flipped bit is set to one.  On an END cache Watch read operation,
the Generation Flipped bit needs to be reset.

While debugging an error, the "END not valid" error messages were also
modified to include the method name, since they were all the same.

Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-8-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2.c | 3 ++-
 hw/intc/xive2.c     | 4 ++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 6b724fe762f6..ec247ce48ff7 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1325,10 +1325,11 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
     case VC_ENDC_WATCH3_DATA0:
         /*
          * Load DATA registers from cache with data requested by the
-         * SPEC register
+         * SPEC register.  Clear gen_flipped bit in word 1.
          */
         watch_engine = (offset - VC_ENDC_WATCH0_DATA0) >> 6;
         pnv_xive2_end_cache_load(xive, watch_engine);
+        xive->vc_regs[reg] &= ~(uint64_t)END2_W1_GEN_FLIPPED;
         val = xive->vc_regs[reg];
         break;
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 01cf96a2af65..edf5d9eb94cb 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -374,8 +374,8 @@ static void xive2_end_enqueue(Xive2End *end, uint32_t data)
         qgen ^= 1;
         end->w1 = xive_set_field32(END2_W1_GENERATION, end->w1, qgen);
 
-        /* TODO(PowerNV): reset GF bit on a cache watch operation */
-        end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, qgen);
+        /* Set gen flipped to 1, it gets reset on a cache watch operation */
+        end->w1 = xive_set_field32(END2_W1_GEN_FLIPPED, end->w1, 1);
     }
     end->w1 = xive_set_field32(END2_W1_PAGE_OFF, end->w1, qindex);
 }
-- 
2.50.1




* [PULL 08/50] ppc/xive2: Use fair irq target search algorithm
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (6 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Cédric Le Goater
                   ` (43 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

The current xive algorithm for finding a matching group vCPU
target always uses the first vCPU found.  And, since it always
starts the search with thread 0 of a core, thread 0 is almost
always used to handle group interrupts.  This can lead to additional
interrupt latency and poor performance for interrupt-intensive
workloads.

Change this to use a simple round-robin algorithm for deciding which
thread number to use when starting a search, which leads to a more
distributed use of threads for handling group interrupts.

[npiggin: Also round-robin among threads, not just cores]
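
Roughly speaking (illustrative, and glossing over details of the actual
bookkeeping): with 4 threads per core, successive group deliveries no
longer all begin their search at thread 0 of core 0; each search starts
just after the thread that handled the previous group interrupt, so the
work spreads across threads and cores.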

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-9-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index ec247ce48ff7..25dc8a372d2f 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -643,13 +643,18 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
     int i, j;
     bool gen1_tima_os =
         xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
+    static int next_start_core;
+    static int next_start_thread;
+    int start_core = next_start_core;
+    int start_thread = next_start_thread;
 
     for (i = 0; i < chip->nr_cores; i++) {
-        PnvCore *pc = chip->cores[i];
+        PnvCore *pc = chip->cores[(i + start_core) % chip->nr_cores];
         CPUCore *cc = CPU_CORE(pc);
 
         for (j = 0; j < cc->nr_threads; j++) {
-            PowerPCCPU *cpu = pc->threads[j];
+            /* Start search for match with different thread each call */
+            PowerPCCPU *cpu = pc->threads[(j + start_thread) % cc->nr_threads];
             XiveTCTX *tctx;
             int ring;
 
@@ -694,6 +699,15 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
                     if (!match->tctx) {
                         match->ring = ring;
                         match->tctx = tctx;
+
+                        next_start_thread = j + start_thread + 1;
+                        if (next_start_thread >= cc->nr_threads) {
+                            next_start_thread = 0;
+                            next_start_core = i + start_core + 1;
+                            if (next_start_core >= chip->nr_cores) {
+                                next_start_core = 0;
+                            }
+                        }
                     }
                     count++;
                 }
-- 
2.50.1




* [PULL 09/50] ppc/xive2: Fix irq preempted by lower priority group irq
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (7 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 08/50] ppc/xive2: Use fair irq target search algorithm Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Cédric Le Goater
                   ` (42 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

A problem was seen where uart interrupts would be lost resulting in the
console hanging. Traces showed that a lower priority interrupt was
preempting a higher priority interrupt, which would result in the higher
priority interrupt never being handled.

The new interrupt's priority was being compared against the CPPR
(Current Processor Priority Register) instead of the PIPR (Post
Interrupt Priority Register), as required by the XIVE spec.
This allowed for a window between raising an interrupt and ACK'ing
the interrupt where a lower priority interrupt could slip in.
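
For example (illustrative priorities): with CPPR = 6, a priority-3
interrupt is presented, so PIPR becomes 3. Before the CPU acks it, a
priority-5 interrupt arrives. Checking against CPPR (5 < 6) would let
it be presented over the already pending priority-3 interrupt, whereas
checking against PIPR (5 < 3 is false) correctly precludes it.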

Fixes: 26c55b99418 ("ppc/xive2: Process group backlog when updating the CPPR")
Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-10-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index edf5d9eb94cb..36e842f041e5 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1283,7 +1283,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
      * priority to know if the thread can take the interrupt now or if
      * it is precluded.
      */
-    if (priority < alt_regs[TM_CPPR]) {
+    if (priority < alt_regs[TM_PIPR]) {
         return false;
     }
     return true;
-- 
2.50.1




* [PULL 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (8 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Cédric Le Goater
                   ` (41 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

According to the XIVE spec, updating the CPPR should also update the
PIPR. The final value of the PIPR depends on other factors, but it
should never be set to a value that is above the CPPR.

Also added support for redistributing an active group interrupt when it
is precluded as a result of changing the CPPR value.
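
For instance (illustrative values): if the OS sets CPPR to 5 while no
interrupt is pending (so the PIPR recomputed from IPB would be 0xFF),
PIPR is now set to 5 rather than left at 0xFF; a pending priority-2
interrupt, on the other hand, keeps PIPR at 2, since 2 is already below
the new CPPR.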

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-11-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 36e842f041e5..c23933f8f550 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -995,7 +995,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
             }
         }
     }
-    regs[TM_PIPR] = pipr_min;
+
+    /* PIPR should not be set to a value greater than CPPR */
+    regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
 
     rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
     if (rc) {
-- 
2.50.1




* [PULL 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (9 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 12/50] ppc/xive2: Set CPPR delivery should account for group priority Cédric Le Goater
                   ` (40 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Group interrupts should not be taken from the backlog and presented
if they are precluded by CPPR.

Fixes: 855434b3b8 ("ppc/xive2: Process group backlog when pushing an OS context")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-12-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index c23933f8f550..181d1ae5f940 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -845,7 +845,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
         group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
                                                   first_group, &group_level);
         regs[TM_LSMFB] = group_prio;
-        if (regs[TM_LGS] && group_prio < backlog_prio) {
+        if (regs[TM_LGS] && group_prio < backlog_prio &&
+            group_prio < regs[TM_CPPR]) {
+
             /* VP can take a group interrupt */
             xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
                                          group_prio, group_level);
-- 
2.50.1




* [PULL 12/50] ppc/xive2: Set CPPR delivery should account for group priority
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (10 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Cédric Le Goater
                   ` (39 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

The group interrupt delivery flow selects the group backlog scan if
LSMFB < IPB, but that scan may find an interrupt with a priority >=
IPB. In that case, the VP-direct interrupt should be chosen. This
extends to selecting the lowest prio between POOL and PHYS rings.

Implement this just by re-starting the selection logic if the
backlog irq was not found or priority did not match LSMFB (LSMFB
is updated so next time around it would see the right value and
not loop infinitely).
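
For example (illustrative values): LSMFB = 2 is stale and IPB holds a
pending priority-4 VP interrupt. Since LSMFB < IPB priority, the group
backlog is scanned, but it only turns up a priority-6 group interrupt.
LSMFB is updated to 6 and the selection re-runs, this time choosing the
priority-4 VP-direct interrupt.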

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-13-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 181d1ae5f940..cca121b5f13c 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -939,7 +939,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 {
     uint8_t *regs = &tctx->regs[ring];
     Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
-    uint8_t old_cppr, backlog_prio, first_group, group_level = 0;
+    uint8_t old_cppr, backlog_prio, first_group, group_level;
     uint8_t pipr_min, lsmfb_min, ring_min;
     bool group_enabled;
     uint32_t nvp_blk, nvp_idx;
@@ -961,10 +961,12 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
      * Recompute the PIPR based on local pending interrupts. It will
      * be adjusted below if needed in case of pending group interrupts.
      */
+again:
     pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
     group_enabled = !!regs[TM_LGS];
-    lsmfb_min = (group_enabled) ? regs[TM_LSMFB] : 0xff;
+    lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
     ring_min = ring;
+    group_level = 0;
 
     /* PHYS updates also depend on POOL values */
     if (ring == TM_QW3_HV_PHYS) {
@@ -998,9 +1000,6 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
         }
     }
 
-    /* PIPR should not be set to a value greater than CPPR */
-    regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
-
     rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
     if (rc) {
         qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
@@ -1019,7 +1018,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 
     if (group_enabled &&
         lsmfb_min < cppr &&
-        lsmfb_min < regs[TM_PIPR]) {
+        lsmfb_min < pipr_min) {
         /*
          * Thread has seen a group interrupt with a higher priority
          * than the new cppr or pending local interrupt. Check the
@@ -1048,12 +1047,25 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
                                                     nvp_blk, nvp_idx,
                                                     first_group, &group_level);
         tctx->regs[ring_min + TM_LSMFB] = backlog_prio;
-        if (backlog_prio != 0xFF) {
-            xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
-                                         backlog_prio, group_level);
-            regs[TM_PIPR] = backlog_prio;
+        if (backlog_prio != lsmfb_min) {
+            /*
+             * If the group backlog scan finds a less favored or no interrupt,
+             * then re-do the processing which may turn up a more favored
+             * interrupt from IPB or the other pool. Backlog should not
+             * find a priority < LSMFB.
+             */
+            g_assert(backlog_prio >= lsmfb_min);
+            goto again;
         }
+
+        xive2_presenter_backlog_decr(tctx->xptr, nvp_blk, nvp_idx,
+                                     backlog_prio, group_level);
+        pipr_min = backlog_prio;
     }
+
+    /* PIPR should not be set to a value greater than CPPR */
+    regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
+
     /* CPPR has changed, check if we need to raise a pending exception */
     xive_tctx_notify(tctx, ring_min, group_level);
 }
-- 
2.50.1




* [PULL 13/50] ppc/xive: tctx_notify should clear the precluded interrupt
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (11 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 12/50] ppc/xive2: Set CPPR delivery should account for group priority Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 14/50] ppc/xive: Explicitly zero NSR after accepting Cédric Le Goater
                   ` (38 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

If CPPR is lowered to preclude the pending interrupt, NSR should be
cleared and the qemu_irq should be lowered. This avoids some cases
of spurious interrupts.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-14-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index bc829bebe9d0..a0a60a24f510 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -110,6 +110,9 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
                                regs[TM_IPB], alt_regs[TM_PIPR],
                                alt_regs[TM_CPPR], alt_regs[TM_NSR]);
         qemu_irq_raise(xive_tctx_output(tctx, ring));
+    } else {
+        alt_regs[TM_NSR] = 0;
+        qemu_irq_lower(xive_tctx_output(tctx, ring));
     }
 }
 
-- 
2.50.1




* [PULL 14/50] ppc/xive: Explicitly zero NSR after accepting
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (12 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 15/50] ppc/xive: Move NSR decoding into helper functions Cédric Le Goater
                   ` (37 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Have xive_tctx_accept clear NSR in one shot rather than masking out bits
as they are tested, which makes it clear that NSR is reset to 0 and
avoids leaving a partial NSR value in the register.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-15-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index a0a60a24f510..b35d2ec1793e 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -68,13 +68,11 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
          * If the interrupt was for a specific VP, reset the pending
          * buffer bit, otherwise clear the logical server indicator
          */
-        if (regs[TM_NSR] & TM_NSR_GRP_LVL) {
-            regs[TM_NSR] &= ~TM_NSR_GRP_LVL;
-        } else {
+        if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
             alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
         }
 
-        /* Drop the exception bit and any group/crowd */
+        /* Clear the exception from NSR */
         regs[TM_NSR] = 0;
 
         trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
-- 
2.50.1




* [PULL 15/50] ppc/xive: Move NSR decoding into helper functions
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (13 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 14/50] ppc/xive: Explicitly zero NSR after accepting Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:21 ` [PULL 16/50] ppc/xive: Fix pulling pool and phys contexts Cédric Le Goater
                   ` (36 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Rather than functions to return masks to test NSR bits, have functions
to test those bits directly. This should be no functional change; it
just makes the code more readable.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-16-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h |  4 ++++
 hw/intc/xive.c        | 51 +++++++++++++++++++++++++++++++++++--------
 2 files changed, 46 insertions(+), 9 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 538f43868172..28f0f1b79ad7 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -365,6 +365,10 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
     return *((uint32_t *) &ring[TM_WORD2]);
 }
 
+bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
+bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
+uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
+
 /*
  * XIVE Router
  */
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index b35d2ec1793e..8537cad27b2a 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -25,6 +25,45 @@
 /*
  * XIVE Thread Interrupt Management context
  */
+bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
+{
+    switch (ring) {
+    case TM_QW1_OS:
+        return !!(nsr & TM_QW1_NSR_EO);
+    case TM_QW2_HV_POOL:
+    case TM_QW3_HV_PHYS:
+        return !!(nsr & TM_QW3_NSR_HE);
+    default:
+        g_assert_not_reached();
+    }
+}
+
+bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr)
+{
+    if ((nsr & TM_NSR_GRP_LVL) > 0) {
+        g_assert(xive_nsr_indicates_exception(ring, nsr));
+        return true;
+    }
+    return false;
+}
+
+uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr)
+{
+    /* NSR determines if pool/phys ring is for phys or pool interrupt */
+    if ((ring == TM_QW3_HV_PHYS) || (ring == TM_QW2_HV_POOL)) {
+        uint8_t he = (nsr & TM_QW3_NSR_HE) >> 6;
+
+        if (he == TM_QW3_NSR_HE_PHYS) {
+            return TM_QW3_HV_PHYS;
+        } else if (he == TM_QW3_NSR_HE_POOL) {
+            return TM_QW2_HV_POOL;
+        } else {
+            /* Don't support LSI mode */
+            g_assert_not_reached();
+        }
+    }
+    return ring;
+}
 
 static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
 {
@@ -48,18 +87,12 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
 
     qemu_irq_lower(xive_tctx_output(tctx, ring));
 
-    if (regs[TM_NSR] != 0) {
+    if (xive_nsr_indicates_exception(ring, nsr)) {
         uint8_t cppr = regs[TM_PIPR];
         uint8_t alt_ring;
         uint8_t *alt_regs;
 
-        /* POOL interrupt uses IPB in QW2, POOL ring */
-        if ((ring == TM_QW3_HV_PHYS) &&
-            ((nsr & TM_QW3_NSR_HE) == (TM_QW3_NSR_HE_POOL << 6))) {
-            alt_ring = TM_QW2_HV_POOL;
-        } else {
-            alt_ring = ring;
-        }
+        alt_ring = xive_nsr_exception_ring(ring, nsr);
         alt_regs = &tctx->regs[alt_ring];
 
         regs[TM_CPPR] = cppr;
@@ -68,7 +101,7 @@ static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
          * If the interrupt was for a specific VP, reset the pending
          * buffer bit, otherwise clear the logical server indicator
          */
-        if (!(regs[TM_NSR] & TM_NSR_GRP_LVL)) {
+        if (!xive_nsr_indicates_group_exception(ring, nsr)) {
             alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
         }
 
-- 
2.50.1




* [PULL 16/50] ppc/xive: Fix pulling pool and phys contexts
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (14 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 15/50] ppc/xive: Move NSR decoding into helper functions Cédric Le Goater
@ 2025-07-21 16:21 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 17/50] pnv/xive2: Support ESB Escalation Cédric Le Goater
                   ` (35 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:21 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

This improves the implementation of pulling pool and phys contexts in
XIVE1, by following the OS context pulling code more closely.

In particular, the old ring data is returned rather than the modified
data, and irq signals are reset on pull.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-17-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 66 ++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 58 insertions(+), 8 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 8537cad27b2a..5483a815ef07 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -241,25 +241,75 @@ static uint64_t xive_tm_ack_hv_reg(XivePresenter *xptr, XiveTCTX *tctx,
     return xive_tctx_accept(tctx, TM_QW3_HV_PHYS);
 }
 
+static void xive_pool_cam_decode(uint32_t cam, uint8_t *nvt_blk,
+                                 uint32_t *nvt_idx, bool *vp)
+{
+    if (nvt_blk) {
+        *nvt_blk = xive_nvt_blk(cam);
+    }
+    if (nvt_idx) {
+        *nvt_idx = xive_nvt_idx(cam);
+    }
+    if (vp) {
+        *vp = !!(cam & TM_QW2W2_VP);
+    }
+}
+
+static uint32_t xive_tctx_get_pool_cam(XiveTCTX *tctx, uint8_t *nvt_blk,
+                                       uint32_t *nvt_idx, bool *vp)
+{
+    uint32_t qw2w2 = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
+    uint32_t cam = be32_to_cpu(qw2w2);
+
+    xive_pool_cam_decode(cam, nvt_blk, nvt_idx, vp);
+    return qw2w2;
+}
+
+static void xive_tctx_set_pool_cam(XiveTCTX *tctx, uint32_t qw2w2)
+{
+    memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
+}
+
 static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                       hwaddr offset, unsigned size)
 {
-    uint32_t qw2w2_prev = xive_tctx_word2(&tctx->regs[TM_QW2_HV_POOL]);
     uint32_t qw2w2;
+    uint32_t qw2w2_new;
+    uint8_t nvt_blk;
+    uint32_t nvt_idx;
+    bool vp;
 
-    qw2w2 = xive_set_field32(TM_QW2W2_VP, qw2w2_prev, 0);
-    memcpy(&tctx->regs[TM_QW2_HV_POOL + TM_WORD2], &qw2w2, 4);
+    qw2w2 = xive_tctx_get_pool_cam(tctx, &nvt_blk, &nvt_idx, &vp);
+
+    if (!vp) {
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid POOL NVT %x/%x !?\n",
+                      nvt_blk, nvt_idx);
+    }
+
+    /* Invalidate CAM line */
+    qw2w2_new = xive_set_field32(TM_QW2W2_VP, qw2w2, 0);
+    xive_tctx_set_pool_cam(tctx, qw2w2_new);
+
+    xive_tctx_reset_signal(tctx, TM_QW1_OS);
+    xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
     return qw2w2;
 }
 
 static uint64_t xive_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                       hwaddr offset, unsigned size)
 {
-    uint8_t qw3b8_prev = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
-    uint8_t qw3b8;
+    uint8_t qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
+    uint8_t qw3b8_new;
+
+    qw3b8 = tctx->regs[TM_QW3_HV_PHYS + TM_WORD2];
+    if (!(qw3b8 & TM_QW3B8_VT)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid PHYS thread!?\n");
+    }
+    qw3b8_new = qw3b8 & ~TM_QW3B8_VT;
+    tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8_new;
 
-    qw3b8 = qw3b8_prev & ~TM_QW3B8_VT;
-    tctx->regs[TM_QW3_HV_PHYS + TM_WORD2] = qw3b8;
+    xive_tctx_reset_signal(tctx, TM_QW1_OS);
+    xive_tctx_reset_signal(tctx, TM_QW3_HV_PHYS);
     return qw3b8;
 }
 
@@ -489,7 +539,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     qw1w2 = xive_tctx_get_os_cam(tctx, &nvt_blk, &nvt_idx, &vo);
 
     if (!vo) {
-        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVT %x/%x !?\n",
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pull invalid OS NVT %x/%x !?\n",
                       nvt_blk, nvt_idx);
     }
 
-- 
2.50.1




* [PULL 17/50] pnv/xive2: Support ESB Escalation
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (15 preceding siblings ...)
  2025-07-21 16:21 ` [PULL 16/50] ppc/xive: Fix pulling pool and phys contexts Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 18/50] pnv/xive2: Print value in invalid register write logging Cédric Le Goater
                   ` (34 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Glenn Miles <milesg@linux.vnet.ibm.com>

Add support for XIVE ESB Interrupt Escalation.

Suggested-by: Michael Kowal <kowal@linux.ibm.com>
[This change was taken from a patch provided by Michael Kowal.]
Signed-off-by: Glenn Miles <milesg@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-18-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2.h      |  1 +
 include/hw/ppc/xive2_regs.h | 13 +++++---
 hw/intc/xive2.c             | 62 ++++++++++++++++++++++++++++++-------
 3 files changed, 59 insertions(+), 17 deletions(-)

diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 8cdf8191742e..2436ddb5e53c 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -80,6 +80,7 @@ int xive2_router_write_nvgc(Xive2Router *xrtr, bool crowd,
 uint32_t xive2_router_get_config(Xive2Router *xrtr);
 
 void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked);
+void xive2_notify(Xive2Router *xrtr, uint32_t lisn, bool pq_checked);
 
 /*
  * XIVE2 Presenter (POWER10)
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 3c28de8a304d..2c535ec0d0f1 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -39,15 +39,18 @@
 
 typedef struct Xive2Eas {
         uint64_t       w;
-#define EAS2_VALID                 PPC_BIT(0)
-#define EAS2_END_BLOCK             PPC_BITMASK(4, 7) /* Destination EQ block# */
-#define EAS2_END_INDEX             PPC_BITMASK(8, 31) /* Destination EQ index */
-#define EAS2_MASKED                PPC_BIT(32) /* Masked                 */
-#define EAS2_END_DATA              PPC_BITMASK(33, 63) /* written to the EQ */
+#define EAS2_VALID         PPC_BIT(0)
+#define EAS2_QOS           PPC_BIT(1, 2)       /* Quality of Service(unimp) */
+#define EAS2_RESUME        PPC_BIT(3)          /* END Resume(unimp) */
+#define EAS2_END_BLOCK     PPC_BITMASK(4, 7)   /* Destination EQ block# */
+#define EAS2_END_INDEX     PPC_BITMASK(8, 31)  /* Destination EQ index */
+#define EAS2_MASKED        PPC_BIT(32)         /* Masked */
+#define EAS2_END_DATA      PPC_BITMASK(33, 63) /* written to the EQ */
 } Xive2Eas;
 
 #define xive2_eas_is_valid(eas)   (be64_to_cpu((eas)->w) & EAS2_VALID)
 #define xive2_eas_is_masked(eas)  (be64_to_cpu((eas)->w) & EAS2_MASKED)
+#define xive2_eas_is_resume(eas)  (be64_to_cpu((eas)->w) & EAS2_RESUME)
 
 void xive2_eas_pic_print_info(Xive2Eas *eas, uint32_t lisn, GString *buf);
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index cca121b5f13c..541b05225cd2 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1552,18 +1552,39 @@ do_escalation:
         }
     }
 
-    /*
-     * The END trigger becomes an Escalation trigger
-     */
-    xive2_router_end_notify(xrtr,
-                           xive_get_field32(END2_W4_END_BLOCK,     end.w4),
-                           xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
-                           xive_get_field32(END2_W5_ESC_END_DATA,  end.w5));
+    if (xive2_end_is_escalate_end(&end)) {
+        /*
+         * Perform END Adaptive escalation processing
+         * The END trigger becomes an Escalation trigger
+         */
+        xive2_router_end_notify(xrtr,
+                               xive_get_field32(END2_W4_END_BLOCK,     end.w4),
+                               xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
+                               xive_get_field32(END2_W5_ESC_END_DATA,  end.w5));
+    } /* end END adaptive escalation */
+
+    else {
+        uint32_t lisn;              /* Logical Interrupt Source Number */
+
+        /*
+         *  Perform ESB escalation processing
+         *      E[N] == 1 --> N
+         *      Req[Block] <- E[ESB_Block]
+         *      Req[Index] <- E[ESB_Index]
+         *      Req[Offset] <- 0x000
+         *      Execute <ESB Store> Req command
+         */
+        lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK,     end.w4),
+                        xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
+
+        xive2_notify(xrtr, lisn, true /* pq_checked */);
+    }
+
+    return;
 }
 
-void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
 {
-    Xive2Router *xrtr = XIVE2_ROUTER(xn);
     uint8_t eas_blk = XIVE_EAS_BLOCK(lisn);
     uint32_t eas_idx = XIVE_EAS_INDEX(lisn);
     Xive2Eas eas;
@@ -1606,13 +1627,30 @@ void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
         return;
     }
 
+    /* TODO: add support for EAS resume */
+    if (xive2_eas_is_resume(&eas)) {
+        qemu_log_mask(LOG_UNIMP,
+                      "XIVE: EAS resume processing unimplemented - LISN %x\n",
+                      lisn);
+        return;
+    }
+
     /*
      * The event trigger becomes an END trigger
      */
     xive2_router_end_notify(xrtr,
-                             xive_get_field64(EAS2_END_BLOCK, eas.w),
-                             xive_get_field64(EAS2_END_INDEX, eas.w),
-                             xive_get_field64(EAS2_END_DATA,  eas.w));
+                            xive_get_field64(EAS2_END_BLOCK, eas.w),
+                            xive_get_field64(EAS2_END_INDEX, eas.w),
+                            xive_get_field64(EAS2_END_DATA,  eas.w));
+    return;
+}
+
+void xive2_router_notify(XiveNotifier *xn, uint32_t lisn, bool pq_checked)
+{
+    Xive2Router *xrtr = XIVE2_ROUTER(xn);
+
+    xive2_notify(xrtr, lisn, pq_checked);
+    return;
 }
 
 static const Property xive2_router_properties[] = {
-- 
2.50.1



* [PULL 18/50] pnv/xive2: Print value in invalid register write logging
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (16 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 17/50] pnv/xive2: Support ESB Escalation Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Cédric Le Goater
                   ` (33 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Glenn Miles, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Michael Kowal <kowal@linux.ibm.com>

This can make it easier to see what the target system is trying to
do.

[npiggin: split from larger patch]

Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-19-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 25dc8a372d2f..9d53537e3e09 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1197,7 +1197,8 @@ static void pnv_xive2_ic_cq_write(void *opaque, hwaddr offset,
     case CQ_FIRMASK_OR: /* FIR error reporting */
         break;
     default:
-        xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx, offset);
+        xive2_error(xive, "CQ: invalid write 0x%"HWADDR_PRIx" value 0x%"PRIx64,
+                    offset, val);
         return;
     }
 
@@ -1495,7 +1496,8 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
         break;
 
     default:
-        xive2_error(xive, "VC: invalid write @%"HWADDR_PRIx, offset);
+        xive2_error(xive, "VC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
+                    offset, val);
         return;
     }
 
@@ -1703,7 +1705,8 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
         break;
 
     default:
-        xive2_error(xive, "PC: invalid write @%"HWADDR_PRIx, offset);
+        xive2_error(xive, "PC: invalid write @0x%"HWADDR_PRIx" value 0x%"PRIx64,
+                    offset, val);
         return;
     }
 
@@ -1790,7 +1793,8 @@ static void pnv_xive2_ic_tctxt_write(void *opaque, hwaddr offset,
         xive->tctxt_regs[reg] = val;
         break;
     default:
-        xive2_error(xive, "TCTXT: invalid write @%"HWADDR_PRIx, offset);
+        xive2_error(xive, "TCTXT: invalid write @0x%"HWADDR_PRIx
+                    " data 0x%"PRIx64, offset, val);
         return;
     }
 }
@@ -1861,7 +1865,8 @@ static void pnv_xive2_xscom_write(void *opaque, hwaddr offset,
         pnv_xive2_ic_tctxt_write(opaque, mmio_offset, val, size);
         break;
     default:
-        xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx, offset);
+        xive2_error(xive, "XSCOM: invalid write @%"HWADDR_PRIx
+                    " value 0x%"PRIx64, offset, val);
     }
 }
 
@@ -1929,7 +1934,8 @@ static void pnv_xive2_ic_notify_write(void *opaque, hwaddr offset,
         break;
 
     default:
-        xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx, offset);
+        xive2_error(xive, "NOTIFY: invalid write @%"HWADDR_PRIx
+                    " value 0x%"PRIx64, offset, val);
     }
 }
 
@@ -1971,7 +1977,8 @@ static void pnv_xive2_ic_lsi_write(void *opaque, hwaddr offset,
 {
     PnvXive2 *xive = PNV_XIVE2(opaque);
 
-    xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx, offset);
+    xive2_error(xive, "LSI: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
+                offset, val);
 }
 
 static const MemoryRegionOps pnv_xive2_ic_lsi_ops = {
@@ -2074,7 +2081,8 @@ static void pnv_xive2_ic_sync_write(void *opaque, hwaddr offset,
         inject_type = PNV_XIVE2_QUEUE_NXC_ST_RMT_CI;
         break;
     default:
-        xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx, offset);
+        xive2_error(xive, "SYNC: invalid write @%"HWADDR_PRIx" value 0x%"PRIx64,
+                    offset, val);
         return;
     }
 
-- 
2.50.1



* [PULL 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (17 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 18/50] pnv/xive2: Print value in invalid register write logging Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Cédric Le Goater
                   ` (32 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Firmware expects to read back the WATCH_FULL bit from the VC_ENDC_WATCH_SPEC
register, so don't clear it on read.

Don't bother clearing the reads-as-zero CONFLICT bit because it's masked
at write already.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-20-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 9d53537e3e09..e15f414d0bb3 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1329,7 +1329,6 @@ static uint64_t pnv_xive2_ic_vc_read(void *opaque, hwaddr offset,
     case VC_ENDC_WATCH2_SPEC:
     case VC_ENDC_WATCH3_SPEC:
         watch_engine = (offset - VC_ENDC_WATCH0_SPEC) >> 6;
-        xive->vc_regs[reg] &= ~(VC_ENDC_WATCH_FULL | VC_ENDC_WATCH_CONFLICT);
         pnv_xive2_endc_cache_watch_release(xive, watch_engine);
         val = xive->vc_regs[reg];
         break;
-- 
2.50.1



* [PULL 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (18 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 21/50] ppc/xive2: add interrupt priority configuration flags Cédric Le Goater
                   ` (31 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Glenn Miles, Caleb Schlossin, Gautam Menghani,
	Cédric Le Goater

From: Michael Kowal <kowal@linux.ibm.com>

Writes to the Flush Control registers were logged as invalid even
though they are allowed. Clearing the unsupported want_cache_disable
feature is permitted, so don't log an error in that case.

Signed-off-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Reviewed-by: Caleb Schlossin <calebs@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-21-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2.c | 36 ++++++++++++++++++++++++++++++++----
 1 file changed, 32 insertions(+), 4 deletions(-)

diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index e15f414d0bb3..386175a68b29 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -1411,7 +1411,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
     /*
      * ESB cache updates (not modeled)
      */
-    /* case VC_ESBC_FLUSH_CTRL: */
+    case VC_ESBC_FLUSH_CTRL:
+        if (val & VC_ESBC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+            xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+                        " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+                        offset, val);
+            return;
+        }
+        break;
     case VC_ESBC_FLUSH_POLL:
         xive->vc_regs[VC_ESBC_FLUSH_CTRL >> 3] |= VC_ESBC_FLUSH_CTRL_POLL_VALID;
         /* ESB update */
@@ -1427,7 +1434,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
     /*
      * EAS cache updates (not modeled)
      */
-    /* case VC_EASC_FLUSH_CTRL: */
+    case VC_EASC_FLUSH_CTRL:
+        if (val & VC_EASC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+            xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+                        " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+                        offset, val);
+            return;
+        }
+        break;
     case VC_EASC_FLUSH_POLL:
         xive->vc_regs[VC_EASC_FLUSH_CTRL >> 3] |= VC_EASC_FLUSH_CTRL_POLL_VALID;
         /* EAS update */
@@ -1466,7 +1480,14 @@ static void pnv_xive2_ic_vc_write(void *opaque, hwaddr offset,
         break;
 
 
-    /* case VC_ENDC_FLUSH_CTRL: */
+    case VC_ENDC_FLUSH_CTRL:
+        if (val & VC_ENDC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+            xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+                        " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+                        offset, val);
+            return;
+        }
+        break;
     case VC_ENDC_FLUSH_POLL:
         xive->vc_regs[VC_ENDC_FLUSH_CTRL >> 3] |= VC_ENDC_FLUSH_CTRL_POLL_VALID;
         break;
@@ -1687,7 +1708,14 @@ static void pnv_xive2_ic_pc_write(void *opaque, hwaddr offset,
         pnv_xive2_nxc_update(xive, watch_engine);
         break;
 
-   /* case PC_NXC_FLUSH_CTRL: */
+    case PC_NXC_FLUSH_CTRL:
+        if (val & PC_NXC_FLUSH_CTRL_WANT_CACHE_DISABLE) {
+            xive2_error(xive, "VC: unsupported write @0x%"HWADDR_PRIx
+                        " value 0x%"PRIx64" bit[2] poll_want_cache_disable",
+                        offset, val);
+            return;
+        }
+        break;
     case PC_NXC_FLUSH_POLL:
         xive->pc_regs[PC_NXC_FLUSH_CTRL >> 3] |= PC_NXC_FLUSH_CTRL_POLL_VALID;
         break;
-- 
2.50.1



* [PULL 21/50] ppc/xive2: add interrupt priority configuration flags
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (19 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 22/50] ppc/xive2: Support redistribution of group interrupts Cédric Le Goater
                   ` (30 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

Adds support for extracting additional configuration flags from
the XIVE configuration register that are needed for redistribution
of group interrupts.

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-22-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/pnv_xive2_regs.h |  1 +
 include/hw/ppc/xive2.h   |  8 +++++---
 hw/intc/pnv_xive2.c      | 16 ++++++++++++----
 3 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/hw/intc/pnv_xive2_regs.h b/hw/intc/pnv_xive2_regs.h
index e8b87b3d2c13..d53300f709b0 100644
--- a/hw/intc/pnv_xive2_regs.h
+++ b/hw/intc/pnv_xive2_regs.h
@@ -66,6 +66,7 @@
 #define    CQ_XIVE_CFG_GEN1_TIMA_HYP_BLK0       PPC_BIT(26) /* 0 if bit[25]=0 */
 #define    CQ_XIVE_CFG_GEN1_TIMA_CROWD_DIS      PPC_BIT(27) /* 0 if bit[25]=0 */
 #define    CQ_XIVE_CFG_GEN1_END_ESX             PPC_BIT(28)
+#define    CQ_XIVE_CFG_EN_VP_GRP_PRIORITY       PPC_BIT(32) /* 0 if bit[25]=1 */
 #define    CQ_XIVE_CFG_EN_VP_SAVE_RESTORE       PPC_BIT(38) /* 0 if bit[25]=1 */
 #define    CQ_XIVE_CFG_EN_VP_SAVE_REST_STRICT   PPC_BIT(39) /* 0 if bit[25]=1 */
 
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 2436ddb5e53c..760b94a962e7 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -29,9 +29,11 @@ OBJECT_DECLARE_TYPE(Xive2Router, Xive2RouterClass, XIVE2_ROUTER);
  * Configuration flags
  */
 
-#define XIVE2_GEN1_TIMA_OS      0x00000001
-#define XIVE2_VP_SAVE_RESTORE   0x00000002
-#define XIVE2_THREADID_8BITS    0x00000004
+#define XIVE2_GEN1_TIMA_OS          0x00000001
+#define XIVE2_VP_SAVE_RESTORE       0x00000002
+#define XIVE2_THREADID_8BITS        0x00000004
+#define XIVE2_EN_VP_GRP_PRIORITY    0x00000008
+#define XIVE2_VP_INT_PRIO           0x00000030
 
 typedef struct Xive2RouterClass {
     SysBusDeviceClass parent;
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 386175a68b29..7b4a33228e05 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -605,20 +605,28 @@ static uint32_t pnv_xive2_get_config(Xive2Router *xrtr)
 {
     PnvXive2 *xive = PNV_XIVE2(xrtr);
     uint32_t cfg = 0;
+    uint64_t reg = xive->cq_regs[CQ_XIVE_CFG >> 3];
 
-    if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS) {
+    if (reg & CQ_XIVE_CFG_GEN1_TIMA_OS) {
         cfg |= XIVE2_GEN1_TIMA_OS;
     }
 
-    if (xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
+    if (reg & CQ_XIVE_CFG_EN_VP_SAVE_RESTORE) {
         cfg |= XIVE2_VP_SAVE_RESTORE;
     }
 
-    if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE,
-              xive->cq_regs[CQ_XIVE_CFG >> 3]) == CQ_XIVE_CFG_THREADID_8BITS) {
+    if (GETFIELD(CQ_XIVE_CFG_HYP_HARD_RANGE, reg) ==
+                      CQ_XIVE_CFG_THREADID_8BITS) {
         cfg |= XIVE2_THREADID_8BITS;
     }
 
+    if (reg & CQ_XIVE_CFG_EN_VP_GRP_PRIORITY) {
+        cfg |= XIVE2_EN_VP_GRP_PRIORITY;
+    }
+
+    cfg = SETFIELD(XIVE2_VP_INT_PRIO, cfg,
+                   GETFIELD(CQ_XIVE_CFG_VP_INT_PRIO, reg));
+
     return cfg;
 }
 
-- 
2.50.1



* [PULL 22/50] ppc/xive2: Support redistribution of group interrupts
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (20 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 21/50] ppc/xive2: add interrupt priority configuration flags Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 23/50] ppc/xive: Add more interrupt notification tracing Cédric Le Goater
                   ` (29 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

When an XIVE context is pulled while it has an active, unacknowledged
group interrupt, XIVE will check to see if a context on another thread
can handle the interrupt and, if so, notify that context.  If there
are no contexts that can handle the interrupt, then the interrupt is
added to a backlog and XIVE will attempt to escalate the interrupt,
if configured to do so, allowing the higher privileged handler to
activate a context that can handle the original interrupt.
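
For illustration, the redistribution flow added by this patch can be
summarized as the following simplified sketch (helper names are
abbreviations for this note, not the literal functions in the diff):

    /* context pulled while NSR still signals a group/crowd interrupt */
    if (nsr_indicates_group_exception(ring, nsr)) {
        /* the NSR group/crowd level selects the NVGC backing the irq */
        nvgc = get_nvgc(crowd, nvgc_blk, nvgc_idx);
        /* the NVGC names the END used to re-queue the interrupt */
        end_blk = GET(NVGC2_W1_END_BLK, nvgc.w1);
        end_idx = GET(NVGC2_W1_END_IDX, nvgc.w1) + (pipr % prio_limit);
        end_notify(end_blk, end_idx); /* new target, else backlog/escalate */
        clear_nsr_pipr(ring);         /* this context no longer signals it */
    }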

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-23-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2_regs.h |  3 ++
 hw/intc/xive2.c             | 84 +++++++++++++++++++++++++++++++++++--
 2 files changed, 83 insertions(+), 4 deletions(-)

diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index 2c535ec0d0f1..e22203814312 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -224,6 +224,9 @@ typedef struct Xive2Nvgc {
 #define NVGC2_W0_VALID             PPC_BIT32(0)
 #define NVGC2_W0_PGONEXT           PPC_BITMASK32(26, 31)
         uint32_t        w1;
+#define NVGC2_W1_PSIZE             PPC_BITMASK32(0, 1)
+#define NVGC2_W1_END_BLK           PPC_BITMASK32(4, 7)
+#define NVGC2_W1_END_IDX           PPC_BITMASK32(8, 31)
         uint32_t        w2;
         uint32_t        w3;
         uint32_t        w4;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 541b05225cd2..9ef372b6d1d9 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -19,6 +19,10 @@
 #include "hw/ppc/xive2_regs.h"
 #include "trace.h"
 
+static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
+                                    uint32_t end_idx, uint32_t end_data,
+                                    bool redistribute);
+
 uint32_t xive2_router_get_config(Xive2Router *xrtr)
 {
     Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
@@ -597,6 +601,68 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
     return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
 }
 
+static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
+                               uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
+{
+    uint8_t nsr = tctx->regs[ring + TM_NSR];
+    uint8_t crowd = NVx_CROWD_LVL(nsr);
+    uint8_t group = NVx_GROUP_LVL(nsr);
+    uint8_t nvgc_blk;
+    uint8_t nvgc_idx;
+    uint8_t end_blk;
+    uint32_t end_idx;
+    uint8_t pipr = tctx->regs[ring + TM_PIPR];
+    Xive2Nvgc nvgc;
+    uint8_t prio_limit;
+    uint32_t cfg;
+
+    /* convert crowd/group to blk/idx */
+    if (group > 0) {
+        nvgc_idx = (nvp_idx & (0xffffffff << group)) |
+                   ((1 << (group - 1)) - 1);
+    } else {
+        nvgc_idx = nvp_idx;
+    }
+
+    if (crowd > 0) {
+        crowd = (crowd == 3) ? 4 : crowd;
+        nvgc_blk = (nvp_blk & (0xffffffff << crowd)) |
+                   ((1 << (crowd - 1)) - 1);
+    } else {
+        nvgc_blk = nvp_blk;
+    }
+
+    /* Use blk/idx to retrieve the NVGC */
+    if (xive2_router_get_nvgc(xrtr, crowd, nvgc_blk, nvgc_idx, &nvgc)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: no %s %x/%x\n",
+                      crowd ? "NVC" : "NVG", nvgc_blk, nvgc_idx);
+        return;
+    }
+
+    /* retrieve the END blk/idx from the NVGC */
+    end_blk = xive_get_field32(NVGC2_W1_END_BLK, nvgc.w1);
+    end_idx = xive_get_field32(NVGC2_W1_END_IDX, nvgc.w1);
+
+    /* determine number of priorities being used */
+    cfg = xive2_router_get_config(xrtr);
+    if (cfg & XIVE2_EN_VP_GRP_PRIORITY) {
+        prio_limit = 1 << GETFIELD(NVGC2_W1_PSIZE, nvgc.w1);
+    } else {
+        prio_limit = 1 << GETFIELD(XIVE2_VP_INT_PRIO, cfg);
+    }
+
+    /* add priority offset to end index */
+    end_idx += pipr % prio_limit;
+
+    /* trigger the group END */
+    xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
+
+    /* clear interrupt indication for the context */
+    tctx->regs[ring + TM_NSR] = 0;
+    tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
+    xive_tctx_reset_signal(tctx, ring);
+}
+
 static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                   hwaddr offset, unsigned size, uint8_t ring)
 {
@@ -608,6 +674,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     uint8_t cur_ring;
     bool valid;
     bool do_save;
+    uint8_t nsr;
 
     xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
 
@@ -624,6 +691,12 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
         memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
     }
 
+    /* Active group/crowd interrupts need to be redistributed */
+    nsr = tctx->regs[ring + TM_NSR];
+    if (xive_nsr_indicates_group_exception(ring, nsr)) {
+        xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
+    }
+
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
         xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
     }
@@ -1352,7 +1425,8 @@ static bool xive2_router_end_es_notify(Xive2Router *xrtr, uint8_t end_blk,
  * message has the same parameters than in the function below.
  */
 static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
-                                    uint32_t end_idx, uint32_t end_data)
+                                    uint32_t end_idx, uint32_t end_data,
+                                    bool redistribute)
 {
     Xive2End end;
     uint8_t priority;
@@ -1380,7 +1454,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
         return;
     }
 
-    if (xive2_end_is_enqueue(&end)) {
+    if (!redistribute && xive2_end_is_enqueue(&end)) {
         xive2_end_enqueue(&end, end_data);
         /* Enqueuing event data modifies the EQ toggle and index */
         xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
@@ -1560,7 +1634,8 @@ do_escalation:
         xive2_router_end_notify(xrtr,
                                xive_get_field32(END2_W4_END_BLOCK,     end.w4),
                                xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
-                               xive_get_field32(END2_W5_ESC_END_DATA,  end.w5));
+                               xive_get_field32(END2_W5_ESC_END_DATA,  end.w5),
+                               false);
     } /* end END adaptive escalation */
 
     else {
@@ -1641,7 +1716,8 @@ void xive2_notify(Xive2Router *xrtr , uint32_t lisn, bool pq_checked)
     xive2_router_end_notify(xrtr,
                             xive_get_field64(EAS2_END_BLOCK, eas.w),
                             xive_get_field64(EAS2_END_INDEX, eas.w),
-                            xive_get_field64(EAS2_END_DATA,  eas.w));
+                            xive_get_field64(EAS2_END_DATA,  eas.w),
+                            false);
     return;
 }
 
-- 
2.50.1



* [PULL 23/50] ppc/xive: Add more interrupt notification tracing
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (21 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 22/50] ppc/xive2: Support redistribution of group interrupts Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 24/50] ppc/xive2: Improve pool regs variable name Cédric Le Goater
                   ` (28 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

Add more tracing around notification, redistribution, and escalation.

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-24-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c       |  3 +++
 hw/intc/xive2.c      | 13 ++++++++-----
 hw/intc/trace-events |  6 ++++++
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 5483a815ef07..d65394651650 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1276,6 +1276,7 @@ static uint64_t xive_source_esb_read(void *opaque, hwaddr addr, unsigned size)
 
         /* Forward the source event notification for routing */
         if (ret) {
+            trace_xive_source_notify(srcno);
             xive_source_notify(xsrc, srcno);
         }
         break;
@@ -1371,6 +1372,8 @@ out:
     /* Forward the source event notification for routing */
     if (notify) {
         xive_source_notify(xsrc, srcno);
+    } else {
+        trace_xive_source_blocked(srcno);
     }
 }
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 9ef372b6d1d9..f810e716dee3 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -616,6 +616,7 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
     uint8_t prio_limit;
     uint32_t cfg;
 
+    trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
     /* convert crowd/group to blk/idx */
     if (group > 0) {
         nvgc_idx = (nvp_idx & (0xffffffff << group)) |
@@ -1455,6 +1456,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
     }
 
     if (!redistribute && xive2_end_is_enqueue(&end)) {
+        trace_xive_end_enqueue(end_blk, end_idx, end_data);
         xive2_end_enqueue(&end, end_data);
         /* Enqueuing event data modifies the EQ toggle and index */
         xive2_router_write_end(xrtr, end_blk, end_idx, &end, 1);
@@ -1631,11 +1633,11 @@ do_escalation:
          * Perform END Adaptive escalation processing
          * The END trigger becomes an Escalation trigger
          */
-        xive2_router_end_notify(xrtr,
-                               xive_get_field32(END2_W4_END_BLOCK,     end.w4),
-                               xive_get_field32(END2_W4_ESC_END_INDEX, end.w4),
-                               xive_get_field32(END2_W5_ESC_END_DATA,  end.w5),
-                               false);
+        uint8_t esc_blk = xive_get_field32(END2_W4_END_BLOCK, end.w4);
+        uint32_t esc_idx = xive_get_field32(END2_W4_ESC_END_INDEX, end.w4);
+        uint32_t esc_data = xive_get_field32(END2_W5_ESC_END_DATA, end.w5);
+        trace_xive_escalate_end(end_blk, end_idx, esc_blk, esc_idx, esc_data);
+        xive2_router_end_notify(xrtr, esc_blk, esc_idx, esc_data, false);
     } /* end END adaptive escalation */
 
     else {
@@ -1652,6 +1654,7 @@ do_escalation:
         lisn = XIVE_EAS(xive_get_field32(END2_W4_END_BLOCK,     end.w4),
                         xive_get_field32(END2_W4_ESC_END_INDEX, end.w4));
 
+        trace_xive_escalate_esb(end_blk, end_idx, lisn);
         xive2_notify(xrtr, lisn, true /* pq_checked */);
     }
 
diff --git a/hw/intc/trace-events b/hw/intc/trace-events
index 9ed2616e58fe..018c609ca5eb 100644
--- a/hw/intc/trace-events
+++ b/hw/intc/trace-events
@@ -279,6 +279,8 @@ xive_tctx_notify(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_
 xive_tctx_set_cppr(uint32_t index, uint8_t ring, uint8_t ipb, uint8_t pipr, uint8_t cppr, uint8_t nsr) "target=%d ring=0x%x IPB=0x%02x PIPR=0x%02x new CPPR=0x%02x NSR=0x%02x"
 xive_source_esb_read(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
 xive_source_esb_write(uint64_t addr, uint32_t srcno, uint64_t value) "@0x%"PRIx64" IRQ 0x%x val=0x%"PRIx64
+xive_source_notify(uint32_t srcno) "Processing notification for queued IRQ 0x%x"
+xive_source_blocked(uint32_t srcno) "No action needed for IRQ 0x%x currently"
 xive_router_end_notify(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "END 0x%02x/0x%04x -> enqueue 0x%08x"
 xive_router_end_escalate(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t end_data) "END 0x%02x/0x%04x -> escalate END 0x%02x/0x%04x data 0x%08x"
 xive_tctx_tm_write(uint32_t index, uint64_t offset, unsigned int size, uint64_t value) "target=%d @0x%"PRIx64" sz=%d val=0x%" PRIx64
@@ -289,6 +291,10 @@ xive_end_source_read(uint8_t end_blk, uint32_t end_idx, uint64_t addr) "END 0x%x
 # xive2.c
 xive_nvp_backlog_op(uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint8_t rc) "NVP 0x%x/0x%x operation=%d priority=%d rc=%d"
 xive_nvgc_backlog_op(bool c, uint8_t blk, uint32_t idx, uint8_t op, uint8_t priority, uint32_t rc) "NVGC crowd=%d 0x%x/0x%x operation=%d priority=%d rc=%d"
+xive_redistribute(uint32_t index, uint8_t ring, uint8_t end_blk, uint32_t end_idx) "Redistribute from target=%d ring=0x%x NVP 0x%x/0x%x"
+xive_end_enqueue(uint8_t end_blk, uint32_t end_idx, uint32_t end_data) "Queue event for END 0x%x/0x%x data=0x%x"
+xive_escalate_end(uint8_t end_blk, uint32_t end_idx, uint8_t esc_blk, uint32_t esc_idx, uint32_t esc_data) "Escalate from END 0x%x/0x%x to END 0x%x/0x%x data=0x%x"
+xive_escalate_esb(uint8_t end_blk, uint32_t end_idx, uint32_t lisn) "Escalate from END 0x%x/0x%x to LISN=0x%x"
 
 # pnv_xive.c
 pnv_xive_ic_hw_trigger(uint64_t addr, uint64_t val) "@0x%"PRIx64" val=0x%"PRIx64
-- 
2.50.1



* [PULL 24/50] ppc/xive2: Improve pool regs variable name
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (22 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 23/50] ppc/xive: Add more interrupt notification tracing Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Cédric Le Goater
                   ` (27 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

Change pregs to pool_regs, for clarity.

[npiggin: split from larger patch]

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-25-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index f810e716dee3..1f4713dabeb1 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1044,13 +1044,12 @@ again:
 
     /* PHYS updates also depend on POOL values */
     if (ring == TM_QW3_HV_PHYS) {
-        uint8_t *pregs = &tctx->regs[TM_QW2_HV_POOL];
+        uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
 
         /* POOL values only matter if POOL ctx is valid */
-        if (pregs[TM_WORD2] & 0x80) {
-
-            uint8_t pool_pipr = xive_ipb_to_pipr(pregs[TM_IPB]);
-            uint8_t pool_lsmfb = pregs[TM_LSMFB];
+        if (pool_regs[TM_WORD2] & 0x80) {
+            uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
+            uint8_t pool_lsmfb = pool_regs[TM_LSMFB];
 
             /*
              * Determine highest priority interrupt and
@@ -1064,7 +1063,7 @@ again:
             }
 
             /* Values needed for group priority calculation */
-            if (pregs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
+            if (pool_regs[TM_LGS] && (pool_lsmfb < lsmfb_min)) {
                 group_enabled = true;
                 lsmfb_min = pool_lsmfb;
                 if (lsmfb_min < pipr_min) {
-- 
2.50.1



* [PULL 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (23 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 24/50] ppc/xive2: Improve pool regs variable name Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Cédric Le Goater
                   ` (26 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

Booting AIX in a PowerVM partition requires the use of the "Acknowledge
O/S Interrupt to even O/S reporting line" special operation provided by
the IBM XIVE interrupt controller. This operation is invoked by writing
a byte (data is irrelevant) to offset 0xC10 of the Thread Interrupt
Management Area (TIMA). It can be used by software to notify the XIVE
logic that the interrupt was received.
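
As a rough guest-side illustration (hypothetical code, not part of this
patch; only the 0xC10 offset comes from the description above), the
operation amounts to a single byte store into the OS TIMA page:

    #define TIMA_SPC_ACK_OS_EL  0xC10  /* "Ack OS IRQ to even report line" */

    static inline void ack_os_irq_even_line(volatile uint8_t *tima_os_page)
    {
        tima_os_page[TIMA_SPC_ACK_OS_EL] = 0; /* written data is irrelevant */
    }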

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-26-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h  |  1 +
 include/hw/ppc/xive2.h |  3 ++-
 hw/intc/xive.c         |  8 ++++---
 hw/intc/xive2.c        | 50 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 28f0f1b79ad7..46d05d74fbfb 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -561,6 +561,7 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                            uint8_t group_level);
 void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
 void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
+uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
 
 /*
  * KVM XIVE device helpers
diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 760b94a962e7..ff02ce254984 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -142,5 +142,6 @@ void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
                             hwaddr offset, uint64_t value, unsigned size);
 void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
                                hwaddr offset, uint64_t value, unsigned size);
-
+void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
+                        hwaddr offset, uint64_t value, unsigned size);
 #endif /* PPC_XIVE2_H */
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index d65394651650..8b705727ae2d 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -80,7 +80,7 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
         }
 }
 
-static uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
+uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
 {
     uint8_t *regs = &tctx->regs[ring];
     uint8_t nsr = regs[TM_NSR];
@@ -340,14 +340,14 @@ static uint64_t xive_tm_vt_poll(XivePresenter *xptr, XiveTCTX *tctx,
 
 static const uint8_t xive_tm_hw_view[] = {
     3, 0, 0, 0,   0, 0, 0, 0,   3, 3, 3, 3,   0, 0, 0, 0, /* QW-0 User */
-    3, 3, 3, 3,   3, 3, 0, 2,   3, 3, 3, 3,   0, 0, 0, 0, /* QW-1 OS   */
+    3, 3, 3, 3,   3, 3, 0, 2,   3, 3, 3, 3,   0, 0, 0, 3, /* QW-1 OS   */
     0, 0, 3, 3,   0, 3, 3, 0,   3, 3, 3, 3,   0, 0, 0, 0, /* QW-2 POOL */
     3, 3, 3, 3,   0, 3, 0, 2,   3, 0, 0, 3,   3, 3, 3, 0, /* QW-3 PHYS */
 };
 
 static const uint8_t xive_tm_hv_view[] = {
     3, 0, 0, 0,   0, 0, 0, 0,   3, 3, 3, 3,   0, 0, 0, 0, /* QW-0 User */
-    3, 3, 3, 3,   3, 3, 0, 2,   3, 3, 3, 3,   0, 0, 0, 0, /* QW-1 OS   */
+    3, 3, 3, 3,   3, 3, 0, 2,   3, 3, 3, 3,   0, 0, 0, 3, /* QW-1 OS   */
     0, 0, 3, 3,   0, 3, 3, 0,   0, 3, 3, 3,   0, 0, 0, 0, /* QW-2 POOL */
     3, 3, 3, 3,   0, 3, 0, 2,   3, 0, 0, 3,   0, 0, 0, 0, /* QW-3 PHYS */
 };
@@ -718,6 +718,8 @@ static const XiveTmOp xive2_tm_operations[] = {
                                                      xive_tm_pull_phys_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL,   1, xive2_tm_pull_phys_ctx_ol,
                                                      NULL },
+    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL,          1, xive2_tm_ack_os_el,
+                                                     NULL },
 };
 
 static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 1f4713dabeb1..e7e364c13e7d 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1009,6 +1009,56 @@ static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
     return 0;
 }
 
+static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
+                                 uint8_t ring, uint8_t cl_ring)
+{
+    uint64_t rd;
+    Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+    uint32_t nvp_blk, nvp_idx, xive2_cfg;
+    Xive2Nvp nvp;
+    uint64_t phys_addr;
+    uint8_t OGen = 0;
+
+    xive2_tctx_get_nvp_indexes(tctx, cl_ring, &nvp_blk, &nvp_idx);
+
+    if (xive2_router_get_nvp(xrtr, (uint8_t)nvp_blk, nvp_idx, &nvp)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
+                      nvp_blk, nvp_idx);
+        return;
+    }
+
+    if (!xive2_nvp_is_valid(&nvp)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid NVP %x/%x\n",
+                      nvp_blk, nvp_idx);
+        return;
+    }
+
+
+    rd = xive_tctx_accept(tctx, ring);
+
+    if (ring == TM_QW1_OS) {
+        OGen = tctx->regs[ring + TM_OGEN];
+    }
+    xive2_cfg = xive2_router_get_config(xrtr);
+    phys_addr = xive2_nvp_reporting_addr(&nvp);
+    uint8_t report_data[REPORT_LINE_GEN1_SIZE];
+    memset(report_data, 0xff, sizeof(report_data));
+    if ((OGen == 1) || (xive2_cfg & XIVE2_GEN1_TIMA_OS)) {
+        report_data[8] = (rd >> 8) & 0xff;
+        report_data[9] = rd & 0xff;
+    } else {
+        report_data[0] = (rd >> 8) & 0xff;
+        report_data[1] = rd & 0xff;
+    }
+    cpu_physical_memory_write(phys_addr, report_data, REPORT_LINE_GEN1_SIZE);
+}
+
+void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
+                        hwaddr offset, uint64_t value, unsigned size)
+{
+    xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
+}
+
 static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 {
     uint8_t *regs = &tctx->regs[ring];
-- 
2.50.1



* [PULL 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (24 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Cédric Le Goater
                   ` (25 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

Add support for redistributing a presented group interrupt if it
is precluded as a result of changing the CPPR value. Without this,
group interrupts can be lost.
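
The check added below can be summarized roughly as follows (simplified
sketch, not the literal patch code):

    /* CPPR written to a more favored (lower) value */
    if (cppr < old_cppr && cppr <= pipr) {
        if (nsr_indicates_group_exception(ring, nsr)) {
            redistribute(ring);  /* re-queue the group irq via its END */
        } else {
            pipr = cppr;         /* VP-directed irq stays pending in IPB */
            notify(ring, 0);     /* clear the now-precluded signal */
        }
    }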

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-27-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 82 ++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 60 insertions(+), 22 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index e7e364c13e7d..624620e5b44b 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -601,20 +601,37 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
     return xive2_nvp_cam_line(blk, 1 << tid_shift | (pir & tid_mask));
 }
 
-static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
-                               uint8_t nvp_blk, uint32_t nvp_idx, uint8_t ring)
+static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
 {
-    uint8_t nsr = tctx->regs[ring + TM_NSR];
+    uint8_t *regs = &tctx->regs[ring];
+    uint8_t nsr = regs[TM_NSR];
+    uint8_t pipr = regs[TM_PIPR];
     uint8_t crowd = NVx_CROWD_LVL(nsr);
     uint8_t group = NVx_GROUP_LVL(nsr);
-    uint8_t nvgc_blk;
-    uint8_t nvgc_idx;
-    uint8_t end_blk;
-    uint32_t end_idx;
-    uint8_t pipr = tctx->regs[ring + TM_PIPR];
+    uint8_t nvgc_blk, end_blk, nvp_blk;
+    uint32_t nvgc_idx, end_idx, nvp_idx;
     Xive2Nvgc nvgc;
     uint8_t prio_limit;
     uint32_t cfg;
+    uint8_t alt_ring;
+    uint32_t target_ringw2;
+    uint32_t cam;
+    bool valid;
+    bool hw;
+
+    /* redistribution is only for group/crowd interrupts */
+    if (!xive_nsr_indicates_group_exception(ring, nsr)) {
+        return;
+    }
+
+    alt_ring = xive_nsr_exception_ring(ring, nsr);
+    target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
+    cam = be32_to_cpu(target_ringw2);
+
+    /* extract nvp block and index from targeted ring's cam */
+    xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
+
+    trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
 
     trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
     /* convert crowd/group to blk/idx */
@@ -659,8 +676,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx,
     xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
 
     /* clear interrupt indication for the context */
-    tctx->regs[ring + TM_NSR] = 0;
-    tctx->regs[ring + TM_PIPR] = tctx->regs[ring + TM_CPPR];
+    regs[TM_NSR] = 0;
+    regs[TM_PIPR] = regs[TM_CPPR];
     xive_tctx_reset_signal(tctx, ring);
 }
 
@@ -695,7 +712,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     /* Active group/crowd interrupts need to be redistributed */
     nsr = tctx->regs[ring + TM_NSR];
     if (xive_nsr_indicates_group_exception(ring, nsr)) {
-        xive2_redistribute(xrtr, tctx, nvp_blk, nvp_idx, ring);
+        xive2_redistribute(xrtr, tctx, ring);
     }
 
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
@@ -1059,6 +1076,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
     xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
 }
 
+/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
 static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 {
     uint8_t *regs = &tctx->regs[ring];
@@ -1069,10 +1087,11 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
     uint32_t nvp_blk, nvp_idx;
     Xive2Nvp nvp;
     int rc;
+    uint8_t nsr = regs[TM_NSR];
 
     trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
                              regs[TM_IPB], regs[TM_PIPR],
-                             cppr, regs[TM_NSR]);
+                             cppr, nsr);
 
     if (cppr > XIVE_PRIORITY_MAX) {
         cppr = 0xff;
@@ -1081,6 +1100,35 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
     old_cppr = regs[TM_CPPR];
     regs[TM_CPPR] = cppr;
 
+    /* Handle increased CPPR priority (lower value) */
+    if (cppr < old_cppr) {
+        if (cppr <= regs[TM_PIPR]) {
+            /* CPPR lowered below PIPR, must un-present interrupt */
+            if (xive_nsr_indicates_exception(ring, nsr)) {
+                if (xive_nsr_indicates_group_exception(ring, nsr)) {
+                    /* redistribute precluded active grp interrupt */
+                    xive2_redistribute(xrtr, tctx, ring);
+                    return;
+                }
+            }
+
+            /* interrupt is VP directed, pending in IPB */
+            regs[TM_PIPR] = cppr;
+            xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
+            return;
+        } else {
+            /* CPPR was lowered, but still above PIPR. No action needed. */
+            return;
+        }
+    }
+
+    /* CPPR didn't change, nothing needs to be done */
+    if (cppr == old_cppr) {
+        return;
+    }
+
+    /* CPPR priority decreased (higher value) */
+
     /*
      * Recompute the PIPR based on local pending interrupts. It will
      * be adjusted below if needed in case of pending group interrupts.
@@ -1129,16 +1177,6 @@ again:
         return;
     }
 
-    if (cppr < old_cppr) {
-        /*
-         * FIXME: check if there's a group interrupt being presented
-         * and if the new cppr prevents it. If so, then the group
-         * interrupt needs to be re-added to the backlog and
-         * re-triggered (see re-trigger END info in the NVGC
-         * structure)
-         */
-    }
-
     if (group_enabled &&
         lsmfb_min < cppr &&
         lsmfb_min < pipr_min) {
-- 
2.50.1



* [PULL 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (25 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 28/50] ppc/xive: Change presenter .match_nvt to match not present Cédric Le Goater
                   ` (24 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Glenn Miles <milesg@linux.ibm.com>

When disabling (pulling) an xive interrupt context, we need
to redistribute any active group interrupts to other threads
that can handle the interrupt if possible.  This support had
already been added for the OS context but had not yet been
added to the pool or physical context.

Signed-off-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-28-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2.h      |  4 ++
 include/hw/ppc/xive2_regs.h |  4 +-
 hw/intc/xive.c              | 12 ++---
 hw/intc/xive2.c             | 94 ++++++++++++++++++++++++++-----------
 4 files changed, 79 insertions(+), 35 deletions(-)

diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index ff02ce254984..a91b99057c2a 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -140,6 +140,10 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
 void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
 void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
                             hwaddr offset, uint64_t value, unsigned size);
+uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                                hwaddr offset, unsigned size);
+uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                                hwaddr offset, unsigned size);
 void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
                                hwaddr offset, uint64_t value, unsigned size);
 void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index e22203814312..f82054661bda 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -209,9 +209,9 @@ static inline uint32_t xive2_nvp_idx(uint32_t cam_line)
     return cam_line & ((1 << XIVE2_NVP_SHIFT) - 1);
 }
 
-static inline uint32_t xive2_nvp_blk(uint32_t cam_line)
+static inline uint8_t xive2_nvp_blk(uint32_t cam_line)
 {
-    return (cam_line >> XIVE2_NVP_SHIFT) & 0xf;
+    return (uint8_t)((cam_line >> XIVE2_NVP_SHIFT) & 0xf);
 }
 
 void xive2_nvp_pic_print_info(Xive2Nvp *nvp, uint32_t nvp_idx, GString *buf);
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 8b705727ae2d..2f72d6ecd5a5 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -693,7 +693,7 @@ static const XiveTmOp xive2_tm_operations[] = {
 
     /* MMIOs above 2K : special operations with side effects */
     { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG,         2, NULL,
-                                                     xive_tm_ack_os_reg },
+                                                   xive_tm_ack_os_reg },
     { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING,     1, xive_tm_set_os_pending,
                                                      NULL },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2,     4, NULL,
@@ -705,17 +705,17 @@ static const XiveTmOp xive2_tm_operations[] = {
     { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG,         2, NULL,
                                                      xive_tm_ack_hv_reg },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2,   4, NULL,
-                                                     xive_tm_pull_pool_ctx },
+                                                     xive2_tm_pull_pool_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      4, NULL,
-                                                     xive_tm_pull_pool_ctx },
+                                                     xive2_tm_pull_pool_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      8, NULL,
-                                                     xive_tm_pull_pool_ctx },
+                                                     xive2_tm_pull_pool_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL,     1, xive2_tm_pull_os_ctx_ol,
                                                      NULL },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2,   4, NULL,
-                                                     xive_tm_pull_phys_ctx },
+                                                     xive2_tm_pull_phys_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX,      1, NULL,
-                                                     xive_tm_pull_phys_ctx },
+                                                     xive2_tm_pull_phys_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL,   1, xive2_tm_pull_phys_ctx_ol,
                                                      NULL },
     { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL,          1, xive2_tm_ack_os_el,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 624620e5b44b..2791985cf29b 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -23,6 +23,9 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
                                     uint32_t end_idx, uint32_t end_data,
                                     bool redistribute);
 
+static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
+                                      uint8_t *nvp_blk, uint32_t *nvp_idx);
+
 uint32_t xive2_router_get_config(Xive2Router *xrtr)
 {
     Xive2RouterClass *xrc = XIVE2_ROUTER_GET_CLASS(xrtr);
@@ -604,8 +607,10 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
 static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
 {
     uint8_t *regs = &tctx->regs[ring];
-    uint8_t nsr = regs[TM_NSR];
-    uint8_t pipr = regs[TM_PIPR];
+    uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
+                                                   regs;
+    uint8_t nsr = alt_regs[TM_NSR];
+    uint8_t pipr = alt_regs[TM_PIPR];
     uint8_t crowd = NVx_CROWD_LVL(nsr);
     uint8_t group = NVx_GROUP_LVL(nsr);
     uint8_t nvgc_blk, end_blk, nvp_blk;
@@ -614,10 +619,6 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
     uint8_t prio_limit;
     uint32_t cfg;
     uint8_t alt_ring;
-    uint32_t target_ringw2;
-    uint32_t cam;
-    bool valid;
-    bool hw;
 
     /* redistribution is only for group/crowd interrupts */
     if (!xive_nsr_indicates_group_exception(ring, nsr)) {
@@ -625,11 +626,9 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
     }
 
     alt_ring = xive_nsr_exception_ring(ring, nsr);
-    target_ringw2 = xive_tctx_word2(&tctx->regs[alt_ring]);
-    cam = be32_to_cpu(target_ringw2);
 
-    /* extract nvp block and index from targeted ring's cam */
-    xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &hw);
+    /* Don't check return code since ring is expected to be invalidated */
+    xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
 
     trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
 
@@ -676,11 +675,23 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
     xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
 
     /* clear interrupt indication for the context */
-    regs[TM_NSR] = 0;
-    regs[TM_PIPR] = regs[TM_CPPR];
+    alt_regs[TM_NSR] = 0;
+    alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
     xive_tctx_reset_signal(tctx, ring);
 }
 
+static uint8_t xive2_hv_irq_ring(uint8_t nsr)
+{
+    switch (nsr >> 6) {
+    case TM_QW3_NSR_HE_POOL:
+        return TM_QW2_HV_POOL;
+    case TM_QW3_NSR_HE_PHYS:
+        return TM_QW3_HV_PHYS;
+    default:
+        return -1;
+    }
+}
+
 static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                   hwaddr offset, unsigned size, uint8_t ring)
 {
@@ -696,7 +707,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
 
     xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &valid, &do_save);
 
-    if (!valid) {
+    if (xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx)) {
         qemu_log_mask(LOG_GUEST_ERROR, "XIVE: pulling invalid NVP %x/%x !?\n",
                       nvp_blk, nvp_idx);
     }
@@ -706,13 +717,25 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
          cur_ring += XIVE_TM_RING_SIZE) {
         uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
         uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
+        bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
+        uint8_t alt_ring;
         memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
-    }
 
-    /* Active group/crowd interrupts need to be redistributed */
-    nsr = tctx->regs[ring + TM_NSR];
-    if (xive_nsr_indicates_group_exception(ring, nsr)) {
-        xive2_redistribute(xrtr, tctx, ring);
+        /* Skip the rest for USER or invalid contexts */
+        if ((cur_ring == TM_QW0_USER) || !is_valid) {
+            continue;
+        }
+
+        /* Active group/crowd interrupts need to be redistributed */
+        alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
+        nsr = tctx->regs[alt_ring + TM_NSR];
+        if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
+            /* For HV rings, only redistribute if cur_ring matches NSR */
+            if ((cur_ring == TM_QW1_OS) ||
+                (cur_ring == xive2_hv_irq_ring(nsr))) {
+                xive2_redistribute(xrtr, tctx, cur_ring);
+            }
+        }
     }
 
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
@@ -736,6 +759,18 @@ uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW1_OS);
 }
 
+uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                                hwaddr offset, unsigned size)
+{
+    return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW2_HV_POOL);
+}
+
+uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                                hwaddr offset, unsigned size)
+{
+    return xive2_tm_pull_ctx(xptr, tctx, offset, size, TM_QW3_HV_PHYS);
+}
+
 #define REPORT_LINE_GEN1_SIZE       16
 
 static void xive2_tm_report_line_gen1(XiveTCTX *tctx, uint8_t *data,
@@ -993,37 +1028,40 @@ void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     }
 }
 
+/* returns -1 if ring is invalid, but still populates block and index */
 static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
-                                      uint32_t *nvp_blk, uint32_t *nvp_idx)
+                                      uint8_t *nvp_blk, uint32_t *nvp_idx)
 {
-    uint32_t w2, cam;
+    uint32_t w2;
+    uint32_t cam = 0;
+    int rc = 0;
 
     w2 = xive_tctx_word2(&tctx->regs[ring]);
     switch (ring) {
     case TM_QW1_OS:
         if (!(be32_to_cpu(w2) & TM2_QW1W2_VO)) {
-            return -1;
+            rc = -1;
         }
         cam = xive_get_field32(TM2_QW1W2_OS_CAM, w2);
         break;
     case TM_QW2_HV_POOL:
         if (!(be32_to_cpu(w2) & TM2_QW2W2_VP)) {
-            return -1;
+            rc = -1;
         }
         cam = xive_get_field32(TM2_QW2W2_POOL_CAM, w2);
         break;
     case TM_QW3_HV_PHYS:
         if (!(be32_to_cpu(w2) & TM2_QW3W2_VT)) {
-            return -1;
+            rc = -1;
         }
         cam = xive2_tctx_hw_cam_line(tctx->xptr, tctx);
         break;
     default:
-        return -1;
+        rc = -1;
     }
     *nvp_blk = xive2_nvp_blk(cam);
     *nvp_idx = xive2_nvp_idx(cam);
-    return 0;
+    return rc;
 }
 
 static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
@@ -1031,7 +1069,8 @@ static void xive2_tctx_accept_el(XivePresenter *xptr, XiveTCTX *tctx,
 {
     uint64_t rd;
     Xive2Router *xrtr = XIVE2_ROUTER(xptr);
-    uint32_t nvp_blk, nvp_idx, xive2_cfg;
+    uint32_t nvp_idx, xive2_cfg;
+    uint8_t nvp_blk;
     Xive2Nvp nvp;
     uint64_t phys_addr;
     uint8_t OGen = 0;
@@ -1084,7 +1123,8 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
     uint8_t old_cppr, backlog_prio, first_group, group_level;
     uint8_t pipr_min, lsmfb_min, ring_min;
     bool group_enabled;
-    uint32_t nvp_blk, nvp_idx;
+    uint8_t nvp_blk;
+    uint32_t nvp_idx;
     Xive2Nvp nvp;
     int rc;
     uint8_t nsr = regs[TM_NSR];
-- 
2.50.1




* [PULL 28/50] ppc/xive: Change presenter .match_nvt to match not present
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (26 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Cédric Le Goater
                   ` (23 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Have the match_nvt method only perform a TCTX match and not present
the interrupt; the caller now presents it. This has no functional
change, but it allows for more complicated presentation logic after
matching.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-29-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h | 27 +++++++++++++----------
 hw/intc/pnv_xive.c    | 16 +++++++-------
 hw/intc/pnv_xive2.c   | 16 +++++++-------
 hw/intc/spapr_xive.c  | 18 +++++++--------
 hw/intc/xive.c        | 51 +++++++++++++++----------------------------
 hw/intc/xive2.c       | 31 +++++++++++++-------------
 hw/ppc/pnv.c          | 48 ++++++++++++++--------------------------
 hw/ppc/spapr.c        | 21 +++++++-----------
 8 files changed, 97 insertions(+), 131 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 46d05d74fbfb..8152a9df3d39 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -425,6 +425,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas);
 
 typedef struct XiveTCTXMatch {
     XiveTCTX *tctx;
+    int count;
     uint8_t ring;
     bool precluded;
 } XiveTCTXMatch;
@@ -440,10 +441,10 @@ DECLARE_CLASS_CHECKERS(XivePresenterClass, XIVE_PRESENTER,
 
 struct XivePresenterClass {
     InterfaceClass parent;
-    int (*match_nvt)(XivePresenter *xptr, uint8_t format,
-                     uint8_t nvt_blk, uint32_t nvt_idx,
-                     bool crowd, bool cam_ignore, uint8_t priority,
-                     uint32_t logic_serv, XiveTCTXMatch *match);
+    bool (*match_nvt)(XivePresenter *xptr, uint8_t format,
+                      uint8_t nvt_blk, uint32_t nvt_idx,
+                      bool crowd, bool cam_ignore, uint8_t priority,
+                      uint32_t logic_serv, XiveTCTXMatch *match);
     bool (*in_kernel)(const XivePresenter *xptr);
     uint32_t (*get_config)(XivePresenter *xptr);
     int (*broadcast)(XivePresenter *xptr,
@@ -455,12 +456,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
                               uint8_t format,
                               uint8_t nvt_blk, uint32_t nvt_idx,
                               bool cam_ignore, uint32_t logic_serv);
-bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
-                           uint8_t nvt_blk, uint32_t nvt_idx,
-                           bool crowd, bool cam_ignore, uint8_t priority,
-                           uint32_t logic_serv, bool *precluded);
+bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
+                          uint8_t nvt_blk, uint32_t nvt_idx,
+                          bool crowd, bool cam_ignore, uint8_t priority,
+                          uint32_t logic_serv, XiveTCTXMatch *match);
 
 uint32_t xive_get_vpgroup_size(uint32_t nvp_index);
+uint8_t xive_get_group_level(bool crowd, bool ignore,
+                             uint32_t nvp_blk, uint32_t nvp_index);
 
 /*
  * XIVE Fabric (Interface between Interrupt Controller and Machine)
@@ -475,10 +478,10 @@ DECLARE_CLASS_CHECKERS(XiveFabricClass, XIVE_FABRIC,
 
 struct XiveFabricClass {
     InterfaceClass parent;
-    int (*match_nvt)(XiveFabric *xfb, uint8_t format,
-                     uint8_t nvt_blk, uint32_t nvt_idx,
-                     bool crowd, bool cam_ignore, uint8_t priority,
-                     uint32_t logic_serv, XiveTCTXMatch *match);
+    bool (*match_nvt)(XiveFabric *xfb, uint8_t format,
+                      uint8_t nvt_blk, uint32_t nvt_idx,
+                      bool crowd, bool cam_ignore, uint8_t priority,
+                      uint32_t logic_serv, XiveTCTXMatch *match);
     int (*broadcast)(XiveFabric *xfb, uint8_t nvt_blk, uint32_t nvt_idx,
                      bool crowd, bool cam_ignore, uint8_t priority);
 };
diff --git a/hw/intc/pnv_xive.c b/hw/intc/pnv_xive.c
index 935c0e4742f5..c2ca40b8be87 100644
--- a/hw/intc/pnv_xive.c
+++ b/hw/intc/pnv_xive.c
@@ -470,14 +470,13 @@ static bool pnv_xive_is_cpu_enabled(PnvXive *xive, PowerPCCPU *cpu)
     return xive->regs[reg >> 3] & PPC_BIT(bit);
 }
 
-static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
-                              uint8_t nvt_blk, uint32_t nvt_idx,
-                              bool crowd, bool cam_ignore, uint8_t priority,
-                              uint32_t logic_serv, XiveTCTXMatch *match)
+static bool pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
+                               uint8_t nvt_blk, uint32_t nvt_idx,
+                               bool crowd, bool cam_ignore, uint8_t priority,
+                               uint32_t logic_serv, XiveTCTXMatch *match)
 {
     PnvXive *xive = PNV_XIVE(xptr);
     PnvChip *chip = xive->chip;
-    int count = 0;
     int i, j;
 
     for (i = 0; i < chip->nr_cores; i++) {
@@ -510,17 +509,18 @@ static int pnv_xive_match_nvt(XivePresenter *xptr, uint8_t format,
                     qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a "
                                   "thread context NVT %x/%x\n",
                                   nvt_blk, nvt_idx);
-                    return -1;
+                    match->count++;
+                    continue;
                 }
 
                 match->ring = ring;
                 match->tctx = tctx;
-                count++;
+                match->count++;
             }
         }
     }
 
-    return count;
+    return !!match->count;
 }
 
 static uint32_t pnv_xive_presenter_get_config(XivePresenter *xptr)
diff --git a/hw/intc/pnv_xive2.c b/hw/intc/pnv_xive2.c
index 7b4a33228e05..e019cad5c14c 100644
--- a/hw/intc/pnv_xive2.c
+++ b/hw/intc/pnv_xive2.c
@@ -640,14 +640,13 @@ static bool pnv_xive2_is_cpu_enabled(PnvXive2 *xive, PowerPCCPU *cpu)
     return xive->tctxt_regs[reg >> 3] & PPC_BIT(bit);
 }
 
-static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
-                               uint8_t nvt_blk, uint32_t nvt_idx,
-                               bool crowd, bool cam_ignore, uint8_t priority,
-                               uint32_t logic_serv, XiveTCTXMatch *match)
+static bool pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
+                                uint8_t nvt_blk, uint32_t nvt_idx,
+                                bool crowd, bool cam_ignore, uint8_t priority,
+                                uint32_t logic_serv, XiveTCTXMatch *match)
 {
     PnvXive2 *xive = PNV_XIVE2(xptr);
     PnvChip *chip = xive->chip;
-    int count = 0;
     int i, j;
     bool gen1_tima_os =
         xive->cq_regs[CQ_XIVE_CFG >> 3] & CQ_XIVE_CFG_GEN1_TIMA_OS;
@@ -692,7 +691,8 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
                                   "thread context NVT %x/%x\n",
                                   nvt_blk, nvt_idx);
                     /* Should set a FIR if we ever model it */
-                    return -1;
+                    match->count++;
+                    continue;
                 }
                 /*
                  * For a group notification, we need to know if the
@@ -717,13 +717,13 @@ static int pnv_xive2_match_nvt(XivePresenter *xptr, uint8_t format,
                             }
                         }
                     }
-                    count++;
+                    match->count++;
                 }
             }
         }
     }
 
-    return count;
+    return !!match->count;
 }
 
 static uint32_t pnv_xive2_presenter_get_config(XivePresenter *xptr)
diff --git a/hw/intc/spapr_xive.c b/hw/intc/spapr_xive.c
index 440edb97d8d3..e393f5dcdccf 100644
--- a/hw/intc/spapr_xive.c
+++ b/hw/intc/spapr_xive.c
@@ -428,14 +428,13 @@ static int spapr_xive_write_nvt(XiveRouter *xrtr, uint8_t nvt_blk,
     g_assert_not_reached();
 }
 
-static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
-                                uint8_t nvt_blk, uint32_t nvt_idx,
-                                bool crowd, bool cam_ignore,
-                                uint8_t priority,
-                                uint32_t logic_serv, XiveTCTXMatch *match)
+static bool spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
+                                 uint8_t nvt_blk, uint32_t nvt_idx,
+                                 bool crowd, bool cam_ignore,
+                                 uint8_t priority,
+                                 uint32_t logic_serv, XiveTCTXMatch *match)
 {
     CPUState *cs;
-    int count = 0;
 
     CPU_FOREACH(cs) {
         PowerPCCPU *cpu = POWERPC_CPU(cs);
@@ -463,16 +462,17 @@ static int spapr_xive_match_nvt(XivePresenter *xptr, uint8_t format,
             if (match->tctx) {
                 qemu_log_mask(LOG_GUEST_ERROR, "XIVE: already found a thread "
                               "context NVT %x/%x\n", nvt_blk, nvt_idx);
-                return -1;
+                match->count++;
+                continue;
             }
 
             match->ring = ring;
             match->tctx = tctx;
-            count++;
+            match->count++;
         }
     }
 
-    return count;
+    return !!match->count;
 }
 
 static uint32_t spapr_xive_presenter_get_config(XivePresenter *xptr)
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 2f72d6ecd5a5..c92e819053e8 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -1762,8 +1762,8 @@ uint32_t xive_get_vpgroup_size(uint32_t nvp_index)
     return 1U << (first_zero + 1);
 }
 
-static uint8_t xive_get_group_level(bool crowd, bool ignore,
-                                    uint32_t nvp_blk, uint32_t nvp_index)
+uint8_t xive_get_group_level(bool crowd, bool ignore,
+                             uint32_t nvp_blk, uint32_t nvp_index)
 {
     int first_zero;
     uint8_t level;
@@ -1881,15 +1881,14 @@ int xive_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
  * This is our simple Xive Presenter Engine model. It is merged in the
  * Router as it does not require an extra object.
  */
-bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
+bool xive_presenter_match(XiveFabric *xfb, uint8_t format,
                            uint8_t nvt_blk, uint32_t nvt_idx,
                            bool crowd, bool cam_ignore, uint8_t priority,
-                           uint32_t logic_serv, bool *precluded)
+                           uint32_t logic_serv, XiveTCTXMatch *match)
 {
     XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
-    XiveTCTXMatch match = { .tctx = NULL, .ring = 0, .precluded = false };
-    uint8_t group_level;
-    int count;
+
+    memset(match, 0, sizeof(*match));
 
     /*
      * Ask the machine to scan the interrupt controllers for a match.
@@ -1914,22 +1913,8 @@ bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
      * a new command to the presenters (the equivalent of the "assign"
      * power bus command in the documented full notify sequence.
      */
-    count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
-                           priority, logic_serv, &match);
-    if (count < 0) {
-        return false;
-    }
-
-    /* handle CPU exception delivery */
-    if (count) {
-        group_level = xive_get_group_level(crowd, cam_ignore, nvt_blk, nvt_idx);
-        trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, group_level);
-        xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
-    } else {
-        *precluded = match.precluded;
-    }
-
-    return !!count;
+    return xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, crowd, cam_ignore,
+                          priority, logic_serv, match);
 }
 
 /*
@@ -1966,7 +1951,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
     uint8_t nvt_blk;
     uint32_t nvt_idx;
     XiveNVT nvt;
-    bool found, precluded;
+    XiveTCTXMatch match;
 
     uint8_t end_blk = xive_get_field64(EAS_END_BLOCK, eas->w);
     uint32_t end_idx = xive_get_field64(EAS_END_INDEX, eas->w);
@@ -2046,16 +2031,16 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
         return;
     }
 
-    found = xive_presenter_notify(xrtr->xfb, format, nvt_blk, nvt_idx,
-                          false /* crowd */,
-                          xive_get_field32(END_W7_F0_IGNORE, end.w7),
-                          priority,
-                          xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
-                          &precluded);
-    /* we don't support VP-group notification on P9, so precluded is not used */
     /* TODO: Auto EOI. */
-
-    if (found) {
+    /* we don't support VP-group notification on P9, so precluded is not used */
+    if (xive_presenter_match(xrtr->xfb, format, nvt_blk, nvt_idx,
+                             false /* crowd */,
+                             xive_get_field32(END_W7_F0_IGNORE, end.w7),
+                             priority,
+                             xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
+                             &match)) {
+        trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
+        xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
         return;
     }
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 2791985cf29b..602b23d06d80 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1559,7 +1559,8 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
     Xive2End end;
     uint8_t priority;
     uint8_t format;
-    bool found, precluded;
+    XiveTCTXMatch match;
+    bool crowd, cam_ignore;
     uint8_t nvx_blk;
     uint32_t nvx_idx;
 
@@ -1629,16 +1630,19 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
      */
     nvx_blk = xive_get_field32(END2_W6_VP_BLOCK, end.w6);
     nvx_idx = xive_get_field32(END2_W6_VP_OFFSET, end.w6);
-
-    found = xive_presenter_notify(xrtr->xfb, format, nvx_blk, nvx_idx,
-                          xive2_end_is_crowd(&end), xive2_end_is_ignore(&end),
-                          priority,
-                          xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
-                          &precluded);
+    crowd = xive2_end_is_crowd(&end);
+    cam_ignore = xive2_end_is_ignore(&end);
 
     /* TODO: Auto EOI. */
-
-    if (found) {
+    if (xive_presenter_match(xrtr->xfb, format, nvx_blk, nvx_idx,
+                             crowd, cam_ignore, priority,
+                             xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
+                             &match)) {
+        uint8_t group_level;
+
+        group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
+        trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
+        xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
         return;
     }
 
@@ -1656,7 +1660,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
             return;
         }
 
-        if (!xive2_end_is_ignore(&end)) {
+        if (!cam_ignore) {
             uint8_t ipb;
             Xive2Nvp nvp;
 
@@ -1685,9 +1689,6 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
         } else {
             Xive2Nvgc nvgc;
             uint32_t backlog;
-            bool crowd;
-
-            crowd = xive2_end_is_crowd(&end);
 
             /*
              * For groups and crowds, the per-priority backlog
@@ -1719,9 +1720,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
             if (backlog == 1) {
                 XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xrtr->xfb);
                 xfc->broadcast(xrtr->xfb, nvx_blk, nvx_idx,
-                               xive2_end_is_crowd(&end),
-                               xive2_end_is_ignore(&end),
-                               priority);
+                               crowd, cam_ignore, priority);
 
                 if (!xive2_end_is_precluded_escalation(&end)) {
                     /*
diff --git a/hw/ppc/pnv.c b/hw/ppc/pnv.c
index 4a49e9d1a865..d84c9067edb3 100644
--- a/hw/ppc/pnv.c
+++ b/hw/ppc/pnv.c
@@ -2608,62 +2608,46 @@ static void pnv_pic_print_info(InterruptStatsProvider *obj, GString *buf)
     }
 }
 
-static int pnv_match_nvt(XiveFabric *xfb, uint8_t format,
-                         uint8_t nvt_blk, uint32_t nvt_idx,
-                         bool crowd, bool cam_ignore, uint8_t priority,
-                         uint32_t logic_serv,
-                         XiveTCTXMatch *match)
+static bool pnv_match_nvt(XiveFabric *xfb, uint8_t format,
+                          uint8_t nvt_blk, uint32_t nvt_idx,
+                          bool crowd, bool cam_ignore, uint8_t priority,
+                          uint32_t logic_serv,
+                          XiveTCTXMatch *match)
 {
     PnvMachineState *pnv = PNV_MACHINE(xfb);
-    int total_count = 0;
     int i;
 
     for (i = 0; i < pnv->num_chips; i++) {
         Pnv9Chip *chip9 = PNV9_CHIP(pnv->chips[i]);
         XivePresenter *xptr = XIVE_PRESENTER(&chip9->xive);
         XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
-        int count;
 
-        count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
-                               cam_ignore, priority, logic_serv, match);
-
-        if (count < 0) {
-            return count;
-        }
-
-        total_count += count;
+        xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+                       cam_ignore, priority, logic_serv, match);
     }
 
-    return total_count;
+    return !!match->count;
 }
 
-static int pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
-                                uint8_t nvt_blk, uint32_t nvt_idx,
-                                bool crowd, bool cam_ignore, uint8_t priority,
-                                uint32_t logic_serv,
-                                XiveTCTXMatch *match)
+static bool pnv10_xive_match_nvt(XiveFabric *xfb, uint8_t format,
+                                 uint8_t nvt_blk, uint32_t nvt_idx,
+                                 bool crowd, bool cam_ignore, uint8_t priority,
+                                 uint32_t logic_serv,
+                                 XiveTCTXMatch *match)
 {
     PnvMachineState *pnv = PNV_MACHINE(xfb);
-    int total_count = 0;
     int i;
 
     for (i = 0; i < pnv->num_chips; i++) {
         Pnv10Chip *chip10 = PNV10_CHIP(pnv->chips[i]);
         XivePresenter *xptr = XIVE_PRESENTER(&chip10->xive);
         XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
-        int count;
-
-        count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
-                               cam_ignore, priority, logic_serv, match);
-
-        if (count < 0) {
-            return count;
-        }
 
-        total_count += count;
+        xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd,
+                       cam_ignore, priority, logic_serv, match);
     }
 
-    return total_count;
+    return !!match->count;
 }
 
 static int pnv10_xive_broadcast(XiveFabric *xfb,
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 40f53ad7b344..1855a3cd8d03 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4468,21 +4468,14 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj, GString *buf)
 /*
  * This is a XIVE only operation
  */
-static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
-                           uint8_t nvt_blk, uint32_t nvt_idx,
-                           bool crowd, bool cam_ignore, uint8_t priority,
-                           uint32_t logic_serv, XiveTCTXMatch *match)
+static bool spapr_match_nvt(XiveFabric *xfb, uint8_t format,
+                            uint8_t nvt_blk, uint32_t nvt_idx,
+                            bool crowd, bool cam_ignore, uint8_t priority,
+                            uint32_t logic_serv, XiveTCTXMatch *match)
 {
     SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
     XivePresenter *xptr = XIVE_PRESENTER(spapr->active_intc);
     XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
-    int count;
-
-    count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
-                           priority, logic_serv, match);
-    if (count < 0) {
-        return count;
-    }
 
     /*
      * When we implement the save and restore of the thread interrupt
@@ -4493,12 +4486,14 @@ static int spapr_match_nvt(XiveFabric *xfb, uint8_t format,
      * Until this is done, the sPAPR machine should find at least one
      * matching context always.
      */
-    if (count == 0) {
+    if (!xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, crowd, cam_ignore,
+                           priority, logic_serv, match)) {
         qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVT %x/%x is not dispatched\n",
                       nvt_blk, nvt_idx);
+        return false;
     }
 
-    return count;
+    return true;
 }
 
 int spapr_get_vcpu_id(PowerPCCPU *cpu)
-- 
2.50.1




* [PULL 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (27 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 28/50] ppc/xive: Change presenter .match_nvt to match not present Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Cédric Le Goater
                   ` (22 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

A group interrupt that gets preempted by the delivery of a higher
priority interrupt must be redistributed, otherwise it would be lost.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-30-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 602b23d06d80..f51fd38a13eb 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1638,11 +1638,21 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
                              crowd, cam_ignore, priority,
                              xive_get_field32(END2_W7_F1_LOG_SERVER_ID, end.w7),
                              &match)) {
+        XiveTCTX *tctx = match.tctx;
+        uint8_t ring = match.ring;
+        uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+        uint8_t *aregs = &tctx->regs[alt_ring];
+        uint8_t nsr = aregs[TM_NSR];
         uint8_t group_level;
 
+        if (priority < aregs[TM_PIPR] &&
+            xive_nsr_indicates_group_exception(alt_ring, nsr)) {
+            xive2_redistribute(xrtr, tctx, alt_ring);
+        }
+
         group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
-        trace_xive_presenter_notify(nvx_blk, nvx_idx, match.ring, group_level);
-        xive_tctx_pipr_update(match.tctx, match.ring, priority, group_level);
+        trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
+        xive_tctx_pipr_update(tctx, ring, priority, group_level);
         return;
     }
 
-- 
2.50.1




* [PULL 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (28 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Cédric Le Goater
                   ` (21 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

xive_tctx_pipr_update() is used for multiple things. In an effort
to make things simpler and less overloaded, split out the function
that is used to present a new interrupt to the tctx.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-31-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h | 2 ++
 hw/intc/xive.c        | 8 +++++++-
 hw/intc/xive2.c       | 2 +-
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 8152a9df3d39..0d6b11e818c1 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -562,6 +562,8 @@ void xive_tctx_reset(XiveTCTX *tctx);
 void xive_tctx_destroy(XiveTCTX *tctx);
 void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                            uint8_t group_level);
+void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+                            uint8_t group_level);
 void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
 void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
 uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index c92e819053e8..038c35846d94 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -225,6 +225,12 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
     xive_tctx_notify(tctx, ring, group_level);
  }
 
+void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+                            uint8_t group_level)
+{
+    xive_tctx_pipr_update(tctx, ring, priority, group_level);
+}
+
 /*
  * XIVE Thread Interrupt Management Area (TIMA)
  */
@@ -2040,7 +2046,7 @@ void xive_router_end_notify(XiveRouter *xrtr, XiveEAS *eas)
                              xive_get_field32(END_W7_F1_LOG_SERVER_ID, end.w7),
                              &match)) {
         trace_xive_presenter_notify(nvt_blk, nvt_idx, match.ring, 0);
-        xive_tctx_pipr_update(match.tctx, match.ring, priority, 0);
+        xive_tctx_pipr_present(match.tctx, match.ring, priority, 0);
         return;
     }
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index f51fd38a13eb..fe40f7f07bdd 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1652,7 +1652,7 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
 
         group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
         trace_xive_presenter_notify(nvx_blk, nvx_idx, ring, group_level);
-        xive_tctx_pipr_update(tctx, ring, priority, group_level);
+        xive_tctx_pipr_present(tctx, ring, priority, group_level);
         return;
     }
 
-- 
2.50.1




* [PULL 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (29 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 32/50] ppc/xive: Split xive recompute from IPB function Cédric Le Goater
                   ` (20 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

xive_tctx_pipr_present(), as implemented with xive_tctx_pipr_update(),
causes a VP-directed (group==0) interrupt to be presented in PIPR and
NSR even when it has a lower priority than the currently presented
group interrupt.

This must not happen. The IPB bit should record the low priority VP
interrupt, but PIPR and NSR must not present the lower priority
interrupt.
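
To make the rule concrete, here is a toy model (not QEMU code, assuming
the usual encoding where IPB bit 0x80 >> prio marks priority prio as
pending and a numerically lower PIPR is more favoured):

  #include <assert.h>
  #include <stdint.h>

  /* Record a VP-directed (group==0) interrupt without letting it displace
   * a more favoured interrupt already presented in PIPR. */
  static void present_vp(uint8_t *ipb, uint8_t *pipr, uint8_t prio)
  {
      *ipb |= 0x80 >> prio;      /* the pending priority is always recorded */
      if (prio < *pipr) {
          *pipr = prio;          /* only present if strictly more favoured */
      }
  }

  int main(void)
  {
      uint8_t ipb = 0;
      uint8_t pipr = 2;          /* a group interrupt at priority 2 is presented */

      present_vp(&ipb, &pipr, 5);        /* lower priority VP interrupt arrives */
      assert(pipr == 2);                 /* PIPR/NSR keep the group interrupt */
      assert(ipb == (0x80 >> 5));        /* ...but priority 5 is not lost */
      return 0;
  }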

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-32-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 038c35846d94..7110cf45d74f 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -228,7 +228,23 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
 void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                             uint8_t group_level)
 {
-    xive_tctx_pipr_update(tctx, ring, priority, group_level);
+    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+    uint8_t *aregs = &tctx->regs[alt_ring];
+    uint8_t *regs = &tctx->regs[ring];
+    uint8_t pipr = xive_priority_to_pipr(priority);
+
+    if (group_level == 0) {
+        regs[TM_IPB] |= xive_priority_to_ipb(priority);
+        if (pipr >= aregs[TM_PIPR]) {
+            /* VP interrupts can come here with lower priority than PIPR */
+            return;
+        }
+    }
+    g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
+    g_assert(pipr < aregs[TM_PIPR]);
+    aregs[TM_PIPR] = pipr;
+    xive_tctx_notify(tctx, ring, group_level);
 }
 
 /*
-- 
2.50.1




* [PULL 32/50] ppc/xive: Split xive recompute from IPB function
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (30 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 33/50] ppc/xive: tctx signaling registers rework Cédric Le Goater
                   ` (19 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Split xive_tctx_pipr_update() further by extracting a new function
that re-computes the PIPR from the IPB. This is generally only used
with XIVE1, because group interrupts require more logic.
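
For reference, the recomputation amounts to picking the most favoured
pending priority out of the IPB. A minimal sketch (not QEMU code,
assuming the usual encoding where IPB bit 0x80 >> prio marks priority
prio as pending):

  #include <assert.h>
  #include <stdint.h>

  /* PIPR is the most favoured (numerically lowest) pending priority in
   * the IPB, or 0xFF when nothing is pending. */
  static uint8_t ipb_to_pipr(uint8_t ipb)
  {
      uint8_t prio;

      for (prio = 0; prio < 8; prio++) {
          if (ipb & (0x80 >> prio)) {
              return prio;
          }
      }
      return 0xff;
  }

  int main(void)
  {
      assert(ipb_to_pipr(0x00) == 0xff);                   /* nothing pending */
      assert(ipb_to_pipr(0x80 >> 5) == 5);                 /* only priority 5 pending */
      assert(ipb_to_pipr((0x80 >> 5) | (0x80 >> 2)) == 2); /* 2 beats 5 */
      return 0;
  }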

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-33-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 7110cf45d74f..5deb2f478fcb 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -225,6 +225,20 @@ void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
     xive_tctx_notify(tctx, ring, group_level);
  }
 
+static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
+{
+    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
+    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
+    uint8_t *aregs = &tctx->regs[alt_ring];
+    uint8_t *regs = &tctx->regs[ring];
+
+    /* Does not support a presented group interrupt */
+    g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
+
+    aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+    xive_tctx_notify(tctx, ring, 0);
+}
+
 void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                             uint8_t group_level)
 {
@@ -517,7 +531,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
 static void xive_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
                                    hwaddr offset, uint64_t value, unsigned size)
 {
-    xive_tctx_pipr_update(tctx, TM_QW1_OS, value & 0xff, 0);
+    uint8_t ring = TM_QW1_OS;
+    uint8_t *regs = &tctx->regs[ring];
+
+    /* XXX: how should this work exactly? */
+    regs[TM_IPB] |= xive_priority_to_ipb(value & 0xff);
+    xive_tctx_pipr_recompute_from_ipb(tctx, ring);
 }
 
 static void xive_os_cam_decode(uint32_t cam, uint8_t *nvt_blk,
@@ -601,14 +620,14 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
     }
 
     /*
-     * Always call xive_tctx_pipr_update(). Even if there were no
+     * Always call xive_tctx_recompute_from_ipb(). Even if there were no
      * escalation triggered, there could be a pending interrupt which
      * was saved when the context was pulled and that we need to take
      * into account by recalculating the PIPR (which is not
      * saved/restored).
      * It will also raise the External interrupt signal if needed.
      */
-    xive_tctx_pipr_update(tctx, TM_QW1_OS, 0xFF, 0); /* fxb */
+    xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
 }
 
 /*
-- 
2.50.1




* [PULL 33/50] ppc/xive: tctx signaling registers rework
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (31 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 32/50] ppc/xive: Split xive recompute from IPB function Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Cédric Le Goater
                   ` (18 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

The tctx "signaling" registers (PIPR, CPPR, NSR) raise an interrupt on
the target CPU thread. The POOL and PHYS rings both raise hypervisor
interrupts, so they both share one set of signaling registers in the
PHYS ring. The PHYS NSR register contains a field that indicates which
ring has presented the interrupt being signaled to the CPU.

This sharing is what all the "alt_regs" throughout the code are about.
The name alt_regs is not very descriptive, and worse, it is used for
conversions in both directions, i.e., to find the presenting ring from
the signaling ring, and the signaling ring from the presenting ring.

Instead of alt_regs, use the names sig_regs and sig_ring, and regs and
ring for the presenting ring being worked on. Add a helper function to
get the sig_regs, and add some asserts to ensure the POOL regs are
never used to signal interrupts.
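
To illustrate the naming, here is a toy model (not QEMU code; the real
helper is the xive_tctx_signal_regs() added below):

  #include <assert.h>
  #include <stdint.h>

  enum ring { RING_USER, RING_OS, RING_POOL, RING_PHYS, RING_COUNT };

  struct ring_regs {
      uint8_t nsr, cppr, pipr;    /* signalling registers */
      uint8_t ipb;                /* per-ring pending bits */
  };

  struct toy_tctx {
      struct ring_regs regs[RING_COUNT];
  };

  /* Toy counterpart of xive_tctx_signal_regs(): the POOL ring has no
   * signalling registers of its own, it signals through the PHYS ring. */
  static struct ring_regs *sig_regs(struct toy_tctx *tctx, enum ring ring)
  {
      assert(ring != RING_USER);              /* the USER ring never signals */
      return &tctx->regs[ring == RING_POOL ? RING_PHYS : ring];
  }

  int main(void)
  {
      struct toy_tctx tctx = { 0 };

      /* Presenting on the POOL ring: the pending bit stays per ring... */
      tctx.regs[RING_POOL].ipb |= 0x80 >> 4;
      /* ...but the signalling registers written are the PHYS ones. */
      sig_regs(&tctx, RING_POOL)->pipr = 4;

      assert(tctx.regs[RING_PHYS].pipr == 4);
      assert(tctx.regs[RING_POOL].pipr == 0); /* POOL signal regs stay untouched */
      return 0;
  }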

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-34-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h |  26 +++++++++-
 hw/intc/xive.c        | 112 ++++++++++++++++++++++--------------------
 hw/intc/xive2.c       |  94 ++++++++++++++++-------------------
 3 files changed, 126 insertions(+), 106 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 0d6b11e818c1..a3c2f50ecef7 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -539,7 +539,7 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
 }
 
 /*
- * XIVE Thread Interrupt Management Aera (TIMA)
+ * XIVE Thread Interrupt Management Area (TIMA)
  *
  * This region gives access to the registers of the thread interrupt
  * management context. It is four page wide, each page providing a
@@ -551,6 +551,30 @@ static inline uint8_t xive_ipb_to_pipr(uint8_t ibp)
 #define XIVE_TM_OS_PAGE         0x2
 #define XIVE_TM_USER_PAGE       0x3
 
+/*
+ * The TCTX (TIMA) has 4 rings (phys, pool, os, user), but only signals
+ * (raises an interrupt on) the CPU from 3 of them. Phys and pool both
+ * cause a hypervisor privileged interrupt so interrupts presented on
+ * those rings signal using the phys ring. This helper returns the signal
+ * regs from the given ring.
+ */
+static inline uint8_t *xive_tctx_signal_regs(XiveTCTX *tctx, uint8_t ring)
+{
+    /*
+     * This is a good point to add invariants to ensure nothing has tried to
+     * signal using the POOL ring.
+     */
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+    if (ring == TM_QW2_HV_POOL) {
+        /* POOL and PHYS rings share the signal regs (PIPR, NSR, CPPR) */
+        ring = TM_QW3_HV_PHYS;
+    }
+    return &tctx->regs[ring];
+}
+
 void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
                         uint64_t value, unsigned size);
 uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 5deb2f478fcb..119a178f2e23 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -80,69 +80,77 @@ static qemu_irq xive_tctx_output(XiveTCTX *tctx, uint8_t ring)
         }
 }
 
-uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring)
+/*
+ * interrupt is accepted on the presentation ring, for PHYS ring the NSR
+ * directs it to the PHYS or POOL rings.
+ */
+uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
 {
-    uint8_t *regs = &tctx->regs[ring];
-    uint8_t nsr = regs[TM_NSR];
+    uint8_t *sig_regs = &tctx->regs[sig_ring];
+    uint8_t nsr = sig_regs[TM_NSR];
 
-    qemu_irq_lower(xive_tctx_output(tctx, ring));
+    g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
+
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+    qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
 
-    if (xive_nsr_indicates_exception(ring, nsr)) {
-        uint8_t cppr = regs[TM_PIPR];
-        uint8_t alt_ring;
-        uint8_t *alt_regs;
+    if (xive_nsr_indicates_exception(sig_ring, nsr)) {
+        uint8_t cppr = sig_regs[TM_PIPR];
+        uint8_t ring;
+        uint8_t *regs;
 
-        alt_ring = xive_nsr_exception_ring(ring, nsr);
-        alt_regs = &tctx->regs[alt_ring];
+        ring = xive_nsr_exception_ring(sig_ring, nsr);
+        regs = &tctx->regs[ring];
 
-        regs[TM_CPPR] = cppr;
+        sig_regs[TM_CPPR] = cppr;
 
         /*
          * If the interrupt was for a specific VP, reset the pending
          * buffer bit, otherwise clear the logical server indicator
          */
-        if (!xive_nsr_indicates_group_exception(ring, nsr)) {
-            alt_regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
+        if (!xive_nsr_indicates_group_exception(sig_ring, nsr)) {
+            regs[TM_IPB] &= ~xive_priority_to_ipb(cppr);
         }
 
         /* Clear the exception from NSR */
-        regs[TM_NSR] = 0;
+        sig_regs[TM_NSR] = 0;
 
-        trace_xive_tctx_accept(tctx->cs->cpu_index, alt_ring,
-                               alt_regs[TM_IPB], regs[TM_PIPR],
-                               regs[TM_CPPR], regs[TM_NSR]);
+        trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
+                               regs[TM_IPB], sig_regs[TM_PIPR],
+                               sig_regs[TM_CPPR], sig_regs[TM_NSR]);
     }
 
-    return ((uint64_t)nsr << 8) | regs[TM_CPPR];
+    return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
 }
 
 void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
 {
-    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
-    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
-    uint8_t *alt_regs = &tctx->regs[alt_ring];
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
 
-    if (alt_regs[TM_PIPR] < alt_regs[TM_CPPR]) {
+    if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
         switch (ring) {
         case TM_QW1_OS:
-            regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
+            sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
             break;
         case TM_QW2_HV_POOL:
-            alt_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
+            sig_regs[TM_NSR] = (TM_QW3_NSR_HE_POOL << 6) | (group_level & 0x3F);
             break;
         case TM_QW3_HV_PHYS:
-            regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
+            sig_regs[TM_NSR] = (TM_QW3_NSR_HE_PHYS << 6) | (group_level & 0x3F);
             break;
         default:
             g_assert_not_reached();
         }
         trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
-                               regs[TM_IPB], alt_regs[TM_PIPR],
-                               alt_regs[TM_CPPR], alt_regs[TM_NSR]);
+                               regs[TM_IPB], sig_regs[TM_PIPR],
+                               sig_regs[TM_CPPR], sig_regs[TM_NSR]);
         qemu_irq_raise(xive_tctx_output(tctx, ring));
     } else {
-        alt_regs[TM_NSR] = 0;
+        sig_regs[TM_NSR] = 0;
         qemu_irq_lower(xive_tctx_output(tctx, ring));
     }
 }
@@ -159,25 +167,32 @@ void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring)
 
 static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 {
-    uint8_t *regs = &tctx->regs[ring];
+    uint8_t *sig_regs = &tctx->regs[ring];
     uint8_t pipr_min;
     uint8_t ring_min;
 
+    g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
+
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+    /* XXX: should show pool IPB for PHYS ring */
     trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
-                             regs[TM_IPB], regs[TM_PIPR],
-                             cppr, regs[TM_NSR]);
+                             sig_regs[TM_IPB], sig_regs[TM_PIPR],
+                             cppr, sig_regs[TM_NSR]);
 
     if (cppr > XIVE_PRIORITY_MAX) {
         cppr = 0xff;
     }
 
-    tctx->regs[ring + TM_CPPR] = cppr;
+    sig_regs[TM_CPPR] = cppr;
 
     /*
      * Recompute the PIPR based on local pending interrupts.  The PHYS
      * ring must take the minimum of both the PHYS and POOL PIPR values.
      */
-    pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
+    pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
     ring_min = ring;
 
     /* PHYS updates also depend on POOL values */
@@ -186,7 +201,6 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 
         /* POOL values only matter if POOL ctx is valid */
         if (pool_regs[TM_WORD2] & 0x80) {
-
             uint8_t pool_pipr = xive_ipb_to_pipr(pool_regs[TM_IPB]);
 
             /*
@@ -200,7 +214,7 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
         }
     }
 
-    regs[TM_PIPR] = pipr_min;
+    sig_regs[TM_PIPR] = pipr_min;
 
     /* CPPR has changed, check if we need to raise a pending exception */
     xive_tctx_notify(tctx, ring_min, 0);
@@ -208,56 +222,50 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 
 void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                            uint8_t group_level)
- {
-    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
-    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
-    uint8_t *alt_regs = &tctx->regs[alt_ring];
+{
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
 
     if (group_level == 0) {
         /* VP-specific */
         regs[TM_IPB] |= xive_priority_to_ipb(priority);
-        alt_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+        sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
     } else {
         /* VP-group */
-        alt_regs[TM_PIPR] = xive_priority_to_pipr(priority);
+        sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
     }
     xive_tctx_notify(tctx, ring, group_level);
  }
 
 static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
 {
-    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
-    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
-    uint8_t *aregs = &tctx->regs[alt_ring];
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
 
     /* Does not support a presented group interrupt */
-    g_assert(!xive_nsr_indicates_group_exception(alt_ring, aregs[TM_NSR]));
+    g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
 
-    aregs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
+    sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
     xive_tctx_notify(tctx, ring, 0);
 }
 
 void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                             uint8_t group_level)
 {
-    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
-    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
-    uint8_t *aregs = &tctx->regs[alt_ring];
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
     uint8_t pipr = xive_priority_to_pipr(priority);
 
     if (group_level == 0) {
         regs[TM_IPB] |= xive_priority_to_ipb(priority);
-        if (pipr >= aregs[TM_PIPR]) {
+        if (pipr >= sig_regs[TM_PIPR]) {
             /* VP interrupts can come here with lower priority than PIPR */
             return;
         }
     }
     g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
-    g_assert(pipr < aregs[TM_PIPR]);
-    aregs[TM_PIPR] = pipr;
+    g_assert(pipr < sig_regs[TM_PIPR]);
+    sig_regs[TM_PIPR] = pipr;
     xive_tctx_notify(tctx, ring, group_level);
 }
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index fe40f7f07bdd..71b40f702a6f 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -606,11 +606,9 @@ static uint32_t xive2_tctx_hw_cam_line(XivePresenter *xptr, XiveTCTX *tctx)
 
 static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
 {
-    uint8_t *regs = &tctx->regs[ring];
-    uint8_t *alt_regs = (ring == TM_QW2_HV_POOL) ? &tctx->regs[TM_QW3_HV_PHYS] :
-                                                   regs;
-    uint8_t nsr = alt_regs[TM_NSR];
-    uint8_t pipr = alt_regs[TM_PIPR];
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+    uint8_t nsr = sig_regs[TM_NSR];
+    uint8_t pipr = sig_regs[TM_PIPR];
     uint8_t crowd = NVx_CROWD_LVL(nsr);
     uint8_t group = NVx_GROUP_LVL(nsr);
     uint8_t nvgc_blk, end_blk, nvp_blk;
@@ -618,19 +616,16 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
     Xive2Nvgc nvgc;
     uint8_t prio_limit;
     uint32_t cfg;
-    uint8_t alt_ring;
 
     /* redistribution is only for group/crowd interrupts */
     if (!xive_nsr_indicates_group_exception(ring, nsr)) {
         return;
     }
 
-    alt_ring = xive_nsr_exception_ring(ring, nsr);
-
     /* Don't check return code since ring is expected to be invalidated */
-    xive2_tctx_get_nvp_indexes(tctx, alt_ring, &nvp_blk, &nvp_idx);
+    xive2_tctx_get_nvp_indexes(tctx, ring, &nvp_blk, &nvp_idx);
 
-    trace_xive_redistribute(tctx->cs->cpu_index, alt_ring, nvp_blk, nvp_idx);
+    trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
 
     trace_xive_redistribute(tctx->cs->cpu_index, ring, nvp_blk, nvp_idx);
     /* convert crowd/group to blk/idx */
@@ -675,23 +670,11 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
     xive2_router_end_notify(xrtr, end_blk, end_idx, 0, true);
 
     /* clear interrupt indication for the context */
-    alt_regs[TM_NSR] = 0;
-    alt_regs[TM_PIPR] = alt_regs[TM_CPPR];
+    sig_regs[TM_NSR] = 0;
+    sig_regs[TM_PIPR] = sig_regs[TM_CPPR];
     xive_tctx_reset_signal(tctx, ring);
 }
 
-static uint8_t xive2_hv_irq_ring(uint8_t nsr)
-{
-    switch (nsr >> 6) {
-    case TM_QW3_NSR_HE_POOL:
-        return TM_QW2_HV_POOL;
-    case TM_QW3_NSR_HE_PHYS:
-        return TM_QW3_HV_PHYS;
-    default:
-        return -1;
-    }
-}
-
 static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                   hwaddr offset, unsigned size, uint8_t ring)
 {
@@ -718,7 +701,8 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
         uint32_t ringw2 = xive_tctx_word2(&tctx->regs[cur_ring]);
         uint32_t ringw2_new = xive_set_field32(TM2_QW1W2_VO, ringw2, 0);
         bool is_valid = !!(xive_get_field32(TM2_QW1W2_VO, ringw2));
-        uint8_t alt_ring;
+        uint8_t *sig_regs;
+
         memcpy(&tctx->regs[cur_ring + TM_WORD2], &ringw2_new, 4);
 
         /* Skip the rest for USER or invalid contexts */
@@ -727,12 +711,11 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
         }
 
         /* Active group/crowd interrupts need to be redistributed */
-        alt_ring = (cur_ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : cur_ring;
-        nsr = tctx->regs[alt_ring + TM_NSR];
-        if (xive_nsr_indicates_group_exception(alt_ring, nsr)) {
-            /* For HV rings, only redistribute if cur_ring matches NSR */
-            if ((cur_ring == TM_QW1_OS) ||
-                (cur_ring == xive2_hv_irq_ring(nsr))) {
+        sig_regs = xive_tctx_signal_regs(tctx, ring);
+        nsr = sig_regs[TM_NSR];
+        if (xive_nsr_indicates_group_exception(cur_ring, nsr)) {
+            /* Ensure ring matches NSR (for HV NSR POOL vs PHYS rings) */
+            if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
                 xive2_redistribute(xrtr, tctx, cur_ring);
             }
         }
@@ -1118,7 +1101,7 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
 /* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
 static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
 {
-    uint8_t *regs = &tctx->regs[ring];
+    uint8_t *sig_regs = &tctx->regs[ring];
     Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
     uint8_t old_cppr, backlog_prio, first_group, group_level;
     uint8_t pipr_min, lsmfb_min, ring_min;
@@ -1127,33 +1110,41 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
     uint32_t nvp_idx;
     Xive2Nvp nvp;
     int rc;
-    uint8_t nsr = regs[TM_NSR];
+    uint8_t nsr = sig_regs[TM_NSR];
+
+    g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
+
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
 
+    /* XXX: should show pool IPB for PHYS ring */
     trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
-                             regs[TM_IPB], regs[TM_PIPR],
+                             sig_regs[TM_IPB], sig_regs[TM_PIPR],
                              cppr, nsr);
 
     if (cppr > XIVE_PRIORITY_MAX) {
         cppr = 0xff;
     }
 
-    old_cppr = regs[TM_CPPR];
-    regs[TM_CPPR] = cppr;
+    old_cppr = sig_regs[TM_CPPR];
+    sig_regs[TM_CPPR] = cppr;
 
     /* Handle increased CPPR priority (lower value) */
     if (cppr < old_cppr) {
-        if (cppr <= regs[TM_PIPR]) {
+        if (cppr <= sig_regs[TM_PIPR]) {
             /* CPPR lowered below PIPR, must un-present interrupt */
             if (xive_nsr_indicates_exception(ring, nsr)) {
                 if (xive_nsr_indicates_group_exception(ring, nsr)) {
                     /* redistribute precluded active grp interrupt */
-                    xive2_redistribute(xrtr, tctx, ring);
+                    xive2_redistribute(xrtr, tctx,
+                                       xive_nsr_exception_ring(ring, nsr));
                     return;
                 }
             }
 
             /* interrupt is VP directed, pending in IPB */
-            regs[TM_PIPR] = cppr;
+            sig_regs[TM_PIPR] = cppr;
             xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
             return;
         } else {
@@ -1174,9 +1165,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
      * be adjusted below if needed in case of pending group interrupts.
      */
 again:
-    pipr_min = xive_ipb_to_pipr(regs[TM_IPB]);
-    group_enabled = !!regs[TM_LGS];
-    lsmfb_min = group_enabled ? regs[TM_LSMFB] : 0xff;
+    pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
+    group_enabled = !!sig_regs[TM_LGS];
+    lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
     ring_min = ring;
     group_level = 0;
 
@@ -1265,7 +1256,7 @@ again:
     }
 
     /* PIPR should not be set to a value greater than CPPR */
-    regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
+    sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
 
     /* CPPR has changed, check if we need to raise a pending exception */
     xive_tctx_notify(tctx, ring_min, group_level);
@@ -1490,9 +1481,7 @@ int xive2_presenter_tctx_match(XivePresenter *xptr, XiveTCTX *tctx,
 
 bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
 {
-    /* HV_POOL ring uses HV_PHYS NSR, CPPR and PIPR registers */
-    uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
-    uint8_t *alt_regs = &tctx->regs[alt_ring];
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
 
     /*
      * The xive2_presenter_tctx_match() above tells if there's a match
@@ -1500,7 +1489,7 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority)
      * priority to know if the thread can take the interrupt now or if
      * it is precluded.
      */
-    if (priority < alt_regs[TM_PIPR]) {
+    if (priority < sig_regs[TM_PIPR]) {
         return false;
     }
     return true;
@@ -1640,14 +1629,13 @@ static void xive2_router_end_notify(Xive2Router *xrtr, uint8_t end_blk,
                              &match)) {
         XiveTCTX *tctx = match.tctx;
         uint8_t ring = match.ring;
-        uint8_t alt_ring = (ring == TM_QW2_HV_POOL) ? TM_QW3_HV_PHYS : ring;
-        uint8_t *aregs = &tctx->regs[alt_ring];
-        uint8_t nsr = aregs[TM_NSR];
+        uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+        uint8_t nsr = sig_regs[TM_NSR];
         uint8_t group_level;
 
-        if (priority < aregs[TM_PIPR] &&
-            xive_nsr_indicates_group_exception(alt_ring, nsr)) {
-            xive2_redistribute(xrtr, tctx, alt_ring);
+        if (priority < sig_regs[TM_PIPR] &&
+            xive_nsr_indicates_group_exception(ring, nsr)) {
+            xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
         }
 
         group_level = xive_get_group_level(crowd, cam_ignore, nvx_blk, nvx_idx);
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (32 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 33/50] ppc/xive: tctx signaling registers rework Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Cédric Le Goater
                   ` (17 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

The relationship between an interrupt signaled in the TIMA and the QEMU
irq line to the processor is meant to be 1:1, so they should be raised and
lowered together, and "just in case" lowering should be avoided (it could
mask other problems).
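
For illustration only (not part of the patch; it reuses the xive_tctx_accept()
identifiers visible in the diff below), the accept path after this change only
touches the output line on the branch that actually clears a presented
exception:

    /* sketch: the NSR exception bit and the QEMU irq line move in lock-step */
    if (xive_nsr_indicates_exception(sig_ring, nsr)) {
        /* ... existing accept handling ... */
        sig_regs[TM_NSR] = 0;                              /* ack the exception */
        qemu_irq_lower(xive_tctx_output(tctx, sig_ring));  /* lower with the ack */
    }
    /* nothing presented: the line was never raised, so it is left alone */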

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-35-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 119a178f2e23..db26dae7dbf4 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -95,8 +95,6 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
     g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
     g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
 
-    qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
-
     if (xive_nsr_indicates_exception(sig_ring, nsr)) {
         uint8_t cppr = sig_regs[TM_PIPR];
         uint8_t ring;
@@ -117,6 +115,7 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
 
         /* Clear the exception from NSR */
         sig_regs[TM_NSR] = 0;
+        qemu_irq_lower(xive_tctx_output(tctx, sig_ring));
 
         trace_xive_tctx_accept(tctx->cs->cpu_index, ring,
                                regs[TM_IPB], sig_regs[TM_PIPR],
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (33 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 36/50] ppc/xive2: split tctx presentation processing from set CPPR Cédric Le Goater
                   ` (16 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Have xive_tctx_notify() also set the new PIPR value and rename it to
xive_tctx_pipr_set(). This can replace the last xive_tctx_pipr_update()
caller because it does not need to update IPB (it already sets it).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-36-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h |  5 ++---
 hw/intc/xive.c        | 39 +++++++++++----------------------------
 hw/intc/xive2.c       | 16 +++++++---------
 3 files changed, 20 insertions(+), 40 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index a3c2f50ecef7..2372d1014bd2 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -584,12 +584,11 @@ void xive_tctx_pic_print_info(XiveTCTX *tctx, GString *buf);
 Object *xive_tctx_create(Object *cpu, XivePresenter *xptr, Error **errp);
 void xive_tctx_reset(XiveTCTX *tctx);
 void xive_tctx_destroy(XiveTCTX *tctx);
-void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
-                           uint8_t group_level);
+void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
+                        uint8_t group_level);
 void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
                             uint8_t group_level);
 void xive_tctx_reset_signal(XiveTCTX *tctx, uint8_t ring);
-void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level);
 uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t ring);
 
 /*
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index db26dae7dbf4..6ad84f93c77a 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -125,12 +125,16 @@ uint64_t xive_tctx_accept(XiveTCTX *tctx, uint8_t sig_ring)
     return ((uint64_t)nsr << 8) | sig_regs[TM_CPPR];
 }
 
-void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
+/* Change PIPR and calculate NSR and irq based on PIPR, CPPR, group */
+void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
+                        uint8_t group_level)
 {
     uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
 
-    if (sig_regs[TM_PIPR] < sig_regs[TM_CPPR]) {
+    sig_regs[TM_PIPR] = pipr;
+
+    if (pipr < sig_regs[TM_CPPR]) {
         switch (ring) {
         case TM_QW1_OS:
             sig_regs[TM_NSR] = TM_QW1_NSR_EO | (group_level & 0x3F);
@@ -145,7 +149,7 @@ void xive_tctx_notify(XiveTCTX *tctx, uint8_t ring, uint8_t group_level)
             g_assert_not_reached();
         }
         trace_xive_tctx_notify(tctx->cs->cpu_index, ring,
-                               regs[TM_IPB], sig_regs[TM_PIPR],
+                               regs[TM_IPB], pipr,
                                sig_regs[TM_CPPR], sig_regs[TM_NSR]);
         qemu_irq_raise(xive_tctx_output(tctx, ring));
     } else {
@@ -213,29 +217,10 @@ static void xive_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
         }
     }
 
-    sig_regs[TM_PIPR] = pipr_min;
-
-    /* CPPR has changed, check if we need to raise a pending exception */
-    xive_tctx_notify(tctx, ring_min, 0);
+    /* CPPR has changed, this may present or preclude a pending exception */
+    xive_tctx_pipr_set(tctx, ring_min, pipr_min, 0);
 }
 
-void xive_tctx_pipr_update(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
-                           uint8_t group_level)
-{
-    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
-    uint8_t *regs = &tctx->regs[ring];
-
-    if (group_level == 0) {
-        /* VP-specific */
-        regs[TM_IPB] |= xive_priority_to_ipb(priority);
-        sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
-    } else {
-        /* VP-group */
-        sig_regs[TM_PIPR] = xive_priority_to_pipr(priority);
-    }
-    xive_tctx_notify(tctx, ring, group_level);
- }
-
 static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
 {
     uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
@@ -244,8 +229,7 @@ static void xive_tctx_pipr_recompute_from_ipb(XiveTCTX *tctx, uint8_t ring)
     /* Does not support a presented group interrupt */
     g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
 
-    sig_regs[TM_PIPR] = xive_ipb_to_pipr(regs[TM_IPB]);
-    xive_tctx_notify(tctx, ring, 0);
+    xive_tctx_pipr_set(tctx, ring, xive_ipb_to_pipr(regs[TM_IPB]), 0);
 }
 
 void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
@@ -264,8 +248,7 @@ void xive_tctx_pipr_present(XiveTCTX *tctx, uint8_t ring, uint8_t priority,
     }
     g_assert(pipr <= xive_ipb_to_pipr(regs[TM_IPB]));
     g_assert(pipr < sig_regs[TM_PIPR]);
-    sig_regs[TM_PIPR] = pipr;
-    xive_tctx_notify(tctx, ring, group_level);
+    xive_tctx_pipr_set(tctx, ring, pipr, group_level);
 }
 
 /*
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 71b40f702a6f..0ee50a6bca48 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -966,10 +966,10 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
     }
 
     /*
-     * Compute the PIPR based on the restored state.
+     * Set the PIPR/NSR based on the restored state.
      * It will raise the External interrupt signal if needed.
      */
-    xive_tctx_pipr_update(tctx, TM_QW1_OS, backlog_prio, backlog_level);
+    xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
 }
 
 /*
@@ -1144,8 +1144,7 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
             }
 
             /* interrupt is VP directed, pending in IPB */
-            sig_regs[TM_PIPR] = cppr;
-            xive_tctx_notify(tctx, ring, 0); /* Ensure interrupt is cleared */
+            xive_tctx_pipr_set(tctx, ring, cppr, 0);
             return;
         } else {
             /* CPPR was lowered, but still above PIPR. No action needed. */
@@ -1255,11 +1254,10 @@ again:
         pipr_min = backlog_prio;
     }
 
-    /* PIPR should not be set to a value greater than CPPR */
-    sig_regs[TM_PIPR] = (pipr_min > cppr) ? cppr : pipr_min;
-
-    /* CPPR has changed, check if we need to raise a pending exception */
-    xive_tctx_notify(tctx, ring_min, group_level);
+    if (pipr_min > cppr) {
+        pipr_min = cppr;
+    }
+    xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
 }
 
 void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 36/50] ppc/xive2: split tctx presentation processing from set CPPR
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (34 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 37/50] ppc/xive2: Consolidate presentation processing in context push Cédric Le Goater
                   ` (15 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

The second part of the set CPPR operation is to process (or re-present)
any pending interrupts after CPPR is adjusted.

Split this presentation processing out into a standalone function that
can be used in other places.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-37-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 137 +++++++++++++++++++++++++++---------------------
 1 file changed, 76 insertions(+), 61 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 0ee50a6bca48..c7356c5b2fd8 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1098,66 +1098,19 @@ void xive2_tm_ack_os_el(XivePresenter *xptr, XiveTCTX *tctx,
     xive2_tctx_accept_el(xptr, tctx, TM_QW1_OS, TM_QW1_OS);
 }
 
-/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
-static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t ring, uint8_t cppr)
+/* Re-calculate and present pending interrupts */
+static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
 {
-    uint8_t *sig_regs = &tctx->regs[ring];
+    uint8_t *sig_regs = &tctx->regs[sig_ring];
     Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
-    uint8_t old_cppr, backlog_prio, first_group, group_level;
+    uint8_t backlog_prio, first_group, group_level;
     uint8_t pipr_min, lsmfb_min, ring_min;
+    uint8_t cppr = sig_regs[TM_CPPR];
     bool group_enabled;
-    uint8_t nvp_blk;
-    uint32_t nvp_idx;
     Xive2Nvp nvp;
     int rc;
-    uint8_t nsr = sig_regs[TM_NSR];
-
-    g_assert(ring == TM_QW1_OS || ring == TM_QW3_HV_PHYS);
-
-    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
-    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
-    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
-
-    /* XXX: should show pool IPB for PHYS ring */
-    trace_xive_tctx_set_cppr(tctx->cs->cpu_index, ring,
-                             sig_regs[TM_IPB], sig_regs[TM_PIPR],
-                             cppr, nsr);
-
-    if (cppr > XIVE_PRIORITY_MAX) {
-        cppr = 0xff;
-    }
-
-    old_cppr = sig_regs[TM_CPPR];
-    sig_regs[TM_CPPR] = cppr;
-
-    /* Handle increased CPPR priority (lower value) */
-    if (cppr < old_cppr) {
-        if (cppr <= sig_regs[TM_PIPR]) {
-            /* CPPR lowered below PIPR, must un-present interrupt */
-            if (xive_nsr_indicates_exception(ring, nsr)) {
-                if (xive_nsr_indicates_group_exception(ring, nsr)) {
-                    /* redistribute precluded active grp interrupt */
-                    xive2_redistribute(xrtr, tctx,
-                                       xive_nsr_exception_ring(ring, nsr));
-                    return;
-                }
-            }
 
-            /* interrupt is VP directed, pending in IPB */
-            xive_tctx_pipr_set(tctx, ring, cppr, 0);
-            return;
-        } else {
-            /* CPPR was lowered, but still above PIPR. No action needed. */
-            return;
-        }
-    }
-
-    /* CPPR didn't change, nothing needs to be done */
-    if (cppr == old_cppr) {
-        return;
-    }
-
-    /* CPPR priority decreased (higher value) */
+    g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
 
     /*
      * Recompute the PIPR based on local pending interrupts. It will
@@ -1167,11 +1120,11 @@ again:
     pipr_min = xive_ipb_to_pipr(sig_regs[TM_IPB]);
     group_enabled = !!sig_regs[TM_LGS];
     lsmfb_min = group_enabled ? sig_regs[TM_LSMFB] : 0xff;
-    ring_min = ring;
+    ring_min = sig_ring;
     group_level = 0;
 
     /* PHYS updates also depend on POOL values */
-    if (ring == TM_QW3_HV_PHYS) {
+    if (sig_ring == TM_QW3_HV_PHYS) {
         uint8_t *pool_regs = &tctx->regs[TM_QW2_HV_POOL];
 
         /* POOL values only matter if POOL ctx is valid */
@@ -1201,20 +1154,25 @@ again:
         }
     }
 
-    rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
-    if (rc) {
-        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid context\n");
-        return;
-    }
-
     if (group_enabled &&
         lsmfb_min < cppr &&
         lsmfb_min < pipr_min) {
+
+        uint8_t nvp_blk;
+        uint32_t nvp_idx;
+
         /*
          * Thread has seen a group interrupt with a higher priority
          * than the new cppr or pending local interrupt. Check the
          * backlog
          */
+        rc = xive2_tctx_get_nvp_indexes(tctx, ring_min, &nvp_blk, &nvp_idx);
+        if (rc) {
+            qemu_log_mask(LOG_GUEST_ERROR, "XIVE: set CPPR on invalid "
+                                           "context\n");
+            return;
+        }
+
         if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
             qemu_log_mask(LOG_GUEST_ERROR, "XIVE: No NVP %x/%x\n",
                           nvp_blk, nvp_idx);
@@ -1260,6 +1218,63 @@ again:
     xive_tctx_pipr_set(tctx, ring_min, pipr_min, group_level);
 }
 
+/* NOTE: CPPR only exists for TM_QW1_OS and TM_QW3_HV_PHYS */
+static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
+{
+    uint8_t *sig_regs = &tctx->regs[sig_ring];
+    Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
+    uint8_t old_cppr;
+    uint8_t nsr = sig_regs[TM_NSR];
+
+    g_assert(sig_ring == TM_QW1_OS || sig_ring == TM_QW3_HV_PHYS);
+
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_NSR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_PIPR] == 0);
+    g_assert(tctx->regs[TM_QW2_HV_POOL + TM_CPPR] == 0);
+
+    /* XXX: should show pool IPB for PHYS ring */
+    trace_xive_tctx_set_cppr(tctx->cs->cpu_index, sig_ring,
+                             sig_regs[TM_IPB], sig_regs[TM_PIPR],
+                             cppr, nsr);
+
+    if (cppr > XIVE_PRIORITY_MAX) {
+        cppr = 0xff;
+    }
+
+    old_cppr = sig_regs[TM_CPPR];
+    sig_regs[TM_CPPR] = cppr;
+
+    /* Handle increased CPPR priority (lower value) */
+    if (cppr < old_cppr) {
+        if (cppr <= sig_regs[TM_PIPR]) {
+            /* CPPR lowered below PIPR, must un-present interrupt */
+            if (xive_nsr_indicates_exception(sig_ring, nsr)) {
+                if (xive_nsr_indicates_group_exception(sig_ring, nsr)) {
+                    /* redistribute precluded active grp interrupt */
+                    xive2_redistribute(xrtr, tctx,
+                                       xive_nsr_exception_ring(sig_ring, nsr));
+                    return;
+                }
+            }
+
+            /* interrupt is VP directed, pending in IPB */
+            xive_tctx_pipr_set(tctx, sig_ring, cppr, 0);
+            return;
+        } else {
+            /* CPPR was lowered, but still above PIPR. No action needed. */
+            return;
+        }
+    }
+
+    /* CPPR didn't change, nothing needs to be done */
+    if (cppr == old_cppr) {
+        return;
+    }
+
+    /* CPPR priority decreased (higher value) */
+    xive2_tctx_process_pending(tctx, sig_ring);
+}
+
 void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
                           hwaddr offset, uint64_t value, unsigned size)
 {
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 37/50] ppc/xive2: Consolidate presentation processing in context push
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (35 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 36/50] ppc/xive2: split tctx presentation processing from set CPPR Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Cédric Le Goater
                   ` (14 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

The OS-push operation must re-present pending interrupts. Use the
newly created xive2_tctx_process_pending() function instead of
duplicating the logic.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-38-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 42 ++++++++++--------------------------------
 1 file changed, 10 insertions(+), 32 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index c7356c5b2fd8..be945bef1c53 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -903,18 +903,14 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     return cppr;
 }
 
+static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
+
 static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
                                    uint8_t nvp_blk, uint32_t nvp_idx,
                                    bool do_restore)
 {
-    XivePresenter *xptr = XIVE_PRESENTER(xrtr);
-    uint8_t ipb;
-    uint8_t backlog_level;
-    uint8_t group_level;
-    uint8_t first_group;
-    uint8_t backlog_prio;
-    uint8_t group_prio;
     uint8_t *regs = &tctx->regs[TM_QW1_OS];
+    uint8_t ipb;
     Xive2Nvp nvp;
 
     /*
@@ -946,30 +942,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
     }
     /* IPB bits in the backlog are merged with the TIMA IPB bits */
     regs[TM_IPB] |= ipb;
-    backlog_prio = xive_ipb_to_pipr(regs[TM_IPB]);
-    backlog_level = 0;
-
-    first_group = xive_get_field32(NVP2_W0_PGOFIRST, nvp.w0);
-    if (first_group && regs[TM_LSMFB] < backlog_prio) {
-        group_prio = xive2_presenter_backlog_scan(xptr, nvp_blk, nvp_idx,
-                                                  first_group, &group_level);
-        regs[TM_LSMFB] = group_prio;
-        if (regs[TM_LGS] && group_prio < backlog_prio &&
-            group_prio < regs[TM_CPPR]) {
-
-            /* VP can take a group interrupt */
-            xive2_presenter_backlog_decr(xptr, nvp_blk, nvp_idx,
-                                         group_prio, group_level);
-            backlog_prio = group_prio;
-            backlog_level = group_level;
-        }
-    }
 
-    /*
-     * Set the PIPR/NSR based on the restored state.
-     * It will raise the External interrupt signal if needed.
-     */
-    xive_tctx_pipr_set(tctx, TM_QW1_OS, backlog_prio, backlog_level);
+    xive2_tctx_process_pending(tctx, TM_QW1_OS);
 }
 
 /*
@@ -1103,8 +1077,12 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
 {
     uint8_t *sig_regs = &tctx->regs[sig_ring];
     Xive2Router *xrtr = XIVE2_ROUTER(tctx->xptr);
-    uint8_t backlog_prio, first_group, group_level;
-    uint8_t pipr_min, lsmfb_min, ring_min;
+    uint8_t backlog_prio;
+    uint8_t first_group;
+    uint8_t group_level;
+    uint8_t pipr_min;
+    uint8_t lsmfb_min;
+    uint8_t ring_min;
     uint8_t cppr = sig_regs[TM_CPPR];
     bool group_enabled;
     Xive2Nvp nvp;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (36 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 37/50] ppc/xive2: Consolidate presentation processing in context push Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 39/50] ppc/xive: Assert group interrupts were redistributed Cédric Le Goater
                   ` (13 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

When CPPR priority is decreased, pending interrupts do not need to be
re-checked if one is already presented because by definition that will
be the highest priority.

This prevents a presented group interrupt from being lost.
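
A minimal standalone sketch of the reasoning (illustration only; it assumes
the XIVE convention that 0 is the most favoured priority, 0xFF the least,
and simplifies "an exception is presented" to NSR != 0):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * On a CPPR priority decrease, re-scan pending sources only when nothing
     * is presented: a presented exception is already the most favoured one.
     */
    static bool need_rescan_after_cppr_decrease(uint8_t nsr)
    {
        return nsr == 0;
    }

Example: CPPR goes from 2 to 6 (priority decreased) while a group interrupt
at priority 1 is presented. No re-scan runs, so the presented interrupt is
kept instead of being recomputed (and possibly lost) from the backlog.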

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-39-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index be945bef1c53..531e6517baa2 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1250,7 +1250,9 @@ static void xive2_tctx_set_cppr(XiveTCTX *tctx, uint8_t sig_ring, uint8_t cppr)
     }
 
     /* CPPR priority decreased (higher value) */
-    xive2_tctx_process_pending(tctx, sig_ring);
+    if (!xive_nsr_indicates_exception(sig_ring, nsr)) {
+        xive2_tctx_process_pending(tctx, sig_ring);
+    }
 }
 
 void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 39/50] ppc/xive: Assert group interrupts were redistributed
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (37 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 40/50] ppc/xive2: implement NVP context save restore for POOL ring Cédric Le Goater
                   ` (12 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Add some assertions to try to ensure presented group interrupts do
not get lost without being redistributed, if they become precluded
by CPPR or preempted by a higher priority interrupt.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-40-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c  | 2 ++
 hw/intc/xive2.c | 1 +
 2 files changed, 3 insertions(+)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 6ad84f93c77a..d609d552e89e 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -132,6 +132,8 @@ void xive_tctx_pipr_set(XiveTCTX *tctx, uint8_t ring, uint8_t pipr,
     uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
 
+    g_assert(!xive_nsr_indicates_group_exception(ring, sig_regs[TM_NSR]));
+
     sig_regs[TM_PIPR] = pipr;
 
     if (pipr < sig_regs[TM_CPPR]) {
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 531e6517baa2..a0a6b1a88179 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1089,6 +1089,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
     int rc;
 
     g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
+    g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
 
     /*
      * Recompute the PIPR based on local pending interrupts. It will
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 40/50] ppc/xive2: implement NVP context save restore for POOL ring
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (38 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 39/50] ppc/xive: Assert group interrupts were redistributed Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Cédric Le Goater
                   ` (11 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

In preparation for implementing POOL context push, add support for POOL
NVP context save/restore.

The NVP p bit is defined in the spec as follows:

    If TRUE, the CPPR of a Pool VP in the NVP is updated during store of
    the context with the CPPR of the Hard context it was running under.

It's not clear whether non-pool VPs always or never get CPPR updated.
Before this patch, OS contexts always save CPPR, so we will assume that
is the behaviour.
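
The resulting save condition can be summarised as follows (illustration only;
it reuses the names from the diff below and encodes the assumption above
about non-pool contexts):

    /*
     * CPPR is saved to the NVP when:
     *   - ring != TM_QW2_HV_POOL: always (the p bit is ignored), or
     *   - ring == TM_QW2_HV_POOL: only if the NVP p bit is set
     */
    if ((nvp.w0 & NVP2_W0_P) || ring != TM_QW2_HV_POOL) {
        nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, sig_regs[TM_CPPR]);
    }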

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-41-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2_regs.h |  1 +
 hw/intc/xive2.c             | 51 +++++++++++++++++++++++++------------
 2 files changed, 36 insertions(+), 16 deletions(-)

diff --git a/include/hw/ppc/xive2_regs.h b/include/hw/ppc/xive2_regs.h
index f82054661bda..2a3e60abadbf 100644
--- a/include/hw/ppc/xive2_regs.h
+++ b/include/hw/ppc/xive2_regs.h
@@ -158,6 +158,7 @@ typedef struct Xive2Nvp {
 #define NVP2_W0_L                  PPC_BIT32(8)
 #define NVP2_W0_G                  PPC_BIT32(9)
 #define NVP2_W0_T                  PPC_BIT32(10)
+#define NVP2_W0_P                  PPC_BIT32(11)
 #define NVP2_W0_ESC_END            PPC_BIT32(25) /* 'N' bit 0:ESB  1:END */
 #define NVP2_W0_PGOFIRST           PPC_BITMASK32(26, 31)
         uint32_t       w1;
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index a0a6b1a88179..7631d4886206 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -512,12 +512,13 @@ static void xive2_presenter_backlog_decr(XivePresenter *xptr,
  */
 
 static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
-                                uint8_t nvp_blk, uint32_t nvp_idx,
-                                uint8_t ring)
+                                uint8_t ring,
+                                uint8_t nvp_blk, uint32_t nvp_idx)
 {
     CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
     uint32_t pir = env->spr_cb[SPR_PIR].default_value;
     Xive2Nvp nvp;
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
 
     if (xive2_router_get_nvp(xrtr, nvp_blk, nvp_idx, &nvp)) {
@@ -553,7 +554,14 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     }
 
     nvp.w2 = xive_set_field32(NVP2_W2_IPB, nvp.w2, regs[TM_IPB]);
-    nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, regs[TM_CPPR]);
+
+    if ((nvp.w0 & NVP2_W0_P) || ring != TM_QW2_HV_POOL) {
+        /*
+         * Non-pool contexts always save CPPR (ignore p bit). XXX: Clarify
+         * whether that is the correct behaviour.
+         */
+        nvp.w2 = xive_set_field32(NVP2_W2_CPPR, nvp.w2, sig_regs[TM_CPPR]);
+    }
     if (nvp.w0 & NVP2_W0_L) {
         /*
          * Typically not used. If LSMFB is restored with 0, it will
@@ -722,7 +730,7 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     }
 
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
-        xive2_tctx_save_ctx(xrtr, tctx, nvp_blk, nvp_idx, ring);
+        xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
     }
 
     /*
@@ -863,12 +871,15 @@ void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
     xive2_tm_pull_ctx_ol(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
 }
 
-static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
-                                        uint8_t nvp_blk, uint32_t nvp_idx,
-                                        Xive2Nvp *nvp)
+static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
+                                      uint8_t ring,
+                                      uint8_t nvp_blk, uint32_t nvp_idx,
+                                      Xive2Nvp *nvp)
 {
     CPUPPCState *env = &POWERPC_CPU(tctx->cs)->env;
     uint32_t pir = env->spr_cb[SPR_PIR].default_value;
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+    uint8_t *regs = &tctx->regs[ring];
     uint8_t cppr;
 
     if (!xive2_nvp_is_hw(nvp)) {
@@ -881,10 +892,10 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     nvp->w2 = xive_set_field32(NVP2_W2_CPPR, nvp->w2, 0);
     xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 2);
 
-    tctx->regs[TM_QW1_OS + TM_CPPR] = cppr;
-    tctx->regs[TM_QW1_OS + TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
-    tctx->regs[TM_QW1_OS + TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
-    tctx->regs[TM_QW1_OS + TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
+    sig_regs[TM_CPPR] = cppr;
+    regs[TM_LSMFB] = xive_get_field32(NVP2_W2_LSMFB, nvp->w2);
+    regs[TM_LGS] = xive_get_field32(NVP2_W2_LGS, nvp->w2);
+    regs[TM_T] = xive_get_field32(NVP2_W2_T, nvp->w2);
 
     nvp->w1 = xive_set_field32(NVP2_W1_CO, nvp->w1, 1);
     nvp->w1 = xive_set_field32(NVP2_W1_CO_THRID_VALID, nvp->w1, 1);
@@ -893,9 +904,18 @@ static uint8_t xive2_tctx_restore_os_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     /*
      * Checkout privilege: 0:OS, 1:Pool, 2:Hard
      *
-     * TODO: we only support OS push/pull
+     * TODO: we don't support hard push/pull
      */
-    nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
+    switch (ring) {
+    case TM_QW1_OS:
+        nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 0);
+        break;
+    case TM_QW2_HV_POOL:
+        nvp->w1 = xive_set_field32(NVP2_W1_CO_PRIV, nvp->w1, 1);
+        break;
+    default:
+        g_assert_not_reached();
+    }
 
     xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, nvp, 1);
 
@@ -930,9 +950,8 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
     }
 
     /* Automatically restore thread context registers */
-    if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE &&
-        do_restore) {
-        xive2_tctx_restore_os_ctx(xrtr, tctx, nvp_blk, nvp_idx, &nvp);
+    if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
+        xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
     }
 
     ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (39 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 40/50] ppc/xive2: implement NVP context save restore for POOL ring Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 42/50] ppc/xive: Redistribute phys after pulling of pool context Cédric Le Goater
                   ` (10 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

When the pool context is pulled, the shared pool/phys signal is
reset, which loses the qemu irq if a phys interrupt was presented.

Only reset the signal if a pool irq was presented.
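
Sketch of the new condition (illustration only, using the NSR helpers from
earlier in this series): the shared POOL/PHYS signal is only reset when the
presented exception actually belongs to the ring being pulled:

    if (xive_nsr_indicates_exception(cur_ring, nsr)) {
        /* e.g. pulling POOL while a PHYS irq is presented: leave the line up */
        if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
            xive_tctx_reset_signal(tctx, cur_ring);
        }
    }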

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-42-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 7631d4886206..112397459afe 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -727,20 +727,22 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                 xive2_redistribute(xrtr, tctx, cur_ring);
             }
         }
+
+        /*
+         * Lower external interrupt line of requested ring and below except for
+         * USER, which doesn't exist.
+         */
+        if (xive_nsr_indicates_exception(cur_ring, nsr)) {
+            if (cur_ring == xive_nsr_exception_ring(cur_ring, nsr)) {
+                xive_tctx_reset_signal(tctx, cur_ring);
+            }
+        }
     }
 
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
         xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
     }
 
-    /*
-     * Lower external interrupt line of requested ring and below except for
-     * USER, which doesn't exist.
-     */
-    for (cur_ring = TM_QW1_OS; cur_ring <= ring;
-         cur_ring += XIVE_TM_RING_SIZE) {
-        xive_tctx_reset_signal(tctx, cur_ring);
-    }
     return target_ringw2;
 }
 
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 42/50] ppc/xive: Redistribute phys after pulling of pool context
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (40 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 43/50] ppc/xive: Check TIMA operations validity Cédric Le Goater
                   ` (9 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

After pulling the pool context, if a pool irq had been presented and
was cleared in the process, there could be a pending irq in phys that
should be presented. Process the phys irq ring after pulling pool ring
to catch this case and avoid losing irqs.
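
A concrete scenario this covers (illustration only, with made-up priorities):
a POOL interrupt at priority 3 is presented while a PHYS interrupt at
priority 5 sits in the PHYS IPB. Pulling the POOL context clears the
presented exception; without the extra step the priority-5 interrupt would
stay pending silently. Re-running the pending scan on the PHYS ring, as the
diff below does (the assert is omitted here), re-presents it:

    if (ring == TM_QW2_HV_POOL) {
        nsr = tctx->regs[TM_QW3_HV_PHYS + TM_NSR];
        if (!xive_nsr_indicates_exception(TM_QW3_HV_PHYS, nsr)) {
            xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
        }
    }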

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-43-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c  |  3 +++
 hw/intc/xive2.c | 16 ++++++++++++++--
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index d609d552e89e..50a38bbf2ef9 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -320,6 +320,9 @@ static uint64_t xive_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
 
     xive_tctx_reset_signal(tctx, TM_QW1_OS);
     xive_tctx_reset_signal(tctx, TM_QW2_HV_POOL);
+    /* Re-check phys for interrupts if pool was disabled */
+    xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW3_HV_PHYS);
+
     return qw2w2;
 }
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 112397459afe..e3f9ff384a1a 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -683,6 +683,8 @@ static void xive2_redistribute(Xive2Router *xrtr, XiveTCTX *tctx, uint8_t ring)
     xive_tctx_reset_signal(tctx, ring);
 }
 
+static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
+
 static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                   hwaddr offset, unsigned size, uint8_t ring)
 {
@@ -739,6 +741,18 @@ static uint64_t xive2_tm_pull_ctx(XivePresenter *xptr, XiveTCTX *tctx,
         }
     }
 
+    if (ring == TM_QW2_HV_POOL) {
+        /* Re-check phys for interrupts if pool was disabled */
+        nsr = tctx->regs[TM_QW3_HV_PHYS + TM_NSR];
+        if (xive_nsr_indicates_exception(TM_QW3_HV_PHYS, nsr)) {
+            /* Ring must be PHYS because POOL would have been redistributed */
+            g_assert(xive_nsr_exception_ring(TM_QW3_HV_PHYS, nsr) ==
+                                                           TM_QW3_HV_PHYS);
+        } else {
+            xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
+        }
+    }
+
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_save) {
         xive2_tctx_save_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx);
     }
@@ -925,8 +939,6 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     return cppr;
 }
 
-static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring);
-
 static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
                                    uint8_t nvp_blk, uint32_t nvp_idx,
                                    bool do_restore)
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 43/50] ppc/xive: Check TIMA operations validity
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (41 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 42/50] ppc/xive: Redistribute phys after pulling of pool context Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 44/50] ppc/xive2: Implement pool context push TIMA op Cédric Le Goater
                   ` (8 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Certain TIMA operations should only be performed when a ring is valid,
others when the ring is invalid, and they are considered undefined if
used incorrectly. Add checks for these conditions.
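
Ownership is derived from the ring valid (V) bits (illustration only; this
mirrors the xive_ring_valid() helper added in the diff below): a TIMA ring is
treated as HW-owned when its own context and every more privileged context up
to PHYS has the valid bit set in WORD2.

    /* e.g. the OS ring is HW-owned only if OS, POOL and PHYS are all valid */
    bool hw_owned = xive_ring_valid(tctx, ring);

    /*
     * An op with hw_ok == false is undefined on a valid (HW-owned) ring;
     * an op with sw_ok == false is undefined on an invalid (SW-owned) ring.
     */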

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-44-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive.h |   1 +
 hw/intc/xive.c        | 196 +++++++++++++++++++++++++-----------------
 2 files changed, 116 insertions(+), 81 deletions(-)

diff --git a/include/hw/ppc/xive.h b/include/hw/ppc/xive.h
index 2372d1014bd2..b7ca8544e431 100644
--- a/include/hw/ppc/xive.h
+++ b/include/hw/ppc/xive.h
@@ -365,6 +365,7 @@ static inline uint32_t xive_tctx_word2(uint8_t *ring)
     return *((uint32_t *) &ring[TM_WORD2]);
 }
 
+bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring);
 bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr);
 bool xive_nsr_indicates_group_exception(uint8_t ring, uint8_t nsr);
 uint8_t xive_nsr_exception_ring(uint8_t ring, uint8_t nsr);
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 50a38bbf2ef9..6589c0a523c9 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -25,6 +25,19 @@
 /*
  * XIVE Thread Interrupt Management context
  */
+bool xive_ring_valid(XiveTCTX *tctx, uint8_t ring)
+{
+    uint8_t cur_ring;
+
+    for (cur_ring = ring; cur_ring <= TM_QW3_HV_PHYS;
+         cur_ring += XIVE_TM_RING_SIZE) {
+        if (!(tctx->regs[cur_ring + TM_WORD2] & 0x80)) {
+            return false;
+        }
+    }
+    return true;
+}
+
 bool xive_nsr_indicates_exception(uint8_t ring, uint8_t nsr)
 {
     switch (ring) {
@@ -663,6 +676,8 @@ typedef struct XiveTmOp {
     uint8_t  page_offset;
     uint32_t op_offset;
     unsigned size;
+    bool     hw_ok;
+    bool     sw_ok;
     void     (*write_handler)(XivePresenter *xptr, XiveTCTX *tctx,
                               hwaddr offset,
                               uint64_t value, unsigned size);
@@ -675,34 +690,34 @@ static const XiveTmOp xive_tm_operations[] = {
      * MMIOs below 2K : raw values and special operations without side
      * effects
      */
-    { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR,       1, xive_tm_set_os_cppr,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2,      4, xive_tm_push_os_ctx,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, xive_tm_set_hv_cppr,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
-                                                     xive_tm_vt_poll },
+    { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR,       1, true, true,
+      xive_tm_set_os_cppr, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2,      4, true, true,
+      xive_tm_push_os_ctx, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, true, true,
+      xive_tm_set_hv_cppr, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
+      xive_tm_vt_push, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
+      NULL, xive_tm_vt_poll },
 
     /* MMIOs above 2K : special operations with side effects */
-    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG,         2, NULL,
-                                                     xive_tm_ack_os_reg },
-    { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING,     1, xive_tm_set_os_pending,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        4, NULL,
-                                                     xive_tm_pull_os_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        8, NULL,
-                                                     xive_tm_pull_os_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG,         2, NULL,
-                                                     xive_tm_ack_hv_reg },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      4, NULL,
-                                                     xive_tm_pull_pool_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      8, NULL,
-                                                     xive_tm_pull_pool_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX,      1, NULL,
-                                                     xive_tm_pull_phys_ctx },
+    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG,         2, true, false,
+      NULL, xive_tm_ack_os_reg },
+    { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING,     1, true, false,
+      xive_tm_set_os_pending, NULL },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        4, true, false,
+      NULL, xive_tm_pull_os_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        8, true, false,
+      NULL, xive_tm_pull_os_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG,         2, true, false,
+      NULL, xive_tm_ack_hv_reg },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      4, true, false,
+      NULL, xive_tm_pull_pool_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      8, true, false,
+      NULL, xive_tm_pull_pool_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX,      1, true, false,
+      NULL, xive_tm_pull_phys_ctx },
 };
 
 static const XiveTmOp xive2_tm_operations[] = {
@@ -710,52 +725,48 @@ static const XiveTmOp xive2_tm_operations[] = {
      * MMIOs below 2K : raw values and special operations without side
      * effects
      */
-    { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR,       1, xive2_tm_set_os_cppr,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2,      4, xive2_tm_push_os_ctx,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2,      8, xive2_tm_push_os_ctx,
-                                                     NULL },
-    { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS,        1, xive_tm_set_os_lgs,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, xive2_tm_set_hv_cppr,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, xive_tm_vt_push,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, NULL,
-                                                     xive_tm_vt_poll },
-    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T,     1, xive2_tm_set_hv_target,
-                                                     NULL },
+    { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_CPPR,       1, true, true,
+      xive2_tm_set_os_cppr, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2,      4, true, true,
+      xive2_tm_push_os_ctx, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW1_OS + TM_WORD2,      8, true, true,
+      xive2_tm_push_os_ctx, NULL },
+    { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS,        1, true, true,
+      xive_tm_set_os_lgs, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, true, true,
+      xive2_tm_set_hv_cppr, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
+      NULL, xive_tm_vt_poll },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T,     1, true, true,
+      xive2_tm_set_hv_target, NULL },
 
     /* MMIOs above 2K : special operations with side effects */
-    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG,         2, NULL,
-                                                   xive_tm_ack_os_reg },
-    { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING,     1, xive_tm_set_os_pending,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2,     4, NULL,
-                                                     xive2_tm_pull_os_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        4, NULL,
-                                                     xive2_tm_pull_os_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        8, NULL,
-                                                     xive2_tm_pull_os_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG,         2, NULL,
-                                                     xive_tm_ack_hv_reg },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2,   4, NULL,
-                                                     xive2_tm_pull_pool_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      4, NULL,
-                                                     xive2_tm_pull_pool_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      8, NULL,
-                                                     xive2_tm_pull_pool_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL,     1, xive2_tm_pull_os_ctx_ol,
-                                                     NULL },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2,   4, NULL,
-                                                     xive2_tm_pull_phys_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX,      1, NULL,
-                                                     xive2_tm_pull_phys_ctx },
-    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL,   1, xive2_tm_pull_phys_ctx_ol,
-                                                     NULL },
-    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL,          1, xive2_tm_ack_os_el,
-                                                     NULL },
+    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG,         2, true, false,
+      NULL, xive_tm_ack_os_reg },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2,     4, true, false,
+      NULL, xive2_tm_pull_os_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        4, true, false,
+      NULL, xive2_tm_pull_os_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        8, true, false,
+      NULL, xive2_tm_pull_os_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_ACK_HV_REG,         2, true, false,
+      NULL, xive_tm_ack_hv_reg },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX_G2,   4, true, false,
+      NULL, xive2_tm_pull_pool_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      4, true, false,
+      NULL, xive2_tm_pull_pool_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_POOL_CTX,      8, true, false,
+      NULL, xive2_tm_pull_pool_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_OL,     1, true, false,
+      xive2_tm_pull_os_ctx_ol, NULL },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_G2,   4, true, false,
+      NULL, xive2_tm_pull_phys_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX,      1, true, false,
+      NULL, xive2_tm_pull_phys_ctx },
+    { XIVE_TM_HV_PAGE, TM_SPC_PULL_PHYS_CTX_OL,   1, true, false,
+      xive2_tm_pull_phys_ctx_ol, NULL },
+    { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_EL,          1, true, false,
+      xive2_tm_ack_os_el, NULL },
 };
 
 static const XiveTmOp *xive_tm_find_op(XivePresenter *xptr, hwaddr offset,
@@ -797,18 +808,28 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
                         uint64_t value, unsigned size)
 {
     const XiveTmOp *xto;
+    uint8_t ring = offset & TM_RING_OFFSET;
+    bool is_valid = xive_ring_valid(tctx, ring);
+    bool hw_owned = is_valid;
 
     trace_xive_tctx_tm_write(tctx->cs->cpu_index, offset, size, value);
 
-    /*
-     * TODO: check V bit in Q[0-3]W2
-     */
-
     /*
      * First, check for special operations in the 2K region
      */
+    xto = xive_tm_find_op(tctx->xptr, offset, size, true);
+    if (xto) {
+        if (hw_owned && !xto->hw_ok) {
+            qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
+                          "@%"HWADDR_PRIx" size %d\n", offset, size);
+        }
+        if (!hw_owned && !xto->sw_ok) {
+            qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to SW TIMA "
+                          "@%"HWADDR_PRIx" size %d\n", offset, size);
+        }
+    }
+
     if (offset & TM_SPECIAL_OP) {
-        xto = xive_tm_find_op(tctx->xptr, offset, size, true);
         if (!xto) {
             qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid write access at TIMA "
                           "@%"HWADDR_PRIx" size %d\n", offset, size);
@@ -821,7 +842,6 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
     /*
      * Then, for special operations in the region below 2K.
      */
-    xto = xive_tm_find_op(tctx->xptr, offset, size, true);
     if (xto) {
         xto->write_handler(xptr, tctx, offset, value, size);
         return;
@@ -830,6 +850,11 @@ void xive_tctx_tm_write(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
     /*
      * Finish with raw access to the register values
      */
+    if (hw_owned) {
+        /* Store context operations are dangerous when context is valid */
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined write to HW TIMA "
+                      "@%"HWADDR_PRIx" size %d\n", offset, size);
+    }
     xive_tm_raw_write(tctx, offset, value, size);
 }
 
@@ -837,17 +862,27 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
                            unsigned size)
 {
     const XiveTmOp *xto;
+    uint8_t ring = offset & TM_RING_OFFSET;
+    bool is_valid = xive_ring_valid(tctx, ring);
+    bool hw_owned = is_valid;
     uint64_t ret;
 
-    /*
-     * TODO: check V bit in Q[0-3]W2
-     */
+    xto = xive_tm_find_op(tctx->xptr, offset, size, false);
+    if (xto) {
+        if (hw_owned && !xto->hw_ok) {
+            qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to HW TIMA "
+                          "@%"HWADDR_PRIx" size %d\n", offset, size);
+        }
+        if (!hw_owned && !xto->sw_ok) {
+            qemu_log_mask(LOG_GUEST_ERROR, "XIVE: undefined read to SW TIMA "
+                          "@%"HWADDR_PRIx" size %d\n", offset, size);
+        }
+    }
 
     /*
      * First, check for special operations in the 2K region
      */
     if (offset & TM_SPECIAL_OP) {
-        xto = xive_tm_find_op(tctx->xptr, offset, size, false);
         if (!xto) {
             qemu_log_mask(LOG_GUEST_ERROR, "XIVE: invalid read access to TIMA"
                           "@%"HWADDR_PRIx" size %d\n", offset, size);
@@ -860,7 +895,6 @@ uint64_t xive_tctx_tm_read(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
     /*
      * Then, for special operations in the region below 2K.
      */
-    xto = xive_tm_find_op(tctx->xptr, offset, size, false);
     if (xto) {
         ret = xto->read_handler(xptr, tctx, offset, size);
         goto out;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 44/50] ppc/xive2: Implement pool context push TIMA op
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (42 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 43/50] ppc/xive: Check TIMA operations validity Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 45/50] ppc/xive2: redistribute group interrupts on context push Cédric Le Goater
                   ` (7 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Implement pool context push TIMA op.
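
As an illustration only (the helper and pointer names below are
assumptions, not part of this change), the operation is reached with a
4- or 8-byte MMIO store of the CAM word(s) to the POOL ring WORD2
offset on the HV TIMA page:

    /* Hypothetical guest/firmware-side sketch: a 4-byte store to
     * TM_QW2_HV_POOL + TM_WORD2 on the HV page invokes
     * xive2_tm_push_pool_ctx() in the model. */
    static void push_pool_ctx(volatile uint8_t *tima_hv, uint32_t cam)
    {
        *(volatile uint32_t *)(tima_hv + TM_QW2_HV_POOL + TM_WORD2) = cam;
    }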

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-45-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2.h |  2 ++
 hw/intc/xive.c         |  4 ++++
 hw/intc/xive2.c        | 50 ++++++++++++++++++++++++++++--------------
 3 files changed, 39 insertions(+), 17 deletions(-)

diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index a91b99057c2a..c1ab06a55adf 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -140,6 +140,8 @@ bool xive2_tm_irq_precluded(XiveTCTX *tctx, int ring, uint8_t priority);
 void xive2_tm_set_lsmfb(XiveTCTX *tctx, int ring, uint8_t priority);
 void xive2_tm_set_hv_target(XivePresenter *xptr, XiveTCTX *tctx,
                             hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                            hwaddr offset, uint64_t value, unsigned size);
 uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                 hwaddr offset, unsigned size);
 uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 6589c0a523c9..e7f77be2f711 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -733,6 +733,10 @@ static const XiveTmOp xive2_tm_operations[] = {
       xive2_tm_push_os_ctx, NULL },
     { XIVE_TM_OS_PAGE, TM_QW1_OS + TM_LGS,        1, true, true,
       xive_tm_set_os_lgs, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 4, true, true,
+      xive2_tm_push_pool_ctx, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
+      xive2_tm_push_pool_ctx, NULL },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, true, true,
       xive2_tm_set_hv_cppr, NULL },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index e3f9ff384a1a..4244e1d02b61 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -583,6 +583,7 @@ static void xive2_tctx_save_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     xive2_router_write_nvp(xrtr, nvp_blk, nvp_idx, &nvp, 1);
 }
 
+/* POOL cam is the same as OS cam encoding */
 static void xive2_cam_decode(uint32_t cam, uint8_t *nvp_blk,
                              uint32_t *nvp_idx, bool *valid, bool *hw)
 {
@@ -940,10 +941,11 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
 }
 
 static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
+                                   uint8_t ring,
                                    uint8_t nvp_blk, uint32_t nvp_idx,
                                    bool do_restore)
 {
-    uint8_t *regs = &tctx->regs[TM_QW1_OS];
+    uint8_t *regs = &tctx->regs[ring];
     uint8_t ipb;
     Xive2Nvp nvp;
 
@@ -965,7 +967,7 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
 
     /* Automatically restore thread context registers */
     if (xive2_router_get_config(xrtr) & XIVE2_VP_SAVE_RESTORE && do_restore) {
-        xive2_tctx_restore_ctx(xrtr, tctx, TM_QW1_OS, nvp_blk, nvp_idx, &nvp);
+        xive2_tctx_restore_ctx(xrtr, tctx, ring, nvp_blk, nvp_idx, &nvp);
     }
 
     ipb = xive_get_field32(NVP2_W2_IPB, nvp.w2);
@@ -976,48 +978,62 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
     /* IPB bits in the backlog are merged with the TIMA IPB bits */
     regs[TM_IPB] |= ipb;
 
-    xive2_tctx_process_pending(tctx, TM_QW1_OS);
+    xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
+                                         TM_QW3_HV_PHYS : ring);
 }
 
 /*
- * Updating the OS CAM line can trigger a resend of interrupt
+ * Updating the ring CAM line can trigger a resend of interrupt
  */
-void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
-                          hwaddr offset, uint64_t value, unsigned size)
+static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                              hwaddr offset, uint64_t value, unsigned size,
+                              uint8_t ring)
 {
     uint32_t cam;
-    uint32_t qw1w2;
-    uint64_t qw1dw1;
+    uint32_t w2;
+    uint64_t dw1;
     uint8_t nvp_blk;
     uint32_t nvp_idx;
-    bool vo;
+    bool v;
     bool do_restore;
 
     /* First update the thead context */
     switch (size) {
     case 4:
         cam = value;
-        qw1w2 = cpu_to_be32(cam);
-        memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1w2, 4);
+        w2 = cpu_to_be32(cam);
+        memcpy(&tctx->regs[ring + TM_WORD2], &w2, 4);
         break;
     case 8:
         cam = value >> 32;
-        qw1dw1 = cpu_to_be64(value);
-        memcpy(&tctx->regs[TM_QW1_OS + TM_WORD2], &qw1dw1, 8);
+        dw1 = cpu_to_be64(value);
+        memcpy(&tctx->regs[ring + TM_WORD2], &dw1, 8);
         break;
     default:
         g_assert_not_reached();
     }
 
-    xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &vo, &do_restore);
+    xive2_cam_decode(cam, &nvp_blk, &nvp_idx, &v, &do_restore);
 
     /* Check the interrupt pending bits */
-    if (vo) {
-        xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, nvp_blk, nvp_idx,
-                               do_restore);
+    if (v) {
+        xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
+                               nvp_blk, nvp_idx, do_restore);
     }
 }
 
+void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                          hwaddr offset, uint64_t value, unsigned size)
+{
+    xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW1_OS);
+}
+
+void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                            hwaddr offset, uint64_t value, unsigned size)
+{
+    xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
+}
+
 /* returns -1 if ring is invalid, but still populates block and index */
 static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
                                       uint8_t *nvp_blk, uint32_t *nvp_idx)
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 45/50] ppc/xive2: redistribute group interrupts on context push
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (43 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 44/50] ppc/xive2: Implement pool context push TIMA op Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 46/50] ppc/xive2: Implement set_os_pending TIMA op Cédric Le Goater
                   ` (6 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

When pushing a context, any presented group interrupt should be
redistributed before processing pending interrupts, so that the
highest priority interrupt is presented.

This can occur when pushing the POOL ring while the valid PHYS
ring has a group interrupt presented, because they share signal
registers.
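
For illustration only (the helper name below is made up, not part of
this change), this is how the shared signalling registers are reached
in the model:

    /* Illustrative sketch: in this model, POOL interrupts are signalled
     * through the PHYS (QW3) NSR/PIPR/CPPR bytes. */
    static uint8_t pool_ring_nsr(XiveTCTX *tctx)
    {
        uint8_t *sig_regs = xive_tctx_signal_regs(tctx, TM_QW2_HV_POOL);
        return sig_regs[TM_NSR];
    }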

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-46-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 4244e1d02b61..23eb85bb8669 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -945,8 +945,9 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
                                    uint8_t nvp_blk, uint32_t nvp_idx,
                                    bool do_restore)
 {
+    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
-    uint8_t ipb;
+    uint8_t ipb, nsr = sig_regs[TM_NSR];
     Xive2Nvp nvp;
 
     /*
@@ -978,6 +979,11 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
     /* IPB bits in the backlog are merged with the TIMA IPB bits */
     regs[TM_IPB] |= ipb;
 
+    if (xive_nsr_indicates_group_exception(ring, nsr)) {
+        /* redistribute precluded active grp interrupt */
+        g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
+        xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
+    }
     xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
                                          TM_QW3_HV_PHYS : ring);
 }
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 46/50] ppc/xive2: Implement set_os_pending TIMA op
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (44 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 45/50] ppc/xive2: redistribute group interrupts on context push Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 47/50] ppc/xive2: Implement POOL LGS push " Cédric Le Goater
                   ` (5 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

xive2 must take into account redistribution of group interrupts if
the VP-directed priority exceeds the group interrupt priority after
this operation. The xive1 code is not group-aware, so implement this
for xive2.
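
As a usage sketch (the helper and pointer names are assumptions, not
part of this change), the operation is a 1-byte store of the priority
to the SET_OS_PENDING special offset on the OS TIMA page:

    /* Hypothetical guest-side sketch: a 1-byte store of 'prio' to
     * TM_SPC_SET_OS_PENDING on the OS page invokes
     * xive2_tm_set_os_pending() in the model. */
    static void set_os_pending(volatile uint8_t *tima_os, uint8_t prio)
    {
        tima_os[TM_SPC_SET_OS_PENDING] = prio;
    }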

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-47-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2.h |  2 ++
 hw/intc/xive.c         |  2 ++
 hw/intc/xive2.c        | 28 ++++++++++++++++++++++++++++
 3 files changed, 32 insertions(+)

diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index c1ab06a55adf..45266c2a8b9e 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -130,6 +130,8 @@ void xive2_tm_set_hv_cppr(XivePresenter *xptr, XiveTCTX *tctx,
                           hwaddr offset, uint64_t value, unsigned size);
 void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
                           hwaddr offset, uint64_t value, unsigned size);
+void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
+                             hwaddr offset, uint64_t value, unsigned size);
 void xive2_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx, hwaddr offset,
                            uint64_t value, unsigned size);
 uint64_t xive2_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index e7f77be2f711..25cb3877cb15 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
     /* MMIOs above 2K : special operations with side effects */
     { XIVE_TM_OS_PAGE, TM_SPC_ACK_OS_REG,         2, true, false,
       NULL, xive_tm_ack_os_reg },
+    { XIVE_TM_OS_PAGE, TM_SPC_SET_OS_PENDING,     1, true, false,
+      xive2_tm_set_os_pending, NULL },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX_G2,     4, true, false,
       NULL, xive2_tm_pull_os_ctx },
     { XIVE_TM_HV_PAGE, TM_SPC_PULL_OS_CTX,        4, true, false,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 23eb85bb8669..f9eaea119289 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1323,6 +1323,34 @@ void xive2_tm_set_os_cppr(XivePresenter *xptr, XiveTCTX *tctx,
     xive2_tctx_set_cppr(tctx, TM_QW1_OS, value & 0xff);
 }
 
+/*
+ * Adjust the IPB to allow a CPU to process event queues of other
+ * priorities during one physical interrupt cycle.
+ */
+void xive2_tm_set_os_pending(XivePresenter *xptr, XiveTCTX *tctx,
+                             hwaddr offset, uint64_t value, unsigned size)
+{
+    Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+    uint8_t ring = TM_QW1_OS;
+    uint8_t *regs = &tctx->regs[ring];
+    uint8_t priority = value & 0xff;
+
+    /*
+     * XXX: should this simply set a bit in IPB and wait for it to be picked
+     * up next cycle, or is it supposed to present it now? We implement the
+     * latter here.
+     */
+    regs[TM_IPB] |= xive_priority_to_ipb(priority);
+    if (xive_ipb_to_pipr(regs[TM_IPB]) >= regs[TM_PIPR]) {
+        return;
+    }
+    if (xive_nsr_indicates_group_exception(ring, regs[TM_NSR])) {
+        xive2_redistribute(xrtr, tctx, ring);
+    }
+
+    xive_tctx_pipr_present(tctx, ring, priority, 0);
+}
+
 static void xive2_tctx_set_target(XiveTCTX *tctx, uint8_t ring, uint8_t target)
 {
     uint8_t *regs = &tctx->regs[ring];
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 47/50] ppc/xive2: Implement POOL LGS push TIMA op
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (45 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 46/50] ppc/xive2: Implement set_os_pending TIMA op Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 48/50] ppc/xive2: Implement PHYS ring VP " Cédric Le Goater
                   ` (4 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Implement set LGS for the POOL ring.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-48-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 25cb3877cb15..725ba72b8f7a 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -532,6 +532,12 @@ static void xive_tm_set_os_lgs(XivePresenter *xptr, XiveTCTX *tctx,
     xive_tctx_set_lgs(tctx, TM_QW1_OS, value & 0xff);
 }
 
+static void xive_tm_set_pool_lgs(XivePresenter *xptr, XiveTCTX *tctx,
+                          hwaddr offset, uint64_t value, unsigned size)
+{
+    xive_tctx_set_lgs(tctx, TM_QW2_HV_POOL, value & 0xff);
+}
+
 /*
  * Adjust the PIPR to allow a CPU to process event queues of other
  * priorities during one physical interrupt cycle.
@@ -737,6 +743,8 @@ static const XiveTmOp xive2_tm_operations[] = {
       xive2_tm_push_pool_ctx, NULL },
     { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_WORD2, 8, true, true,
       xive2_tm_push_pool_ctx, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW2_HV_POOL + TM_LGS,   1, true, true,
+      xive_tm_set_pool_lgs, NULL },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, true, true,
       xive2_tm_set_hv_cppr, NULL },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 48/50] ppc/xive2: Implement PHYS ring VP push TIMA op
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (46 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 47/50] ppc/xive2: Implement POOL LGS push " Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 49/50] ppc/xive: Split need_resend into restore_nvp Cédric Le Goater
                   ` (3 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

Implement the phys (aka hard) VP push. PowerVM uses this operation.
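
As an illustration (the helper and pointer names are assumptions, not
part of this change), the push is a 1-byte store of the V (and
optionally H) bit to WORD2 of the PHYS ring on the HV TIMA page, while
the ring is not yet valid; the CAM line itself is derived from the
hardware thread by the model:

    /* Hypothetical firmware-side sketch: setting the V bit in
     * TM_QW3_HV_PHYS + TM_WORD2 invokes xive2_tm_push_phys_ctx(), which
     * fills in the CAM line from xive2_tctx_hw_cam_line(). */
    static void push_phys_ctx(volatile uint8_t *tima_hv)
    {
        tima_hv[TM_QW3_HV_PHYS + TM_WORD2] = 0x80;   /* V bit */
    }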

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-49-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 include/hw/ppc/xive2.h |  2 ++
 hw/intc/xive.c         |  2 ++
 hw/intc/xive2.c        | 11 +++++++++++
 3 files changed, 15 insertions(+)

diff --git a/include/hw/ppc/xive2.h b/include/hw/ppc/xive2.h
index 45266c2a8b9e..f4437e2c79a7 100644
--- a/include/hw/ppc/xive2.h
+++ b/include/hw/ppc/xive2.h
@@ -146,6 +146,8 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                             hwaddr offset, uint64_t value, unsigned size);
 uint64_t xive2_tm_pull_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                 hwaddr offset, unsigned size);
+void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                            hwaddr offset, uint64_t value, unsigned size);
 uint64_t xive2_tm_pull_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
                                 hwaddr offset, unsigned size);
 void xive2_tm_pull_phys_ctx_ol(XivePresenter *xptr, XiveTCTX *tctx,
diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 725ba72b8f7a..8b7182fbb86c 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -747,6 +747,8 @@ static const XiveTmOp xive2_tm_operations[] = {
       xive_tm_set_pool_lgs, NULL },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_CPPR,  1, true, true,
       xive2_tm_set_hv_cppr, NULL },
+    { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, false, true,
+      xive2_tm_push_phys_ctx, NULL },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_WORD2, 1, true, true,
       NULL, xive_tm_vt_poll },
     { XIVE_TM_HV_PAGE, TM_QW3_HV_PHYS + TM_T,     1, true, true,
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index f9eaea119289..1b005687961a 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -1005,6 +1005,11 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
 
     /* First update the thead context */
     switch (size) {
+    case 1:
+        tctx->regs[ring + TM_WORD2] = value & 0xff;
+        cam = xive2_tctx_hw_cam_line(xptr, tctx);
+        cam |= ((value & 0xc0) << 24); /* V and H bits */
+        break;
     case 4:
         cam = value;
         w2 = cpu_to_be32(cam);
@@ -1040,6 +1045,12 @@ void xive2_tm_push_pool_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW2_HV_POOL);
 }
 
+void xive2_tm_push_phys_ctx(XivePresenter *xptr, XiveTCTX *tctx,
+                            hwaddr offset, uint64_t value, unsigned size)
+{
+    xive2_tm_push_ctx(xptr, tctx, offset, value, size, TM_QW3_HV_PHYS);
+}
+
 /* returns -1 if ring is invalid, but still populates block and index */
 static int xive2_tctx_get_nvp_indexes(XiveTCTX *tctx, uint8_t ring,
                                       uint8_t *nvp_blk, uint32_t *nvp_idx)
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 49/50] ppc/xive: Split need_resend into restore_nvp
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (47 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 48/50] ppc/xive2: Implement PHYS ring VP " Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-21 16:22 ` [PULL 50/50] ppc/xive2: Enable lower level contexts on VP push Cédric Le Goater
                   ` (2 subsequent siblings)
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

This is needed by the next patch which will re-send on all lower
rings when pushing a context.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-50-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive.c  | 24 ++++++++++++------------
 hw/intc/xive2.c | 28 ++++++++++++++++------------
 2 files changed, 28 insertions(+), 24 deletions(-)

diff --git a/hw/intc/xive.c b/hw/intc/xive.c
index 8b7182fbb86c..e0ffcf89ebff 100644
--- a/hw/intc/xive.c
+++ b/hw/intc/xive.c
@@ -606,7 +606,7 @@ static uint64_t xive_tm_pull_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     return qw1w2;
 }
 
-static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
+static void xive_tctx_restore_nvp(XiveRouter *xrtr, XiveTCTX *tctx,
                                   uint8_t nvt_blk, uint32_t nvt_idx)
 {
     XiveNVT nvt;
@@ -632,16 +632,6 @@ static void xive_tctx_need_resend(XiveRouter *xrtr, XiveTCTX *tctx,
         uint8_t *regs = &tctx->regs[TM_QW1_OS];
         regs[TM_IPB] |= ipb;
     }
-
-    /*
-     * Always call xive_tctx_recompute_from_ipb(). Even if there were no
-     * escalation triggered, there could be a pending interrupt which
-     * was saved when the context was pulled and that we need to take
-     * into account by recalculating the PIPR (which is not
-     * saved/restored).
-     * It will also raise the External interrupt signal if needed.
-     */
-    xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
 }
 
 /*
@@ -663,7 +653,17 @@ static void xive_tm_push_os_ctx(XivePresenter *xptr, XiveTCTX *tctx,
 
     /* Check the interrupt pending bits */
     if (vo) {
-        xive_tctx_need_resend(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
+        xive_tctx_restore_nvp(XIVE_ROUTER(xptr), tctx, nvt_blk, nvt_idx);
+
+        /*
+         * Always call xive_tctx_recompute_from_ipb(). Even if there were no
+         * escalation triggered, there could be a pending interrupt which
+         * was saved when the context was pulled and that we need to take
+         * into account by recalculating the PIPR (which is not
+         * saved/restored).
+         * It will also raise the External interrupt signal if needed.
+         */
+        xive_tctx_pipr_recompute_from_ipb(tctx, TM_QW1_OS); /* fxb */
     }
 }
 
diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index 1b005687961a..c3c6871e91b3 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -940,14 +940,14 @@ static uint8_t xive2_tctx_restore_ctx(Xive2Router *xrtr, XiveTCTX *tctx,
     return cppr;
 }
 
-static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
+/* Restore TIMA VP context from NVP backlog */
+static void xive2_tctx_restore_nvp(Xive2Router *xrtr, XiveTCTX *tctx,
                                    uint8_t ring,
                                    uint8_t nvp_blk, uint32_t nvp_idx,
                                    bool do_restore)
 {
-    uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
     uint8_t *regs = &tctx->regs[ring];
-    uint8_t ipb, nsr = sig_regs[TM_NSR];
+    uint8_t ipb;
     Xive2Nvp nvp;
 
     /*
@@ -978,14 +978,6 @@ static void xive2_tctx_need_resend(Xive2Router *xrtr, XiveTCTX *tctx,
     }
     /* IPB bits in the backlog are merged with the TIMA IPB bits */
     regs[TM_IPB] |= ipb;
-
-    if (xive_nsr_indicates_group_exception(ring, nsr)) {
-        /* redistribute precluded active grp interrupt */
-        g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the grp interrupt */
-        xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
-    }
-    xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
-                                         TM_QW3_HV_PHYS : ring);
 }
 
 /*
@@ -1028,8 +1020,20 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
 
     /* Check the interrupt pending bits */
     if (v) {
-        xive2_tctx_need_resend(XIVE2_ROUTER(xptr), tctx, ring,
+        Xive2Router *xrtr = XIVE2_ROUTER(xptr);
+        uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
+        uint8_t nsr = sig_regs[TM_NSR];
+
+        xive2_tctx_restore_nvp(xrtr, tctx, ring,
                                nvp_blk, nvp_idx, do_restore);
+
+        if (xive_nsr_indicates_group_exception(ring, nsr)) {
+            /* redistribute precluded active grp interrupt */
+            g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
+            xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
+        }
+        xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
+                                                 TM_QW3_HV_PHYS : ring);
     }
 }
 
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PULL 50/50] ppc/xive2: Enable lower level contexts on VP push
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (48 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 49/50] ppc/xive: Split need_resend into restore_nvp Cédric Le Goater
@ 2025-07-21 16:22 ` Cédric Le Goater
  2025-07-22 11:20 ` [PULL 00/50] ppc queue Stefan Hajnoczi
  2025-07-22 11:44 ` Michael Tokarev
  51 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-21 16:22 UTC (permalink / raw)
  To: qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Gautam Menghani, Cédric Le Goater

From: Nicholas Piggin <npiggin@gmail.com>

When pushing a context, the lower-level context becomes valid if it
had V=1, and so on. Iterate over the lower-level contexts and send
them pending interrupts if they become enabled.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Glenn Miles <milesg@linux.ibm.com>
Reviewed-by: Michael Kowal <kowal@linux.ibm.com>
Tested-by: Gautam Menghani <gautam@linux.ibm.com>
Link: https://lore.kernel.org/qemu-devel/20250512031100.439842-51-npiggin@gmail.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
 hw/intc/xive2.c | 36 ++++++++++++++++++++++++++++--------
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/hw/intc/xive2.c b/hw/intc/xive2.c
index c3c6871e91b3..ee5fa2617849 100644
--- a/hw/intc/xive2.c
+++ b/hw/intc/xive2.c
@@ -995,6 +995,12 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     bool v;
     bool do_restore;
 
+    if (xive_ring_valid(tctx, ring)) {
+        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: Attempt to push VP to enabled"
+                                       " ring 0x%02x\n", ring);
+        return;
+    }
+
     /* First update the thead context */
     switch (size) {
     case 1:
@@ -1021,19 +1027,32 @@ static void xive2_tm_push_ctx(XivePresenter *xptr, XiveTCTX *tctx,
     /* Check the interrupt pending bits */
     if (v) {
         Xive2Router *xrtr = XIVE2_ROUTER(xptr);
-        uint8_t *sig_regs = xive_tctx_signal_regs(tctx, ring);
-        uint8_t nsr = sig_regs[TM_NSR];
+        uint8_t cur_ring;
 
         xive2_tctx_restore_nvp(xrtr, tctx, ring,
                                nvp_blk, nvp_idx, do_restore);
 
-        if (xive_nsr_indicates_group_exception(ring, nsr)) {
-            /* redistribute precluded active grp interrupt */
-            g_assert(ring == TM_QW2_HV_POOL); /* PHYS ring has the interrupt */
-            xive2_redistribute(xrtr, tctx, xive_nsr_exception_ring(ring, nsr));
+        for (cur_ring = TM_QW1_OS; cur_ring <= ring;
+             cur_ring += XIVE_TM_RING_SIZE) {
+            uint8_t *sig_regs = xive_tctx_signal_regs(tctx, cur_ring);
+            uint8_t nsr = sig_regs[TM_NSR];
+
+            if (!xive_ring_valid(tctx, cur_ring)) {
+                continue;
+            }
+
+            if (cur_ring == TM_QW2_HV_POOL) {
+                if (xive_nsr_indicates_exception(cur_ring, nsr)) {
+                    g_assert(xive_nsr_exception_ring(cur_ring, nsr) ==
+                                                               TM_QW3_HV_PHYS);
+                    xive2_redistribute(xrtr, tctx,
+                                       xive_nsr_exception_ring(ring, nsr));
+                }
+                xive2_tctx_process_pending(tctx, TM_QW3_HV_PHYS);
+                break;
+            }
+            xive2_tctx_process_pending(tctx, cur_ring);
         }
-        xive2_tctx_process_pending(tctx, ring == TM_QW2_HV_POOL ?
-                                                 TM_QW3_HV_PHYS : ring);
     }
 }
 
@@ -1159,6 +1178,7 @@ static void xive2_tctx_process_pending(XiveTCTX *tctx, uint8_t sig_ring)
     int rc;
 
     g_assert(sig_ring == TM_QW3_HV_PHYS || sig_ring == TM_QW1_OS);
+    g_assert(sig_regs[TM_WORD2] & 0x80);
     g_assert(!xive_nsr_indicates_group_exception(sig_ring, sig_regs[TM_NSR]));
 
     /*
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (49 preceding siblings ...)
  2025-07-21 16:22 ` [PULL 50/50] ppc/xive2: Enable lower level contexts on VP push Cédric Le Goater
@ 2025-07-22 11:20 ` Stefan Hajnoczi
  2025-07-22 11:44 ` Michael Tokarev
  51 siblings, 0 replies; 67+ messages in thread
From: Stefan Hajnoczi @ 2025-07-22 11:20 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: qemu-devel, Nicholas Piggin, Daniel Henrique Barboza,
	Cédric Le Goater


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/10.1 for any user-visible changes.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
                   ` (50 preceding siblings ...)
  2025-07-22 11:20 ` [PULL 00/50] ppc queue Stefan Hajnoczi
@ 2025-07-22 11:44 ` Michael Tokarev
  2025-07-22 13:37   ` Cédric Le Goater
  51 siblings, 1 reply; 67+ messages in thread
From: Michael Tokarev @ 2025-07-22 11:44 UTC (permalink / raw)
  To: Cédric Le Goater, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza

21.07.2025 19:21, Cédric Le Goater wrote:

> ----------------------------------------------------------------
> ppc/xive queue:
> 
> * Various bug fixes around lost interrupts particularly.
> * Major group interrupt work, in particular around redistributing
>    interrupts. Upstream group support is not in a complete or usable
>    state as it is.
> * Significant context push/pull improvements, particularly pool and
>    phys context handling was quite incomplete beyond trivial OPAL
>    case that pushes at boot.
> * Improved tracing and checking for unimp and guest error situations.
> * Various other missing feature support.

Is there anything in there which should be picked up for
stable qemu branches?

Thanks,

/mjt

> ----------------------------------------------------------------
> Glenn Miles (12):
>        ppc/xive2: Fix calculation of END queue sizes
>        ppc/xive2: Use fair irq target search algorithm
>        ppc/xive2: Fix irq preempted by lower priority group irq
>        ppc/xive2: Fix treatment of PIPR in CPPR update
>        pnv/xive2: Support ESB Escalation
>        ppc/xive2: add interrupt priority configuration flags
>        ppc/xive2: Support redistribution of group interrupts
>        ppc/xive: Add more interrupt notification tracing
>        ppc/xive2: Improve pool regs variable name
>        ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op
>        ppc/xive2: Redistribute group interrupt precluded by CPPR update
>        ppc/xive2: redistribute irqs for pool and phys ctx pull
> 
> Michael Kowal (4):
>        ppc/xive2: Remote VSDs need to match on forwarding address
>        ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>        pnv/xive2: Print value in invalid register write logging
>        pnv/xive2: Permit valid writes to VC/PC Flush Control registers
> 
> Nicholas Piggin (34):
>        ppc/xive: Fix xive trace event output
>        ppc/xive: Report access size in XIVE TM operation error logs
>        ppc/xive2: fix context push calculation of IPB priority
>        ppc/xive: Fix PHYS NSR ring matching
>        ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR
>        ppc/xive2: Set CPPR delivery should account for group priority
>        ppc/xive: tctx_notify should clear the precluded interrupt
>        ppc/xive: Explicitly zero NSR after accepting
>        ppc/xive: Move NSR decoding into helper functions
>        ppc/xive: Fix pulling pool and phys contexts
>        pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL
>        ppc/xive: Change presenter .match_nvt to match not present
>        ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt
>        ppc/xive: Add xive_tctx_pipr_present() to present new interrupt
>        ppc/xive: Fix high prio group interrupt being preempted by low prio VP
>        ppc/xive: Split xive recompute from IPB function
>        ppc/xive: tctx signaling registers rework
>        ppc/xive: tctx_accept only lower irq line if an interrupt was presented
>        ppc/xive: Add xive_tctx_pipr_set() helper function
>        ppc/xive2: split tctx presentation processing from set CPPR
>        ppc/xive2: Consolidate presentation processing in context push
>        ppc/xive2: Avoid needless interrupt re-check on CPPR set
>        ppc/xive: Assert group interrupts were redistributed
>        ppc/xive2: implement NVP context save restore for POOL ring
>        ppc/xive2: Prevent pulling of pool context losing phys interrupt
>        ppc/xive: Redistribute phys after pulling of pool context
>        ppc/xive: Check TIMA operations validity
>        ppc/xive2: Implement pool context push TIMA op
>        ppc/xive2: redistribute group interrupts on context push
>        ppc/xive2: Implement set_os_pending TIMA op
>        ppc/xive2: Implement POOL LGS push TIMA op
>        ppc/xive2: Implement PHYS ring VP push TIMA op
>        ppc/xive: Split need_resend into restore_nvp
>        ppc/xive2: Enable lower level contexts on VP push


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-07-22 11:44 ` Michael Tokarev
@ 2025-07-22 13:37   ` Cédric Le Goater
  2025-07-22 14:25     ` Michael Tokarev
  0 siblings, 1 reply; 67+ messages in thread
From: Cédric Le Goater @ 2025-07-22 13:37 UTC (permalink / raw)
  To: Michael Tokarev, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani

+ Glenn, Michael, Caleb, Gautam

On 7/22/25 13:44, Michael Tokarev wrote:
> 21.07.2025 19:21, Cédric Le Goater wrote:
> 
>> ----------------------------------------------------------------
>> ppc/xive queue:
>>
>> * Various bug fixes around lost interrupts particularly.
>> * Major group interrupt work, in particular around redistributing
>>    interrupts. Upstream group support is not in a complete or usable
>>    state as it is.
>> * Significant context push/pull improvements, particularly pool and
>>    phys context handling was quite incomplete beyond trivial OPAL
>>    case that pushes at boot.
>> * Improved tracing and checking for unimp and guest error situations.
>> * Various other missing feature support.
> 
> Is there anything in there which should be picked up for
> stable qemu branches?

Maybe the IBM simulation team can say ?
I think this would also require some testing before applying.

Which stable branch are you targeting ? 7.2 to 10.0 ?


Thanks,

C.
  



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-07-22 13:37   ` Cédric Le Goater
@ 2025-07-22 14:25     ` Michael Tokarev
  2025-08-05 16:26       ` Miles Glenn
  0 siblings, 1 reply; 67+ messages in thread
From: Michael Tokarev @ 2025-07-22 14:25 UTC (permalink / raw)
  To: Cédric Le Goater, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Glenn Miles,
	Michael Kowal, Caleb Schlossin, Gautam Menghani, qemu-stable

On 22.07.2025 16:37, Cédric Le Goater wrote:
> + Glenn, Michael, Caleb, Gautam
> 
> On 7/22/25 13:44, Michael Tokarev wrote:
>> 21.07.2025 19:21, Cédric Le Goater wrote:
>>
>>> ----------------------------------------------------------------
>>> ppc/xive queue:
>>>
>>> * Various bug fixes around lost interrupts particularly.
>>> * Major group interrupt work, in particular around redistributing
>>>    interrupts. Upstream group support is not in a complete or usable
>>>    state as it is.
>>> * Significant context push/pull improvements, particularly pool and
>>>    phys context handling was quite incomplete beyond trivial OPAL
>>>    case that pushes at boot.
>>> * Improved tracing and checking for unimp and guest error situations.
>>> * Various other missing feature support.
>>
>> Is there anything in there which should be picked up for
>> stable qemu branches?
> 
> May be the IBM simulation team can say ?
> I think this would also require some testing before applying.
> 
> Which stable branch are you targeting ? 7.2 to 10.0 ?

There are currently 2 active stable branches, 7.2 and 10.0.
Both are supposed to be long-term maintenance.  I think 7.2
can be left behind already.

Thanks,

/mjt


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-07-22 14:25     ` Michael Tokarev
@ 2025-08-05 16:26       ` Miles Glenn
  2025-08-05 16:33         ` Michael Tokarev
  2025-08-05 20:07         ` Cédric Le Goater
  0 siblings, 2 replies; 67+ messages in thread
From: Miles Glenn @ 2025-08-05 16:26 UTC (permalink / raw)
  To: Michael Tokarev, Cédric Le Goater, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
> On 22.07.2025 16:37, Cédric Le Goater wrote:
> > + Glenn, Michael, Caleb, Gautam
> > 
> > On 7/22/25 13:44, Michael Tokarev wrote:
> > > 21.07.2025 19:21, Cédric Le Goater wrote:
> > > 
> > > > ----------------------------------------------------------------
> > > > ppc/xive queue:
> > > > 
> > > > * Various bug fixes around lost interrupts particularly.
> > > > * Major group interrupt work, in particular around redistributing
> > > >    interrupts. Upstream group support is not in a complete or usable
> > > >    state as it is.
> > > > * Significant context push/pull improvements, particularly pool and
> > > >    phys context handling was quite incomplete beyond trivial OPAL
> > > >    case that pushes at boot.
> > > > * Improved tracing and checking for unimp and guest error situations.
> > > > * Various other missing feature support.
> > > 
> > > Is there anything in there which should be picked up for
> > > stable qemu branches?
> > 
> > > Maybe the IBM simulation team can say ?
> > I think this would also require some testing before applying.
> > 
> > Which stable branch are you targeting ? 7.2 to 10.0 ?
> 
> There are currently 2 active stable branches, 7.2 and 10.0.
> Both are supposed to be long-term maintenance.  I think 7.2
> can be left behind already.
> 
> Thanks,
> 
> /mjt

Michael T.,

All of the XIVE fixes/changes originating from myself were made in an
effort to get PowerVM firmware running on PowerNV with minimal testing
of OPAL firmware.  However, even with those fixes, running PowerVM on
PowerNV is still pretty unstable.  While backporting these fixes would
likely increase the stability of running PowerVM on PowerNV, I do think
it could pose significant risk to the stability of running OPAL on
PowerNV.  With that in mind, I think it's probably best if we did not
backport any of my own XIVE changes.

Nick, can you respond regarding the changes you made?

Thanks,

Glenn







^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-05 16:26       ` Miles Glenn
@ 2025-08-05 16:33         ` Michael Tokarev
  2025-08-05 20:17           ` Cédric Le Goater
  2025-08-05 20:07         ` Cédric Le Goater
  1 sibling, 1 reply; 67+ messages in thread
From: Michael Tokarev @ 2025-08-05 16:33 UTC (permalink / raw)
  To: milesg, Cédric Le Goater, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On 05.08.2025 19:26, Miles Glenn wrote:
> On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
...
>> There are currently 2 active stable branches, 7.2 and 10.0.
>> Both are supposed to be long-term maintenance.  I think 7.2
>> can be left behind already.
>>
>> Thanks,
>>
>> /mjt
> 
> Michael T.,
> 
> All of the XIVE fixes/changes originating from myself were made in an
> effort to get PowerVM firmware running on PowerNV with minimal testing
> of OPAL firmware.  However, even with those fixes, running PowerVM on
> PowerNV is still pretty unstable.  While backporting these fixes would
> likely increase the stability of running PowerVM on PowerNV, I do think
> it could pose significant risk to the stability of running OPAL on
> PowerNV.  With that in mind, I think it's probably best if we did not
> backport any of my own XIVE changes.

My view on this - given that 10.0 will most likely be a long-term
support branch - is that we can pick the PowerVM changes, and if a
breakage with the case you mentioned is found (which will hopefully be
the same breakage as on the master branch), we can pick fixes for those
too.

Especially as we have more time now, after the release of 10.1 and
before the next stable series.

So to me, breakage in a stable series is not a good thing, but we can
fix it there as well - so there is a balance to strike between known
bugs, possible breakage and future fixes.

But it's definitely your call, you know this area much better.

Thanks,

/mjt


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-05 16:26       ` Miles Glenn
  2025-08-05 16:33         ` Michael Tokarev
@ 2025-08-05 20:07         ` Cédric Le Goater
  2025-08-06 20:46           ` Miles Glenn
  1 sibling, 1 reply; 67+ messages in thread
From: Cédric Le Goater @ 2025-08-05 20:07 UTC (permalink / raw)
  To: milesg, Michael Tokarev, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On 8/5/25 18:26, Miles Glenn wrote:
> On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
>> On 22.07.2025 16:37, Cédric Le Goater wrote:
>>> + Glenn, Michael, Caleb, Gautam
>>>
>>> On 7/22/25 13:44, Michael Tokarev wrote:
>>>> 21.07.2025 19:21, Cédric Le Goater wrote:
>>>>
>>>>> ----------------------------------------------------------------
>>>>> ppc/xive queue:
>>>>>
>>>>> * Various bug fixes around lost interrupts particularly.
>>>>> * Major group interrupt work, in particular around redistributing
>>>>>     interrupts. Upstream group support is not in a complete or usable
>>>>>     state as it is.
>>>>> * Significant context push/pull improvements, particularly pool and
>>>>>     phys context handling was quite incomplete beyond trivial OPAL
>>>>>     case that pushes at boot.
>>>>> * Improved tracing and checking for unimp and guest error situations.
>>>>> * Various other missing feature support.
>>>>
>>>> Is there anything in there which should be picked up for
>>>> stable qemu branches?
>>>
>>> Maybe the IBM simulation team can say ?
>>> I think this would also require some testing before applying.
>>>
>>> Which stable branch are you targeting ? 7.2 to 10.0 ?
>>
>> There are currently 2 active stable branches, 7.2 and 10.0.
>> Both are supposed to be long-term maintenance.  I think 7.2
>> can be left behind already.
>>
>> Thanks,
>>
>> /mjt
> 
> Michael T.,
> 
> All of the XIVE fixes/changes originating from myself were made in an
> effort to get PowerVM firmware running on PowerNV with minimal testing
> of OPAL firmware.  However, even with those fixes, running PowerVM on
> PowerNV is still pretty unstable.  While backporting these fixes would
> likely increase the stability of running PowerVM on PowerNV, I do think
> it could pose significant risk to the stability of running OPAL on
> PowerNV.  With that in mind, I think it's probably best if we did not
> backport any of my own XIVE changes.

These seem to be interesting to have :

ppc/xive2: Fix treatment of PIPR in CPPR update
ppc/xive2: Fix irq preempted by lower priority group irq
ppc/xive: Fix PHYS NSR ring matching
ppc/xive2: fix context push calculation of IPB priority
ppc/xive2: Remote VSDs need to match on forwarding address
ppc/xive2: Fix calculation of END queue sizes
ppc/xive: Report access size in XIVE TM operation error logs
ppc/xive: Fix xive trace event output

?

Thanks,

C.



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-05 16:33         ` Michael Tokarev
@ 2025-08-05 20:17           ` Cédric Le Goater
  0 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-08-05 20:17 UTC (permalink / raw)
  To: Michael Tokarev, milesg, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On 8/5/25 18:33, Michael Tokarev wrote:
> On 05.08.2025 19:26, Miles Glenn wrote:
>> On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
> ...
>>> There are currently 2 active stable branches, 7.2 and 10.0.
>>> Both are supposed to be long-term maintenance.  I think 7.2
>>> can be left behind already.
>>>
>>> Thanks,
>>>
>>> /mjt
>>
>> Michael T.,
>>
>> All of the XIVE fixes/changes originating from myself were made in an
>> effort to get PowerVM firmware running on PowerNV with minimal testing
>> of OPAL firmware.  However, even with those fixes, running PowerVM on
>> PowerNV is still pretty unstable.  While backporting these fixes would
>> likely increase the stability of running PowerVM on PowerNV, I do think
>> it could pose significant risk to the stability of running OPAL on
>> PowerNV.  With that in mind, I think it's probably best if we did not
>> backport any of my own XIVE changes.
> 
> My view on this, - having in mind 10.0 most likely will be a long-term
> support branch - we can pick the PowerVM changes, and if a breakage with
> the case you mentioned is found (which will be the same breakage as with
> master branch, hopefully), we can pick fixes for these too.
> 
> Especially as we have more time now after release of 10.1 and before the
> next stable series.
> 
> So to me, breakage in stable series is not a good thing, but we can as
> well fix it there, - so there might be some balance between known bugs,
> possible breakage and future fixes.

We have a large set of functional tests for powernv, even checking
emulated nested virtualization IIRC. I still have some scripts running
16 sockets powernv machines with a bunch of pci devices to stress
emulation a bit more.

The upstream target is OPAL firmware, not PowerVM. Patches for PowerVM
may be proposed later, if deemed appropriate by the IBM simulation team.

Cheers,

C.


> But it's definitely your call, you know this area much better.
> 
> Thanks,
> 
> /mjt
> 



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-05 20:07         ` Cédric Le Goater
@ 2025-08-06 20:46           ` Miles Glenn
  2025-08-08  6:07             ` Michael Tokarev
  0 siblings, 1 reply; 67+ messages in thread
From: Miles Glenn @ 2025-08-06 20:46 UTC (permalink / raw)
  To: Cédric Le Goater, Michael Tokarev, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> On 8/5/25 18:26, Miles Glenn wrote:
> > On Tue, 2025-07-22 at 17:25 +0300, Michael Tokarev wrote:
> > > On 22.07.2025 16:37, Cédric Le Goater wrote:
> > > > + Glenn, Michael, Caleb, Gautam
> > > > 
> > > > On 7/22/25 13:44, Michael Tokarev wrote:
> > > > > 21.07.2025 19:21, Cédric Le Goater wrote:
> > > > > 
> > > > > > ----------------------------------------------------------------
> > > > > > ppc/xive queue:
> > > > > > 
> > > > > > * Various bug fixes around lost interrupts particularly.
> > > > > > * Major group interrupt work, in particular around redistributing
> > > > > >     interrupts. Upstream group support is not in a complete or usable
> > > > > >     state as it is.
> > > > > > * Significant context push/pull improvements, particularly pool and
> > > > > >     phys context handling was quite incomplete beyond trivial OPAL
> > > > > >     case that pushes at boot.
> > > > > > * Improved tracing and checking for unimp and guest error situations.
> > > > > > * Various other missing feature support.
> > > > > 
> > > > > Is there anything in there which should be picked up for
> > > > > stable qemu branches?
> > > > 
> > > > Maybe the IBM simulation team can say ?
> > > > I think this would also require some testing before applying.
> > > > 
> > > > Which stable branch are you targeting ? 7.2 to 10.0 ?
> > > 
> > > There are currently 2 active stable branches, 7.2 and 10.0.
> > > Both are supposed to be long-term maintenance.  I think 7.2
> > > can be left behind already.
> > > 
> > > Thanks,
> > > 
> > > /mjt
> > 
> > Michael T.,
> > 
> > All of the XIVE fixes/changes originating from myself were made in an
> > effort to get PowerVM firmware running on PowerNV with minimal testing
> > of OPAL firmware.  However, even with those fixes, running PowerVM on
> > PowerNV is still pretty unstable.  While backporting these fixes would
> > likely increase the stability of running PowerVM on PowerNV, I do think
> > it could pose significant risk to the stability of running OPAL on
> > PowerNV.  With that in mind, I think it's probably best if we did not
> > backport any of my own XIVE changes.
> 
> These seem to be interesting to have :
> 
> ppc/xive2: Fix treatment of PIPR in CPPR update
> ppc/xive2: Fix irq preempted by lower priority group irq
> ppc/xive: Fix PHYS NSR ring matching
> ppc/xive2: fix context push calculation of IPB priority
> ppc/xive2: Remote VSDs need to match on forwarding address
> ppc/xive2: Fix calculation of END queue sizes
> ppc/xive: Report access size in XIVE TM operation error logs
> ppc/xive: Fix xive trace event output
> 
> ?
> 
> Thanks,
> 
> C.
> 

I'm still not sure that the benefit is worth the effort, but I
certainly don't have a problem with them being backported if someone
has the desire and the time to do it.

Thanks,

Glenn





^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-06 20:46           ` Miles Glenn
@ 2025-08-08  6:07             ` Michael Tokarev
  2025-08-08  8:17               ` Cédric Le Goater
  2025-08-08 16:17               ` Miles Glenn
  0 siblings, 2 replies; 67+ messages in thread
From: Michael Tokarev @ 2025-08-08  6:07 UTC (permalink / raw)
  To: milesg, Cédric Le Goater, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On 06.08.2025 23:46, Miles Glenn wrote:
> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
...
>> These seem to be interesting to have :
>>
>> ppc/xive2: Fix treatment of PIPR in CPPR update
>> ppc/xive2: Fix irq preempted by lower priority group irq
>> ppc/xive: Fix PHYS NSR ring matching
>> ppc/xive2: fix context push calculation of IPB priority
>> ppc/xive2: Remote VSDs need to match on forwarding address
>> ppc/xive2: Fix calculation of END queue sizes
>> ppc/xive: Report access size in XIVE TM operation error logs
>> ppc/xive: Fix xive trace event output
> 
> I'm still not sure that the benefit is worth the effort, but I
> certainly don't have a problem with them being backported if someone
> has the desire and the time to do it.

I already mentioned that the 10.0 series will (hopefully) be an LTS
series.  At the very least, it is what we'll have in the upcoming Debian
stable release (trixie), which will be stable for the next 2 years.
Whether it is important to have working Power* support in Debian,
I don't know.

All the mentioned patches applied cleanly to the 10.0 branch (in
reverse order, from bottom to top), so there's no effort needed
to back-port them.  And the result passes at least the standard
qemu testsuite.  So it looks like everything works as intended.
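
For what it's worth, the backport boils down to something like this
(a sketch; the branch name follows the usual stable naming and the
commit ids are placeholders for the eight subjects quoted above):

  git checkout stable-10.0
  # pick the commits for the eight subjects quoted above, oldest first
  git cherry-pick -x <commit1> <commit2> ... <commit8>
  make -j$(nproc) && make check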

Please keep qemu-stable@ in Cc for other fixes which you think are
of interest for older/stable series of qemu.

Thanks,

/mjt


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-08  6:07             ` Michael Tokarev
@ 2025-08-08  8:17               ` Cédric Le Goater
  2025-08-08 16:37                 ` Miles Glenn
                                   ` (2 more replies)
  2025-08-08 16:17               ` Miles Glenn
  1 sibling, 3 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-08-08  8:17 UTC (permalink / raw)
  To: Michael Tokarev, milesg, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On 8/8/25 08:07, Michael Tokarev wrote:
> On 06.08.2025 23:46, Miles Glenn wrote:
>> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> ...
>>> These seem to be interesting to have :
>>>
>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>> ppc/xive2: Fix irq preempted by lower priority group irq

I added :

   ppc/xive2: Reset Generation Flipped bit on END Cache Watch

>>> ppc/xive: Fix PHYS NSR ring matching
>>> ppc/xive2: fix context push calculation of IPB priority
>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>> ppc/xive2: Fix calculation of END queue sizes
>>> ppc/xive: Report access size in XIVE TM operation error logs
>>> ppc/xive: Fix xive trace event output
>>
>> I'm still not sure that the benefit is worth the effort, but I
>> certainly don't have a problem with them being backported if someone
>> has the desire and the time to do it.
> 
> I mentioned already that 10.0 series will (hopefully) be LTS series.
> At the very least, it is what we'll have in the upcoming debian
> stable release (trixie), which will be stable for the next 2 years.
> Whenever this is important to have working Power* support in debian -
> I don't know.
> 
> All the mentioned patches applied to 10.0 branch cleanly (in the
> reverse order, from bottom to top), so there's no effort needed
> to back-port them.  And the result passes at least the standard
> qemu testsuite.  So it looks like everything works as intended.


Ubuntu 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
kernel on a PowerNV10 system defined as :

   Architecture:             ppc64le
     Byte Order:             Little Endian
   CPU(s):                   16
     On-line CPU(s) list:    0-15
   Model name:               POWER10, altivec supported
     Model:                  2.0 (pvr 0080 1200)
     Thread(s) per core:     4
     Core(s) per socket:     2
     Socket(s):              2
     Frequency boost:        enabled
     CPU(s) scaling MHz:     76%
     CPU max MHz:            3800.0000
     CPU min MHz:            2000.0000
   Caches (sum of all):
     L1d:                    128 KiB (4 instances)
     L1i:                    128 KiB (4 instances)
   NUMA:
     NUMA node(s):           2
     NUMA node0 CPU(s):      0-7
     NUMA node1 CPU(s):      8-15

with devices :

   0000:00:00.0 PCI bridge: IBM Device 0652
   0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
   0001:00:00.0 PCI bridge: IBM Device 0652
   0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
   0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
   0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
   0002:00:00.0 PCI bridge: IBM Device 0652
   ...

A rhel9 nested guest boots too.

Poweroff and reboot are fine.
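
A nested guest like the rhel9 one above can be started from inside the
PowerNV host along these lines (a sketch only, assuming KVM-HV is
available in the host kernel; image path and sizes are placeholders):

  qemu-system-ppc64 -M pseries -accel kvm -cpu host \
      -smp 2 -m 4G -nographic \
      -drive file=rhel9.qcow2,format=qcow2,if=virtio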



Michael,

I would say ship it.


Glenn, Gautam,

It would be nice to get rid of these messages.
   
   [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
   [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
   [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
   CPU 0100 Backtrace:
    S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
    S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
    S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
    S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
    S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
    --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
   [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled


Is it a modeling issue ?


Thanks,

C.






^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-08  6:07             ` Michael Tokarev
  2025-08-08  8:17               ` Cédric Le Goater
@ 2025-08-08 16:17               ` Miles Glenn
  1 sibling, 0 replies; 67+ messages in thread
From: Miles Glenn @ 2025-08-08 16:17 UTC (permalink / raw)
  To: Michael Tokarev, Cédric Le Goater, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On Fri, 2025-08-08 at 09:07 +0300, Michael Tokarev wrote:
> On 06.08.2025 23:46, Miles Glenn wrote:
> > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> ...
> > > These seem to be interesting to have :
> > > 
> > > ppc/xive2: Fix treatment of PIPR in CPPR update
> > > ppc/xive2: Fix irq preempted by lower priority group irq
> > > ppc/xive: Fix PHYS NSR ring matching
> > > ppc/xive2: fix context push calculation of IPB priority
> > > ppc/xive2: Remote VSDs need to match on forwarding address
> > > ppc/xive2: Fix calculation of END queue sizes
> > > ppc/xive: Report access size in XIVE TM operation error logs
> > > ppc/xive: Fix xive trace event output
> > 
> > I'm still not sure that the benefit is worth the effort, but I
> > certainly don't have a problem with them being backported if someone
> > has the desire and the time to do it.
> 
> I mentioned already that 10.0 series will (hopefully) be LTS series.
> At the very least, it is what we'll have in the upcoming debian
> stable release (trixie), which will be stable for the next 2 years.
> Whenever this is important to have working Power* support in debian -
> I don't know.
> 
> All the mentioned patches applied to 10.0 branch cleanly (in the
> reverse order, from bottom to top), so there's no effort needed
> to back-port them.  And the result passes at least the standard
> qemu testsuite.  So it looks like everything works as intended.
> 
> Please keep qemu-stable@ in Cc for other fixes which you think are
> of interest for older/stable series of qemu.
> 
> Thanks,
> 
> /mjt

Will do, and thanks for doing the backporting, Michael!

-Glenn



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-08  8:17               ` Cédric Le Goater
@ 2025-08-08 16:37                 ` Miles Glenn
  2025-08-12 20:38                 ` Mike Kowal
  2025-08-19 12:56                 ` Gautam Menghani
  2 siblings, 0 replies; 67+ messages in thread
From: Miles Glenn @ 2025-08-08 16:37 UTC (permalink / raw)
  To: Cédric Le Goater, Michael Tokarev, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Michael Kowal,
	Caleb Schlossin, Gautam Menghani, qemu-stable

On Fri, 2025-08-08 at 10:17 +0200, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
> > On 06.08.2025 23:46, Miles Glenn wrote:
> > > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> > ...
> > > > These seem to be interesting to have :
> > > > 
> > > > ppc/xive2: Fix treatment of PIPR in CPPR update
> > > > ppc/xive2: Fix irq preempted by lower priority group irq
> 
> I added :
> 
>    ppc/xive2: Reset Generation Flipped bit on END Cache Watch
> 
> > > > ppc/xive: Fix PHYS NSR ring matching
> > > > ppc/xive2: fix context push calculation of IPB priority
> > > > ppc/xive2: Remote VSDs need to match on forwarding address
> > > > ppc/xive2: Fix calculation of END queue sizes
> > > > ppc/xive: Report access size in XIVE TM operation error logs
> > > > ppc/xive: Fix xive trace event output
> > > 
> > > I'm still not sure that the benefit is worth the effort, but I
> > > certainly don't have a problem with them being backported if someone
> > > has the desire and the time to do it.
> > 
> > I mentioned already that 10.0 series will (hopefully) be LTS series.
> > At the very least, it is what we'll have in the upcoming debian
> > stable release (trixie), which will be stable for the next 2 years.
> > Whenever this is important to have working Power* support in debian -
> > I don't know.
> > 
> > All the mentioned patches applied to 10.0 branch cleanly (in the
> > reverse order, from bottom to top), so there's no effort needed
> > to back-port them.  And the result passes at least the standard
> > qemu testsuite.  So it looks like everything works as intended.
> 
> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
> kernel on a PowerNV10 system defined as :
> 
>    Architecture:             ppc64le
>      Byte Order:             Little Endian
>    CPU(s):                   16
>      On-line CPU(s) list:    0-15
>    Model name:               POWER10, altivec supported
>      Model:                  2.0 (pvr 0080 1200)
>      Thread(s) per core:     4
>      Core(s) per socket:     2
>      Socket(s):              2
>      Frequency boost:        enabled
>      CPU(s) scaling MHz:     76%
>      CPU max MHz:            3800.0000
>      CPU min MHz:            2000.0000
>    Caches (sum of all):
>      L1d:                    128 KiB (4 instances)
>      L1i:                    128 KiB (4 instances)
>    NUMA:
>      NUMA node(s):           2
>      NUMA node0 CPU(s):      0-7
>      NUMA node1 CPU(s):      8-15
> 
> with devices :
> 
>    0000:00:00.0 PCI bridge: IBM Device 0652
>    0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>    0001:00:00.0 PCI bridge: IBM Device 0652
>    0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>    0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>    0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>    0002:00:00.0 PCI bridge: IBM Device 0652
>    ...
> 
> A rhel9 nested guest boots too.
> 
> Poweroff and reboot are fine.
> 
> 
> 
> Michael,
> 
> I would say ship it.
> 
> 
> Glenn, Gautam,
> 
> It would nice to get rid of these messages.
>    
>    [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>    [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>    [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>    CPU 0100 Backtrace:
>     S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>     S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>     S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>     S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>     S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>     --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>    [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
> 
> 
> Is it a modeling issue ?
> 
> 
> Thanks,
> 
> C.
> 
> 
> 
> 

Thank you, Cédric!

I'm not sure what's causing that error message.  I'm assuming it wasn't
there before, which would probably mean that something (the model?)
is now enabling the PHYS CAMs at initialization or realization where we
didn't use to.

Mike Kowal, is that the expected behavior?  Can you take a look when
you have a chance?

Thanks,

Glenn



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-08  8:17               ` Cédric Le Goater
  2025-08-08 16:37                 ` Miles Glenn
@ 2025-08-12 20:38                 ` Mike Kowal
  2025-08-19 12:56                 ` Gautam Menghani
  2 siblings, 0 replies; 67+ messages in thread
From: Mike Kowal @ 2025-08-12 20:38 UTC (permalink / raw)
  To: Cédric Le Goater, Michael Tokarev, milesg, qemu-devel
  Cc: Nicholas Piggin, Daniel Henrique Barboza, Caleb Schlossin,
	Gautam Menghani, qemu-stable


On 8/8/2025 3:17 AM, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
>> On 06.08.2025 23:46, Miles Glenn wrote:
>>> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
>> ...
>>>> These seem to be interesting to have :
>>>>
>>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>>> ppc/xive2: Fix irq preempted by lower priority group irq
>
> I added :
>
>   ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>
>>>> ppc/xive: Fix PHYS NSR ring matching
>>>> ppc/xive2: fix context push calculation of IPB priority
>>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>>> ppc/xive2: Fix calculation of END queue sizes
>>>> ppc/xive: Report access size in XIVE TM operation error logs
>>>> ppc/xive: Fix xive trace event output
>>>
>>> I'm still not sure that the benefit is worth the effort, but I
>>> certainly don't have a problem with them being backported if someone
>>> has the desire and the time to do it.
>>
>> I mentioned already that 10.0 series will (hopefully) be LTS series.
>> At the very least, it is what we'll have in the upcoming debian
>> stable release (trixie), which will be stable for the next 2 years.
>> Whenever this is important to have working Power* support in debian -
>> I don't know.
>>
>> All the mentioned patches applied to 10.0 branch cleanly (in the
>> reverse order, from bottom to top), so there's no effort needed
>> to back-port them.  And the result passes at least the standard
>> qemu testsuite.  So it looks like everything works as intended.
>
>
> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
> kernel on a PowerNV10 system defined as :
>
>   Architecture:             ppc64le
>     Byte Order:             Little Endian
>   CPU(s):                   16
>     On-line CPU(s) list:    0-15
>   Model name:               POWER10, altivec supported
>     Model:                  2.0 (pvr 0080 1200)
>     Thread(s) per core:     4
>     Core(s) per socket:     2
>     Socket(s):              2
>     Frequency boost:        enabled
>     CPU(s) scaling MHz:     76%
>     CPU max MHz:            3800.0000
>     CPU min MHz:            2000.0000
>   Caches (sum of all):
>     L1d:                    128 KiB (4 instances)
>     L1i:                    128 KiB (4 instances)
>   NUMA:
>     NUMA node(s):           2
>     NUMA node0 CPU(s):      0-7
>     NUMA node1 CPU(s):      8-15
>
> with devices :
>
>   0000:00:00.0 PCI bridge: IBM Device 0652
>   0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>   0001:00:00.0 PCI bridge: IBM Device 0652
>   0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>   0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>   0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>   0002:00:00.0 PCI bridge: IBM Device 0652
>   ...
>
> A rhel9 nested guest boots too.
>
> Poweroff and reboot are fine.
>
>
>
> Michael,
>
> I would say ship it.
>
>
> Glenn, Gautam,
>
> It would nice to get rid of these messages.
>     [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>   [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>   [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>   CPU 0100 Backtrace:
>    S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>    S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>    S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>    S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>    S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>    --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>   [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
>
>
> Is it a modeling issue ?

I do not think it is a modeling issue.  We do not get any warning or
error messages when booting Linux on PowerVM.  Note that "[PATCH 43/50]
ppc/xive: Check TIMA operations validity" added some warning logs.  The
problem is that the context is 'hardware owned' since it is already
pushed/enabled.

MAK

>
>
> Thanks,
>
> C.
>
>
>
>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-08  8:17               ` Cédric Le Goater
  2025-08-08 16:37                 ` Miles Glenn
  2025-08-12 20:38                 ` Mike Kowal
@ 2025-08-19 12:56                 ` Gautam Menghani
  2025-09-01  6:23                   ` Cédric Le Goater
  2 siblings, 1 reply; 67+ messages in thread
From: Gautam Menghani @ 2025-08-19 12:56 UTC (permalink / raw)
  To: Cédric Le Goater
  Cc: Michael Tokarev, milesg, qemu-devel, Nicholas Piggin,
	Daniel Henrique Barboza, Michael Kowal, Caleb Schlossin,
	qemu-stable

On Fri, Aug 08, 2025 at 10:17:24AM +0200, Cédric Le Goater wrote:
> On 8/8/25 08:07, Michael Tokarev wrote:
> > On 06.08.2025 23:46, Miles Glenn wrote:
> > > On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
> > ...
> > > > These seem to be interesting to have :
> > > > 
> > > > ppc/xive2: Fix treatment of PIPR in CPPR update
> > > > ppc/xive2: Fix irq preempted by lower priority group irq
> 
> I added :
> 
>   ppc/xive2: Reset Generation Flipped bit on END Cache Watch
> 
> > > > ppc/xive: Fix PHYS NSR ring matching
> > > > ppc/xive2: fix context push calculation of IPB priority
> > > > ppc/xive2: Remote VSDs need to match on forwarding address
> > > > ppc/xive2: Fix calculation of END queue sizes
> > > > ppc/xive: Report access size in XIVE TM operation error logs
> > > > ppc/xive: Fix xive trace event output
> > > 
> > > I'm still not sure that the benefit is worth the effort, but I
> > > certainly don't have a problem with them being backported if someone
> > > has the desire and the time to do it.
> > 
> > I mentioned already that 10.0 series will (hopefully) be LTS series.
> > At the very least, it is what we'll have in the upcoming debian
> > stable release (trixie), which will be stable for the next 2 years.
> > Whenever this is important to have working Power* support in debian -
> > I don't know.
> > 
> > All the mentioned patches applied to 10.0 branch cleanly (in the
> > reverse order, from bottom to top), so there's no effort needed
> > to back-port them.  And the result passes at least the standard
> > qemu testsuite.  So it looks like everything works as intended.
> 
> 
> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
> kernel on a PowerNV10 system defined as :
> 
>   Architecture:             ppc64le
>     Byte Order:             Little Endian
>   CPU(s):                   16
>     On-line CPU(s) list:    0-15
>   Model name:               POWER10, altivec supported
>     Model:                  2.0 (pvr 0080 1200)
>     Thread(s) per core:     4
>     Core(s) per socket:     2
>     Socket(s):              2
>     Frequency boost:        enabled
>     CPU(s) scaling MHz:     76%
>     CPU max MHz:            3800.0000
>     CPU min MHz:            2000.0000
>   Caches (sum of all):
>     L1d:                    128 KiB (4 instances)
>     L1i:                    128 KiB (4 instances)
>   NUMA:
>     NUMA node(s):           2
>     NUMA node0 CPU(s):      0-7
>     NUMA node1 CPU(s):      8-15
> 
> with devices :
> 
>   0000:00:00.0 PCI bridge: IBM Device 0652
>   0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>   0001:00:00.0 PCI bridge: IBM Device 0652
>   0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>   0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>   0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>   0002:00:00.0 PCI bridge: IBM Device 0652
>   ...
> 
> A rhel9 nested guest boots too.
> 
> Poweroff and reboot are fine.
> 
> 
> 
> Michael,
> 
> I would say ship it.
> 
> 
> Glenn, Gautam,
> 
> It would nice to get rid of these messages.
>   [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>   [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>   [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>   CPU 0100 Backtrace:
>    S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>    S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>    S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>    S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>    S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>    --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>   [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
> 

Hi Cedric,

I'm not able to repro this with the latest QEMU master (commit
5836af078321).

My command line is:

$ cat run.sh
#!/bin/bash

./build/qemu-system-ppc64 \
	-smp 16,sockets=2,cores=2,threads=4 \
	-kernel vmlinux \
	-initrd initrd.img \
	-append 'root=LABEL=cloudimg-rootfs ro console=hvc0 earlyprintk' \
	-drive file=/home/gautam/images/noble-server-cloudimg-ppc64el.img,format=qcow2,if=none,id=drive0,format=qcow2,cache=none -device nvme,bus=pcie.0,addr=0x0,drive=drive0,serial=1234 \
	-M powernv10  -netdev user,id=net0,hostfwd=tcp::2223-:22 -device e1000e,netdev=net0,bus=pcie.1 -nographic


Can you please share your command line with which you got the above
warnings?

Thanks,
Gautam

> 
> Is it a modeling issue ?
> 
> 
> Thanks,
> 
> C.
> 
> 
> 
> 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PULL 00/50] ppc queue
  2025-08-19 12:56                 ` Gautam Menghani
@ 2025-09-01  6:23                   ` Cédric Le Goater
  0 siblings, 0 replies; 67+ messages in thread
From: Cédric Le Goater @ 2025-09-01  6:23 UTC (permalink / raw)
  To: Gautam Menghani
  Cc: Michael Tokarev, milesg, qemu-devel, Nicholas Piggin,
	Daniel Henrique Barboza, Michael Kowal, Caleb Schlossin,
	qemu-stable

Hello,

On 8/19/25 14:56, Gautam Menghani wrote:
> On Fri, Aug 08, 2025 at 10:17:24AM +0200, Cédric Le Goater wrote:
>> On 8/8/25 08:07, Michael Tokarev wrote:
>>> On 06.08.2025 23:46, Miles Glenn wrote:
>>>> On Tue, 2025-08-05 at 22:07 +0200, Cédric Le Goater wrote:
>>> ...
>>>>> These seem to be interesting to have :
>>>>>
>>>>> ppc/xive2: Fix treatment of PIPR in CPPR update
>>>>> ppc/xive2: Fix irq preempted by lower priority group irq
>>
>> I added :
>>
>>    ppc/xive2: Reset Generation Flipped bit on END Cache Watch
>>
>>>>> ppc/xive: Fix PHYS NSR ring matching
>>>>> ppc/xive2: fix context push calculation of IPB priority
>>>>> ppc/xive2: Remote VSDs need to match on forwarding address
>>>>> ppc/xive2: Fix calculation of END queue sizes
>>>>> ppc/xive: Report access size in XIVE TM operation error logs
>>>>> ppc/xive: Fix xive trace event output
>>>>
>>>> I'm still not sure that the benefit is worth the effort, but I
>>>> certainly don't have a problem with them being backported if someone
>>>> has the desire and the time to do it.
>>>
>>> I mentioned already that 10.0 series will (hopefully) be LTS series.
>>> At the very least, it is what we'll have in the upcoming debian
>>> stable release (trixie), which will be stable for the next 2 years.
>>> Whenever this is important to have working Power* support in debian -
>>> I don't know.
>>>
>>> All the mentioned patches applied to 10.0 branch cleanly (in the
>>> reverse order, from bottom to top), so there's no effort needed
>>> to back-port them.  And the result passes at least the standard
>>> qemu testsuite.  So it looks like everything works as intended.
>>
>>
>> 24.04 operates correctly with a "6.14.0-27-generic #27~24.04.1-Ubuntu"
>> kernel on a PowerNV10 system defined as :
>>
>>    Architecture:             ppc64le
>>      Byte Order:             Little Endian
>>    CPU(s):                   16
>>      On-line CPU(s) list:    0-15
>>    Model name:               POWER10, altivec supported
>>      Model:                  2.0 (pvr 0080 1200)
>>      Thread(s) per core:     4
>>      Core(s) per socket:     2
>>      Socket(s):              2
>>      Frequency boost:        enabled
>>      CPU(s) scaling MHz:     76%
>>      CPU max MHz:            3800.0000
>>      CPU min MHz:            2000.0000
>>    Caches (sum of all):
>>      L1d:                    128 KiB (4 instances)
>>      L1i:                    128 KiB (4 instances)
>>    NUMA:
>>      NUMA node(s):           2
>>      NUMA node0 CPU(s):      0-7
>>      NUMA node1 CPU(s):      8-15
>>
>> with devices :
>>
>>    0000:00:00.0 PCI bridge: IBM Device 0652
>>    0000:01:00.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02)
>>    0001:00:00.0 PCI bridge: IBM Device 0652
>>    0001:01:00.0 PCI bridge: Red Hat, Inc. Device 000e
>>    0001:02:02.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
>>    0001:02:03.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
>>    0002:00:00.0 PCI bridge: IBM Device 0652
>>    ...
>>
>> A rhel9 nested guest boots too.
>>
>> Poweroff and reboot are fine.
>>
>>
>>
>> Michael,
>>
>> I would say ship it.
>>
>>
>> Glenn, Gautam,
>>
>> It would nice to get rid of these messages.
>>    [    0.000000] NR_IRQS: 512, nr_irqs: 512, preallocated irqs: 16
>>    [    2.270794918,5] XIVE: [ IC 00  ] Resetting one xive...
>>    [    2.271575295,3] XIVE: [CPU 0000] Error enabling PHYS CAM already enabled
>>    CPU 0100 Backtrace:
>>     S: 0000000032413a20 R: 0000000030021408   .backtrace+0x40
>>     S: 0000000032413ad0 R: 000000003008427c   .xive2_tima_enable_phys+0x40
>>     S: 0000000032413b50 R: 0000000030087430   .__xive_reset.constprop.0.isra.0+0x520
>>     S: 0000000032413c90 R: 0000000030087638   .opal_xive_reset+0x78
>>     S: 0000000032413d10 R: 00000000300038bc   opal_entry+0x14c
>>     --- OPAL call token: 0x80 caller R1: 0xc0000000014bbc90 ---
>>    [    2.273581201,3] XIVE: [CPU 0001] Error enabling PHYS CAM already enabled
>>
> 
> Hi Cedric,
> 
> I'm not able to repro this with the latest QEMU master (commit
> 5836af078321).
> 
> My command line is:
> 
> $ cat run.sh
> #!/bin/bash
> 
> ./build/qemu-system-ppc64 \
> 	-smp 16,sockets=2,cores=2,threads=4 \
> 	-kernel vmlinux \
> 	-initrd initrd.img \
> 	-append 'root=LABEL=cloudimg-rootfs ro console=hvc0 earlyprintk' \
> 	-drive file=/home/gautam/images/noble-server-cloudimg-ppc64el.img,format=qcow2,if=none,id=drive0,format=qcow2,cache=none -device nvme,bus=pcie.0,addr=0x0,drive=drive0,serial=1234 \
> 	-M powernv10  -netdev user,id=net0,hostfwd=tcp::2223-:22 -device e1000e,netdev=net0,bus=pcie.1 -nographic
> 
> 
> Can you please share your command line with which you got the above
> warnings?

It's the same as yours.

The issue seems to be with OPAL. OPAL v7.1-106-g785a5e307 (shipped with QEMU)
is OK. Latest OPAL v7.1-133-gd365a01a0996 is not.
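
To compare the two firmware levels, something along these lines works
(a sketch, assuming a powerpc64le cross toolchain is installed; adjust
CROSS= to the local prefix, and keep the rest of the command line as is):

  git clone https://github.com/open-power/skiboot && cd skiboot
  git checkout 785a5e307        # v7.1-106, shipped with QEMU: no warnings
  # git checkout d365a01a0996   # v7.1-133: shows the PHYS CAM warnings
  make -j$(nproc) CROSS=powerpc64le-linux-gnu-
  qemu-system-ppc64 -M powernv10 -bios skiboot.lid ...

From there, a git bisect between the two levels should point at the
offending skiboot commit.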


Thanks,

C.



^ permalink raw reply	[flat|nested] 67+ messages in thread

end of thread, other threads:[~2025-09-01  6:25 UTC | newest]

Thread overview: 67+ messages
2025-07-21 16:21 [PULL 00/50] ppc queue Cédric Le Goater
2025-07-21 16:21 ` [PULL 01/50] ppc/xive: Fix xive trace event output Cédric Le Goater
2025-07-21 16:21 ` [PULL 02/50] ppc/xive: Report access size in XIVE TM operation error logs Cédric Le Goater
2025-07-21 16:21 ` [PULL 03/50] ppc/xive2: Fix calculation of END queue sizes Cédric Le Goater
2025-07-21 16:21 ` [PULL 04/50] ppc/xive2: Remote VSDs need to match on forwarding address Cédric Le Goater
2025-07-21 16:21 ` [PULL 05/50] ppc/xive2: fix context push calculation of IPB priority Cédric Le Goater
2025-07-21 16:21 ` [PULL 06/50] ppc/xive: Fix PHYS NSR ring matching Cédric Le Goater
2025-07-21 16:21 ` [PULL 07/50] ppc/xive2: Reset Generation Flipped bit on END Cache Watch Cédric Le Goater
2025-07-21 16:21 ` [PULL 08/50] ppc/xive2: Use fair irq target search algorithm Cédric Le Goater
2025-07-21 16:21 ` [PULL 09/50] ppc/xive2: Fix irq preempted by lower priority group irq Cédric Le Goater
2025-07-21 16:21 ` [PULL 10/50] ppc/xive2: Fix treatment of PIPR in CPPR update Cédric Le Goater
2025-07-21 16:21 ` [PULL 11/50] ppc/xive2: Do not present group interrupt on OS-push if precluded by CPPR Cédric Le Goater
2025-07-21 16:21 ` [PULL 12/50] ppc/xive2: Set CPPR delivery should account for group priority Cédric Le Goater
2025-07-21 16:21 ` [PULL 13/50] ppc/xive: tctx_notify should clear the precluded interrupt Cédric Le Goater
2025-07-21 16:21 ` [PULL 14/50] ppc/xive: Explicitly zero NSR after accepting Cédric Le Goater
2025-07-21 16:21 ` [PULL 15/50] ppc/xive: Move NSR decoding into helper functions Cédric Le Goater
2025-07-21 16:21 ` [PULL 16/50] ppc/xive: Fix pulling pool and phys contexts Cédric Le Goater
2025-07-21 16:22 ` [PULL 17/50] pnv/xive2: Support ESB Escalation Cédric Le Goater
2025-07-21 16:22 ` [PULL 18/50] pnv/xive2: Print value in invalid register write logging Cédric Le Goater
2025-07-21 16:22 ` [PULL 19/50] pnv/xive2: VC_ENDC_WATCH_SPEC regs should read back WATCH_FULL Cédric Le Goater
2025-07-21 16:22 ` [PULL 20/50] pnv/xive2: Permit valid writes to VC/PC Flush Control registers Cédric Le Goater
2025-07-21 16:22 ` [PULL 21/50] ppc/xive2: add interrupt priority configuration flags Cédric Le Goater
2025-07-21 16:22 ` [PULL 22/50] ppc/xive2: Support redistribution of group interrupts Cédric Le Goater
2025-07-21 16:22 ` [PULL 23/50] ppc/xive: Add more interrupt notification tracing Cédric Le Goater
2025-07-21 16:22 ` [PULL 24/50] ppc/xive2: Improve pool regs variable name Cédric Le Goater
2025-07-21 16:22 ` [PULL 25/50] ppc/xive2: Implement "Ack OS IRQ to even report line" TIMA op Cédric Le Goater
2025-07-21 16:22 ` [PULL 26/50] ppc/xive2: Redistribute group interrupt precluded by CPPR update Cédric Le Goater
2025-07-21 16:22 ` [PULL 27/50] ppc/xive2: redistribute irqs for pool and phys ctx pull Cédric Le Goater
2025-07-21 16:22 ` [PULL 28/50] ppc/xive: Change presenter .match_nvt to match not present Cédric Le Goater
2025-07-21 16:22 ` [PULL 29/50] ppc/xive2: Redistribute group interrupt preempted by higher priority interrupt Cédric Le Goater
2025-07-21 16:22 ` [PULL 30/50] ppc/xive: Add xive_tctx_pipr_present() to present new interrupt Cédric Le Goater
2025-07-21 16:22 ` [PULL 31/50] ppc/xive: Fix high prio group interrupt being preempted by low prio VP Cédric Le Goater
2025-07-21 16:22 ` [PULL 32/50] ppc/xive: Split xive recompute from IPB function Cédric Le Goater
2025-07-21 16:22 ` [PULL 33/50] ppc/xive: tctx signaling registers rework Cédric Le Goater
2025-07-21 16:22 ` [PULL 34/50] ppc/xive: tctx_accept only lower irq line if an interrupt was presented Cédric Le Goater
2025-07-21 16:22 ` [PULL 35/50] ppc/xive: Add xive_tctx_pipr_set() helper function Cédric Le Goater
2025-07-21 16:22 ` [PULL 36/50] ppc/xive2: split tctx presentation processing from set CPPR Cédric Le Goater
2025-07-21 16:22 ` [PULL 37/50] ppc/xive2: Consolidate presentation processing in context push Cédric Le Goater
2025-07-21 16:22 ` [PULL 38/50] ppc/xive2: Avoid needless interrupt re-check on CPPR set Cédric Le Goater
2025-07-21 16:22 ` [PULL 39/50] ppc/xive: Assert group interrupts were redistributed Cédric Le Goater
2025-07-21 16:22 ` [PULL 40/50] ppc/xive2: implement NVP context save restore for POOL ring Cédric Le Goater
2025-07-21 16:22 ` [PULL 41/50] ppc/xive2: Prevent pulling of pool context losing phys interrupt Cédric Le Goater
2025-07-21 16:22 ` [PULL 42/50] ppc/xive: Redistribute phys after pulling of pool context Cédric Le Goater
2025-07-21 16:22 ` [PULL 43/50] ppc/xive: Check TIMA operations validity Cédric Le Goater
2025-07-21 16:22 ` [PULL 44/50] ppc/xive2: Implement pool context push TIMA op Cédric Le Goater
2025-07-21 16:22 ` [PULL 45/50] ppc/xive2: redistribute group interrupts on context push Cédric Le Goater
2025-07-21 16:22 ` [PULL 46/50] ppc/xive2: Implement set_os_pending TIMA op Cédric Le Goater
2025-07-21 16:22 ` [PULL 47/50] ppc/xive2: Implement POOL LGS push " Cédric Le Goater
2025-07-21 16:22 ` [PULL 48/50] ppc/xive2: Implement PHYS ring VP " Cédric Le Goater
2025-07-21 16:22 ` [PULL 49/50] ppc/xive: Split need_resend into restore_nvp Cédric Le Goater
2025-07-21 16:22 ` [PULL 50/50] ppc/xive2: Enable lower level contexts on VP push Cédric Le Goater
2025-07-22 11:20 ` [PULL 00/50] ppc queue Stefan Hajnoczi
2025-07-22 11:44 ` Michael Tokarev
2025-07-22 13:37   ` Cédric Le Goater
2025-07-22 14:25     ` Michael Tokarev
2025-08-05 16:26       ` Miles Glenn
2025-08-05 16:33         ` Michael Tokarev
2025-08-05 20:17           ` Cédric Le Goater
2025-08-05 20:07         ` Cédric Le Goater
2025-08-06 20:46           ` Miles Glenn
2025-08-08  6:07             ` Michael Tokarev
2025-08-08  8:17               ` Cédric Le Goater
2025-08-08 16:37                 ` Miles Glenn
2025-08-12 20:38                 ` Mike Kowal
2025-08-19 12:56                 ` Gautam Menghani
2025-09-01  6:23                   ` Cédric Le Goater
2025-08-08 16:17               ` Miles Glenn
