* [PATCH 01/10] lib: irqchip: add S-mode notification helpers
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 02/10] lib: sbi: domain: adaptation for supporting VIRQ couriering domain context switch Raymond Mao
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Add irqchip helpers to set and clear the S-mode notification
(SEIP-based). In addition, expose a get API to read the current
notification state so that upper layers can implement edge-triggered
notification.
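The intended edge-triggered usage can be sketched as a standalone model
(the `sim_mip` variable and `notify_*` names here are illustrative
stand-ins; the real helpers below operate on the mip CSR via
csr_set/csr_clear/csr_read):

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated mip CSR for illustration only; real code accesses CSR_MIP. */
#define MIP_SEIP (1UL << 9)
static unsigned long sim_mip;

static int notify_smode_set(void)    { sim_mip |= MIP_SEIP; return 0; }
static void notify_smode_clear(void) { sim_mip &= ~MIP_SEIP; }
static bool notify_smode_get(void)   { return (sim_mip & MIP_SEIP) != 0; }

/* Edge-triggered notify: assert SEIP only when it is not already pending. */
static bool notify_edge(void)
{
	if (notify_smode_get())
		return false;	/* already pending: no new edge */
	notify_smode_set();
	return true;
}
```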
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi/sbi_irqchip.h | 24 ++++++++++++++++++++++++
lib/sbi/sbi_irqchip.c | 18 +++++++++++++++++-
2 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/include/sbi/sbi_irqchip.h b/include/sbi/sbi_irqchip.h
index 77b54110..7f23615a 100644
--- a/include/sbi/sbi_irqchip.h
+++ b/include/sbi/sbi_irqchip.h
@@ -110,4 +110,28 @@ int sbi_irqchip_init(struct sbi_scratch *scratch, bool cold_boot);
/** Exit interrupt controllers */
void sbi_irqchip_exit(struct sbi_scratch *scratch);
+/**
+ * Notify S-mode for a pending virtual interrupt on this hart.
+ *
+ * The irqchip layer abstracts the notification mechanism; on platforms that
+ * use SEIP, this sets mip.SEIP.
+ */
+int sbi_irqchip_notify_smode_set(void);
+
+/**
+ * Clear S-mode notification for virtual interrupts on this hart.
+ *
+ * The irqchip layer abstracts the notification mechanism; on platforms that
+ * use SEIP, this clears mip.SEIP.
+ */
+void sbi_irqchip_notify_smode_clear(void);
+
+/**
+ * Read S-mode notification state for virtual interrupts on this hart.
+ *
+ * The irqchip layer abstracts the notification mechanism; on platforms that
+ * use SEIP, this reads mip.SEIP.
+ */
+bool sbi_irqchip_notify_smode_get(void);
+
#endif
diff --git a/lib/sbi/sbi_irqchip.c b/lib/sbi/sbi_irqchip.c
index f8599fa6..e022d534 100644
--- a/lib/sbi/sbi_irqchip.c
+++ b/lib/sbi/sbi_irqchip.c
@@ -122,7 +122,7 @@ int sbi_irqchip_raw_handler_default(struct sbi_irqchip_device *chip, u32 hwirq)
sbi_printf("[IRQCHIP] Calling hwirq %u raw handler callback\n", hwirq);
rc = h->callback(hwirq, h->priv);
- if (chip->hwirq_eoi) {
+ if (chip->hwirq_eoi && rc != SBI_EALREADY) {
sbi_printf("[IRQCHIP] Calling EOI of hwirq %u\n", hwirq);
chip->hwirq_eoi(chip, hwirq);
}
@@ -320,3 +320,19 @@ void sbi_irqchip_exit(struct sbi_scratch *scratch)
if (hd && hd->chip && hd->chip->process_hwirqs)
csr_clear(CSR_MIE, MIP_MEIP);
}
+
+int sbi_irqchip_notify_smode_set(void)
+{
+ csr_set(CSR_MIP, MIP_SEIP);
+ return 0;
+}
+
+void sbi_irqchip_notify_smode_clear(void)
+{
+ csr_clear(CSR_MIP, MIP_SEIP);
+}
+
+bool sbi_irqchip_notify_smode_get(void)
+{
+ return !!(csr_read(CSR_MIP) & MIP_SEIP);
+}
--
2.25.1
--
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi
* [PATCH 02/10] lib: sbi: domain: adaptation for supporting VIRQ couriering domain context switch
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
2026-05-14 22:57 ` [PATCH 01/10] lib: irqchip: add S-mode notification helpers Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 03/10] lib: sbi: Add Virtual IRQ (VIRQ) subsystem Raymond Mao
` (7 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Prerequisite adaptations for introducing VIRQ-couriering domain
context switches.
MIDELEG:
- Save/restore MIDELEG in domain contexts and initialize per-domain
defaults.
SEIP notification:
- Add a virq_seip_notify flag, and use it to enable SEIP delegation
for SEIP-notify domains.
- Add smode_notify_pending and a helper function to record the S-mode
notification status, and use it as a flag to fire the pending
notification after a domain context switch.
Return domain context switch:
- Add sbi_domain_context_exit_to_prev() to return to the previous
domain without scanning for another candidate.
- Introduce per-hart deferred return flags and APIs to request/consume
them.
- Perform the actual return-to-prev at the end of sbi_trap_handler()
to avoid corrupting mepc during ecall handling.
Additionally, fix two potential issues in the return domain switch:
- When a domain switch occurs inside sbi_trap_handler(), return the
switched-to trap context from scratch instead of the original trap
entry context. This prevents the trap restore path from resuming with
stale state from the previous domain.
- When returning to an S-mode target domain, copy the restored
CSR_SSTATUS SIE/SPIE/SPP bits into trap_ctx->regs.mstatus. The final
trap exit writes CSR_MSTATUS from the trap context, so stale mstatus
bits can otherwise clear S-mode interrupt enable and leave a pending
SEIP undelivered.
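The per-hart deferred-return flags described above amount to a
request/test-and-clear bitmask. A minimal standalone model (names are
illustrative; the real code indexes by current_hartindex() and the mask
layout is an assumption of this sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the per-hart deferred return-to-prev flags. */
static unsigned long defer_return_mask;

static void request_return_to_prev(unsigned int hartindex)
{
	defer_return_mask |= (1UL << hartindex);
}

/* Test-and-clear: reports true at most once per request, so the
 * trap-exit path performs the deferred return exactly once. */
static bool need_return_to_prev(unsigned int hartindex)
{
	bool need = (defer_return_mask & (1UL << hartindex)) != 0;

	if (need)
		defer_return_mask &= ~(1UL << hartindex);
	return need;
}
```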
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi/sbi_domain.h | 2 +
include/sbi/sbi_domain_context.h | 24 +++++
lib/sbi/sbi_domain_context.c | 152 ++++++++++++++++++++++++++++++-
lib/sbi/sbi_trap.c | 16 ++++
4 files changed, 192 insertions(+), 2 deletions(-)
diff --git a/include/sbi/sbi_domain.h b/include/sbi/sbi_domain.h
index 16edd4ce..c507023c 100644
--- a/include/sbi/sbi_domain.h
+++ b/include/sbi/sbi_domain.h
@@ -217,6 +217,8 @@ struct sbi_domain {
bool system_suspend_allowed;
/** Identifies whether to include the firmware region */
bool fw_region_inited;
+ /** Whether to notify S-mode for VIRQ couriering */
+ bool virq_seip_notify;
};
/** The root domain instance */
diff --git a/include/sbi/sbi_domain_context.h b/include/sbi/sbi_domain_context.h
index 31a3a7f8..88450fcb 100644
--- a/include/sbi/sbi_domain_context.h
+++ b/include/sbi/sbi_domain_context.h
@@ -28,6 +28,30 @@ int sbi_domain_context_enter(struct sbi_domain *dom);
*/
int sbi_domain_context_exit(void);
+/**
+ * Exit the current domain context and return to the previous context
+ * if one exists. This will not attempt to start other domains.
+ *
+ * @return 0 on success and negative error code on failure
+ */
+int sbi_domain_context_exit_to_prev(void);
+
+void sbi_domain_context_request_return_to_prev(void);
+bool sbi_domain_context_need_return_to_prev(void);
+void sbi_domain_context_mark_switched(void);
+bool sbi_domain_context_consume_switched(void);
+
+/**
+ * Mark a pending S-mode notification for a target domain context.
+ *
+ * @param dom pointer to domain
+ * @param hartindex hart index
+ *
+ * @return true if notification was already pending, false otherwise
+ */
+bool sbi_domain_context_pending_notify_smode(struct sbi_domain *dom,
+ u32 hartindex);
+
/**
* Initialize domain context support
*
diff --git a/lib/sbi/sbi_domain_context.c b/lib/sbi/sbi_domain_context.c
index 158f4990..ee84b2f1 100644
--- a/lib/sbi/sbi_domain_context.c
+++ b/lib/sbi/sbi_domain_context.c
@@ -12,6 +12,7 @@
#include <sbi/sbi_hart.h>
#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
+#include <sbi/sbi_irqchip.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
#include <sbi/sbi_domain.h>
@@ -42,6 +43,8 @@ struct hart_context {
unsigned long sip;
/** Supervisor address translation and protection register */
unsigned long satp;
+ /** Machine interrupt delegation register */
+ unsigned long mideleg;
/** Counter-enable register */
unsigned long scounteren;
/** Supervisor environment configuration register */
@@ -55,9 +58,13 @@ struct hart_context {
struct hart_context *prev_ctx;
/** Is context initialized and runnable */
bool initialized;
+ /** Pending S-mode notification to deliver after switch */
+ bool smode_notify_pending;
};
static struct sbi_domain_data dcpriv;
+static unsigned long sbi_domain_defer_return_mask;
+static unsigned long sbi_domain_switched_mask;
static inline struct hart_context *hart_context_get(struct sbi_domain *dom,
u32 hartindex)
@@ -126,16 +133,32 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
sbi_hart_protection_unconfigure(scratch);
sbi_hart_protection_configure(scratch);
- /* Save current CSR context and restore target domain's CSR context */
+ /*
+ * Save current CSR context and restore target domain's CSR context.
+ *
+ * If the trap came from S-mode (MPP=S), MEPC holds the S-mode return
+ * point. In that case, save MEPC as the SEPC for the current domain
+ * so returning resumes correctly after a VIRQ-driven domain switch.
+ */
ctx->sstatus = csr_swap(CSR_SSTATUS, dom_ctx->sstatus);
ctx->sie = csr_swap(CSR_SIE, dom_ctx->sie);
ctx->stvec = csr_swap(CSR_STVEC, dom_ctx->stvec);
ctx->sscratch = csr_swap(CSR_SSCRATCH, dom_ctx->sscratch);
- ctx->sepc = csr_swap(CSR_SEPC, dom_ctx->sepc);
+ {
+ unsigned long cur_sepc = csr_read(CSR_SEPC);
+
+ if (((csr_read(CSR_MSTATUS) & MSTATUS_MPP) >>
+ MSTATUS_MPP_SHIFT) == PRV_S)
+ cur_sepc = csr_read(CSR_MEPC);
+ ctx->sepc = cur_sepc;
+ csr_write(CSR_SEPC, dom_ctx->sepc);
+ }
ctx->scause = csr_swap(CSR_SCAUSE, dom_ctx->scause);
ctx->stval = csr_swap(CSR_STVAL, dom_ctx->stval);
ctx->sip = csr_swap(CSR_SIP, dom_ctx->sip);
ctx->satp = csr_swap(CSR_SATP, dom_ctx->satp);
+ if (misa_extension('S'))
+ ctx->mideleg = csr_swap(CSR_MIDELEG, dom_ctx->mideleg);
if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_10)
ctx->scounteren = csr_swap(CSR_SCOUNTEREN, dom_ctx->scounteren);
if (sbi_hart_priv_version(scratch) >= SBI_HART_PRIV_VER_1_12)
@@ -146,7 +169,38 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
/* Save current trap state and restore target domain's trap state */
trap_ctx = sbi_trap_get_context(scratch);
sbi_memcpy(&ctx->trap_ctx, trap_ctx, sizeof(*trap_ctx));
+ if (((csr_read(CSR_MSTATUS) & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT) ==
+ PRV_S) {
+ /* Preserve S-mode return PC in the saved trap context */
+ ctx->trap_ctx.regs.mepc = ctx->sepc;
+ }
+ /* Ensure M-mode trap context fields are refreshed */
+ ctx->trap_ctx.regs.mepc = csr_read(CSR_MEPC);
+ ctx->trap_ctx.regs.mstatus = csr_read(CSR_MSTATUS);
sbi_memcpy(trap_ctx, &dom_ctx->trap_ctx, sizeof(*trap_ctx));
+ if (((csr_read(CSR_MSTATUS) & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT) ==
+ PRV_S) {
+ /* Ensure target trap context returns to its S-mode PC */
+ trap_ctx->regs.mepc = dom_ctx->sepc;
+ }
+ if (target_dom->next_mode == PRV_S) {
+ trap_ctx->regs.mstatus &= ~MSTATUS_MPP;
+ trap_ctx->regs.mstatus |= (PRV_S << MSTATUS_MPP_SHIFT);
+ trap_ctx->regs.mstatus &= ~(SSTATUS_SIE | SSTATUS_SPIE |
+ SSTATUS_SPP);
+ trap_ctx->regs.mstatus |= csr_read(CSR_SSTATUS) &
+ (SSTATUS_SIE | SSTATUS_SPIE |
+ SSTATUS_SPP);
+ }
+ /* Keep CSR_MEPC aligned with the active trap context */
+ csr_write(CSR_MEPC, trap_ctx->regs.mepc);
+
+ /* Deliver pending S-mode notification after switching context */
+ if (dom_ctx->smode_notify_pending) {
+ if (!sbi_irqchip_notify_smode_get())
+ sbi_irqchip_notify_smode_set();
+ dom_ctx->smode_notify_pending = false;
+ }
/* Mark current context structure initialized because context saved */
ctx->initialized = true;
@@ -163,6 +217,7 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
else
sbi_hsm_hart_stop(scratch, true);
}
+ sbi_domain_context_mark_switched();
return 0;
}
@@ -182,6 +237,19 @@ static int hart_context_init(u32 hartindex)
/* Bind context and domain */
ctx->dom = dom;
+ /*
+ * Default MIDELEG policy: root domain keeps SEI delegated;
+ * non-root domains keep SEI delegated only when VIRQ uses
+ * mip.SEIP for notification.
+ */
+ if (misa_extension('S')) {
+ unsigned long mideleg = csr_read(CSR_MIDELEG);
+
+ if (dom == &root || dom->virq_seip_notify)
+ ctx->mideleg = mideleg | MIP_SEIP;
+ else
+ ctx->mideleg = mideleg & ~MIP_SEIP;
+ }
hart_context_set(dom, hartindex, ctx);
}
@@ -271,6 +339,86 @@ int sbi_domain_context_exit(void)
return switch_to_next_domain_context(ctx, dom_ctx);
}
+int sbi_domain_context_exit_to_prev(void)
+{
+ struct hart_context *ctx = hart_context_thishart_get();
+ struct hart_context *dom_ctx;
+
+ if (!ctx)
+ return SBI_EINVAL;
+
+ dom_ctx = ctx->prev_ctx;
+ if (!dom_ctx)
+ return SBI_ENOENT;
+
+ /*
+ * Returning to a previous domain implies it has already executed,
+ * so its context is runnable even if not marked initialized.
+ */
+ dom_ctx->initialized = true;
+
+ /* Clear prev context to avoid unintended re-entry */
+ ctx->prev_ctx = NULL;
+
+ return switch_to_next_domain_context(ctx, dom_ctx);
+}
+
+void sbi_domain_context_request_return_to_prev(void)
+{
+ sbi_domain_defer_return_mask |= (1UL << current_hartindex());
+}
+
+bool sbi_domain_context_need_return_to_prev(void)
+{
+ u32 hartindex = current_hartindex();
+ bool need = !!(sbi_domain_defer_return_mask & (1UL << hartindex));
+
+ if (need)
+ sbi_domain_defer_return_mask &= ~(1UL << hartindex);
+
+ return need;
+}
+
+void sbi_domain_context_mark_switched(void)
+{
+ sbi_domain_switched_mask |= (1UL << current_hartindex());
+}
+
+bool sbi_domain_context_consume_switched(void)
+{
+ u32 hartindex = current_hartindex();
+ bool switched = !!(sbi_domain_switched_mask & (1UL << hartindex));
+
+ if (switched)
+ sbi_domain_switched_mask &= ~(1UL << hartindex);
+
+ return switched;
+}
+
+bool sbi_domain_context_pending_notify_smode(struct sbi_domain *dom,
+ u32 hartindex)
+{
+ struct hart_context *ctx;
+ bool already;
+
+ if (!dom)
+ return false;
+
+ ctx = hart_context_get(dom, hartindex);
+ if (!ctx) {
+ if (hart_context_init(hartindex))
+ return false;
+ ctx = hart_context_get(dom, hartindex);
+ if (!ctx)
+ return false;
+ }
+
+ already = ctx->smode_notify_pending;
+ ctx->smode_notify_pending = true;
+
+ return already;
+}
+
int sbi_domain_context_init(void)
{
/**
diff --git a/lib/sbi/sbi_trap.c b/lib/sbi/sbi_trap.c
index f41db4d1..79d9ca5a 100644
--- a/lib/sbi/sbi_trap.c
+++ b/lib/sbi/sbi_trap.c
@@ -24,6 +24,7 @@
#include <sbi/sbi_sse.h>
#include <sbi/sbi_timer.h>
#include <sbi/sbi_trap.h>
+#include <sbi/sbi_domain_context.h>
static void sbi_trap_error_one(const struct sbi_trap_context *tcntx,
const char *prefix, u32 hartid, u32 depth)
@@ -372,6 +373,21 @@ trap_done:
if (sbi_mstatus_prev_mode(regs->mstatus) != PRV_M)
sbi_sse_process_pending_events(regs);
+ if (sbi_domain_context_need_return_to_prev()) {
+ int rc = sbi_domain_context_exit_to_prev();
+
+ if (rc && rc != SBI_ENOENT)
+ sbi_printf("return_to_prev failed, rc=%d\n",
+ rc);
+ }
+
+ if (sbi_domain_context_consume_switched()) {
+ struct sbi_trap_context *newctx = sbi_trap_get_context(scratch);
+
+ sbi_trap_set_context(scratch, newctx->prev_context);
+ return newctx;
+ }
+
sbi_trap_set_context(scratch, tcntx->prev_context);
return tcntx;
}
--
2.25.1
* [PATCH 03/10] lib: sbi: Add Virtual IRQ (VIRQ) subsystem
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
2026-05-14 22:57 ` [PATCH 01/10] lib: irqchip: add S-mode notification helpers Raymond Mao
2026-05-14 22:57 ` [PATCH 02/10] lib: sbi: domain: adaptation for supporting VIRQ couriering domain context switch Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 04/10] lib: sbi: Add VIRQ ecall extension Raymond Mao
` (6 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
VIRQ is an abstraction framework providing per-MPXY-channel
HWIRQ<->VIRQ mapping, plus per-(domain,hart) VIRQ routing and
couriering. It notifies the S-mode payload via the irqchip SEIP helpers
when a VIRQ is mapped/routed/enqueued, and provides an ecall extension
for an S-mode payload to pop/complete a pending VIRQ.
The VIRQ layer is composed of three major parts:
1. VIRQ mapping and allocation
- Provides a stable per-MPXY-channel mapping between a host
physical interrupt endpoint (chip_uid, hwirq) and a VIRQ
number.
- VIRQ number allocation uses a scalable bitmap.
2. HWIRQ->(Domain,hart) routing rules
- Routing rules are derived from sysirq nodes by
interrupts-extended property, for example:
interrupts-extended =
<&aplic HWIRQx IRQ_TYPE>, // virq 0
<&aplic HWIRQy IRQ_TYPE>, // virq 1
...;
- VIRQ numbers are allocated from zero, implied by the order of the
entries in the interrupts-extended property.
- Each entry is cached as a routing rule.
- Default behavior: if an asserted HWIRQ does not match any
routing rule, it will be routed to the root domain
(channel 0) as a fallback.
3. Per-(domain,hart) pending queue couriering
- Each domain maintains a per-hart ring buffer queue of pending
VIRQs. A courier handler enqueues VIRQs on HWIRQ assertion.
- The couriering is domain-aware. It switches to the target domain
when it differs from the current one, and requests a return to the
previous domain after SEIP completion.
- S-mode notification is edge-triggered based on the irqchip notify
state, and is cleared only when the queue becomes empty.
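The per-(domain,hart) pending queue semantics can be sketched as a
standalone ring buffer (names and a small QSIZE are illustrative; the
real sbi_virq.c implementation adds locking plus per-entry channel and
irqchip fields, and uses VIRQ_QSIZE 64):

```c
#include <assert.h>

#define QSIZE 4
#define VIRQ_INVALID 0xffffffffU

/* Model of the per-(domain,hart) pending VIRQ ring buffer.
 * head/tail are free-running counters; QSIZE is a power of two. */
static unsigned int q[QSIZE], head, tail;

static int enqueue(unsigned int virq)
{
	if (tail - head == QSIZE)
		return -1;	/* full: drop (real code returns SBI_ENOMEM) */
	q[tail % QSIZE] = virq;
	tail++;
	return 0;
}

/* Zero is a legal VIRQ, so "none pending" is VIRQ_INVALID, not 0. */
static unsigned int pop(void)
{
	unsigned int virq;

	if (head == tail)
		return VIRQ_INVALID;
	virq = q[head % QSIZE];
	head++;
	return virq;
}

/* Notification is cleared only once this reports true. */
static int queue_empty(void)
{
	return head == tail;
}
```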
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi/sbi_domain.h | 2 +
include/sbi/sbi_virq.h | 492 +++++++++++++++++
lib/sbi/objects.mk | 1 +
lib/sbi/sbi_domain.c | 10 +
lib/sbi/sbi_virq.c | 1136 ++++++++++++++++++++++++++++++++++++++
5 files changed, 1641 insertions(+)
create mode 100644 include/sbi/sbi_virq.h
create mode 100644 lib/sbi/sbi_virq.c
diff --git a/include/sbi/sbi_domain.h b/include/sbi/sbi_domain.h
index c507023c..7e288cd8 100644
--- a/include/sbi/sbi_domain.h
+++ b/include/sbi/sbi_domain.h
@@ -219,6 +219,8 @@ struct sbi_domain {
bool fw_region_inited;
/** Whether to notify S-mode for VIRQ couriering */
bool virq_seip_notify;
+ /** per-domain wired-IRQ courier state */
+ void *virq_priv;
};
/** The root domain instance */
diff --git a/include/sbi/sbi_virq.h b/include/sbi/sbi_virq.h
new file mode 100644
index 00000000..566ae827
--- /dev/null
+++ b/include/sbi/sbi_virq.h
@@ -0,0 +1,492 @@
+/* SPDX-License-Identifier: BSD-2-Clause */
+/*
+ * Virtual IRQ (VIRQ) courier/routing layer for OpenSBI.
+ *
+ * This header defines:
+ * 1) VIRQ number allocation and (chip_uid,hwirq) <-> VIRQ mapping
+ * 2) HWIRQ -> Domain routing rules (from DeviceTree "opensbi,mpxy-sysirq")
+ * 3) Per-(domain,hart) pending queue (push in M-mode, pop/complete in S-mode)
+ *
+ * High-level design intent:
+ * - All physical host IRQs are handled in M-mode by host irqchip drivers.
+ * - For each incoming HWIRQ, OpenSBI determines the destination domain using
+ * DT-defined routing rules and enqueues a VIRQ into the per-(domain,hart)
+ * pending queue.
+ * - S-mode payload consumes pending VIRQs via pop(), and completes them via
+ * complete(), which unmasks the corresponding host HWIRQ line.
+ * - M-mode notifies S-mode via the irqchip notification mechanism.
+ *
+ * Notes:
+ * - "opensbi,mpxy-sysirq" routing is derived from the sysirq node's
+ * "interrupts-extended" entries. It does not encode privilege level
+ * delivery. Hardware delivery (MEI vs SEI) is determined by platform IRQ
+ * topology and interrupt-parent.
+ *
+ * Copyright (c) 2026 RISCstar Solutions Corporation.
+ *
+ * Author: Raymond Mao <raymond.mao@riscstar.com>
+ */
+
+#ifndef __SBI_VIRQ_H__
+#define __SBI_VIRQ_H__
+
+#include <sbi/sbi_domain.h>
+#include <sbi/sbi_irqchip.h>
+#include <sbi/riscv_locks.h>
+#include <sbi/sbi_types.h>
+
+/*
+ * Current implementation behavior when queue overflows:
+ * - Drop the incoming VIRQ
+ * - Return SBI_ENOMEM
+ */
+#define VIRQ_QSIZE 64
+
+/*
+ * Reverse mapping table is chunked to avoid a single large static array.
+ * VIRQ is used as an index into a chunk; chunks are allocated on demand.
+ */
+#define VIRQ_CHUNK_SHIFT 6U
+#define VIRQ_CHUNK_SIZE (1U << VIRQ_CHUNK_SHIFT)
+#define VIRQ_CHUNK_MASK (VIRQ_CHUNK_SIZE - 1U)
+
+/* Minimum growth step for forward mapping vector and related metadata. */
+#define VEC_GROW_MIN 16U
+
+/* Returned by pop when no pending VIRQ is available. */
+#define VIRQ_INVALID 0xffffffffU
+
+/*
+ * VIRQ allocator and (chip_uid,hwirq) <-> VIRQ mapping
+ */
+
+/*
+ * VIRQ mapping model:
+ * - Forward mapping: (chip_uid,hwirq) -> VIRQ
+ * Implementation: dynamic vector of entries (linear search).
+ *
+ * - Reverse mapping: VIRQ -> (chip_uid,hwirq)
+ * Implementation: chunked table allocated on demand, O(1) lookup.
+ *
+ * - VIRQ number allocation:
+ * Implementation: growable bitmap; capacity expands as needed.
+ *
+ * Memory usage scales with the number of installed mappings.
+ */
+
+/* Entry of reverse mapping table: represents (chip_uid,hwirq) endpoint */
+struct virq_entry {
+ u32 chip_uid;
+ u32 hwirq;
+};
+
+/* Chunked reverse mapping table: VIRQ -> (chip_uid,hwirq) */
+struct virq_chunk {
+ struct virq_entry e[VIRQ_CHUNK_SIZE];
+};
+
+/*
+ * HWIRQ -> Domain routing rules
+ */
+
+/*
+ * A routing rule maps a single HWIRQ to a domain.
+ *
+ * Rules are populated once during cold boot while parsing the DT
+ * opensbi-domains configuration (sysirq node "opensbi,mpxy-sysirq").
+ *
+ * DT encodes mapping via "interrupts-extended"; the index within this array
+ * becomes the VIRQ number for the given MPXY channel.
+ *
+ * Policy notes:
+ * - Duplicate HWIRQ entries are rejected and return SBI_EALREADY.
+ * - If no rule matches, routing falls back to the root domain (&root).
+ */
+struct sbi_virq_route_rule {
+ u32 hwirq;
+ struct sbi_domain *dom; /* owner domain */
+ u32 channel_id; /* VIRQ space/channel */
+};
+
+/*
+ * Courier context passed as 'opaque' to sbi_virq_courier_handler(), created
+ * per host irqchip.
+ *
+ * The courier handler needs to:
+ * - map (chip_uid,hwirq) -> VIRQ
+ * - mask/unmask HWIRQ using the correct irqchip device
+ * Therefore the irqchip device pointer is carried here.
+ */
+struct sbi_virq_courier_ctx {
+ struct sbi_irqchip_device *chip;
+};
+
+/*
+ * Per-(domain,hart) pending VIRQ state and queue management
+ */
+
+/*
+ * Per-(domain,hart) VIRQ state.
+ *
+ * Locking:
+ * - lock protects head/tail and q[].
+ *
+ * Queue semantics:
+ * - q[] stores VIRQs pending handling for this (domain,hart).
+ * - enqueue is performed by M-mode (courier handler) according to route rule
+ * populated from DT.
+ * - pop/complete is performed by S-mode payload running in the destination
+ * domain on the current hart.
+ * - chip caches the irqchip device for unmasking on complete().
+ */
+struct sbi_domain_virq_state {
+ spinlock_t lock;
+ u32 head;
+ u32 tail;
+
+ /* Pending VIRQ ring buffer. */
+ struct {
+ u32 virq;
+ u32 channel_id;
+ struct sbi_irqchip_device *chip;
+ } q[VIRQ_QSIZE];
+
+ /* Last popped entry for completion. */
+ u32 last_pop_virq;
+ u32 last_pop_channel_id;
+ struct sbi_irqchip_device *last_pop_chip;
+
+ /* Return to previous domain after VIRQ completion. */
+ bool return_to_prev;
+};
+
+/*
+ * Per-domain private VIRQ context.
+ *
+ * Attached to struct sbi_domain and contains per-hart states.
+ */
+struct sbi_domain_virq_priv {
+ /* number of platform harts */
+ u32 nharts;
+
+ /* number of allocated per-hart states */
+ u32 st_count;
+
+ /* per-hart VIRQ state pointer array (indexed by hart index) */
+ struct sbi_domain_virq_state *st_by_hart[];
+};
+
+/* Courier binding used when enqueuing a VIRQ. */
+struct sbi_virq_courier_binding {
+ /* destination domain */
+ struct sbi_domain *dom;
+
+ /* irqchip device that asserted the HWIRQ */
+ struct sbi_irqchip_device *chip;
+
+ /* VIRQ space/channel ID */
+ u32 channel_id;
+
+ /* VIRQ number to enqueue */
+ u32 virq;
+};
+
+/*
+ * Public APIs
+ */
+
+/*
+ * Initialize a per-channel VIRQ map.
+ *
+ * @channel_id:
+ * VIRQ space/channel ID (0 is the default channel).
+ *
+ * @init_virq_cap:
+ * Initial capacity in VIRQ bits (e.g., 256). The implementation may grow beyond it.
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_ENOMEM on allocation failure
+ */
+int sbi_virq_map_init(u32 channel_id, u32 init_virq_cap);
+
+/*
+ * Create or get a stable mapping for (channel_id, chip_uid, hwirq) -> VIRQ.
+ *
+ * @channel_id:
+ * Paravirt channel ID; VIRQ numbering is local to each channel.
+ *
+ * @chip_uid:
+ * Unique 32-bit ID of the host irqchip device.
+ *
+ * @hwirq:
+ * Host HWIRQ number as produced by the irqchip driver (e.g. APLIC claim ID).
+ *
+ * @allow_identity:
+ * If true, allocator may attempt VIRQ == hwirq for small ranges.
+ *
+ * @identity_limit:
+ * Upper bound (exclusive) for identity mapping trial: hwirq < identity_limit.
+ *
+ * @out_virq:
+ * Output pointer receiving the mapped/allocated VIRQ (0 is valid).
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_ENOMEM on allocation failure
+ * SBI_ENOSPC if allocator cannot allocate
+ * SBI_EINVAL on invalid parameters
+ */
+int sbi_virq_map_one(u32 channel_id, u32 chip_uid, u32 hwirq,
+ bool allow_identity, u32 identity_limit, u32 *out_virq);
+
+/*
+ * Force a mapping for (channel_id, chip_uid, hwirq) -> VIRQ.
+ *
+ * @channel_id:
+ * Paravirt channel ID; VIRQ numbering is local to each channel.
+ *
+ * @chip_uid:
+ * Unique 32-bit ID of the host irqchip device.
+ *
+ * @hwirq:
+ * Host HWIRQ number as produced by the irqchip driver.
+ *
+ * @virq:
+ * VIRQ number to assign (0 is valid).
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_ENOMEM on allocation failure
+ * SBI_EINVAL on invalid parameters
+ * SBI_EALREADY if a different mapping already exists
+ */
+int sbi_virq_map_set(u32 channel_id, u32 chip_uid, u32 hwirq, u32 virq);
+
+/*
+ * Ensure VIRQ map capacity for a given channel.
+ *
+ * @channel_id:
+ * Paravirt channel ID.
+ *
+ * @min_virq_cap:
+ * Minimum VIRQ bitmap capacity in bits (will be rounded up).
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL if the map is not initialized (channel 0)
+ * SBI_ENOMEM on allocation failure
+ */
+int sbi_virq_map_ensure_cap(u32 channel_id, u32 min_virq_cap);
+
+/*
+ * Lookup existing mapping: (channel_id, chip_uid, hwirq) -> VIRQ.
+ *
+ * @channel_id:
+ * Paravirt channel ID; VIRQ numbering is local to each channel.
+ *
+ * @chip_uid:
+ * Irqchip unique id.
+ *
+ * @hwirq:
+ * Host hwirq number.
+ *
+ * @out_virq:
+ * Output VIRQ (0 is valid).
+ *
+ * Return:
+ * SBI_OK if found
+ * SBI_ENOENT if not mapped
+ * SBI_EINVAL on invalid input
+ */
+int sbi_virq_hwirq2virq(u32 channel_id, u32 chip_uid, u32 hwirq,
+ u32 *out_virq);
+
+/*
+ * Reverse lookup: (channel_id, VIRQ) -> (chip_uid, hwirq).
+ *
+ * @channel_id:
+ * Paravirt channel ID; VIRQ numbering is local to each channel.
+ *
+ * @virq:
+ * VIRQ number to look up.
+ *
+ * @out_chip_uid:
+ * Output pointer receiving irqchip unique id.
+ *
+ * @out_hwirq:
+ * Output pointer receiving host hwirq number.
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL if virq is VIRQ_INVALID, out of range, not allocated, or
+ * reverse entry missing
+ */
+int sbi_virq_virq2hwirq(u32 channel_id, u32 virq,
+ u32 *out_chip_uid, u32 *out_hwirq);
+
+/*
+ * Unmap a single VIRQ mapping and free the VIRQ number.
+ *
+ * @virq:
+ * VIRQ number to unmap.
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL if virq is invalid or state is inconsistent
+ */
+int sbi_virq_unmap_one(u32 virq);
+
+/*
+ * Uninitialize the VIRQ mapping allocator and free all resources.
+ *
+ * Notes:
+ * - This frees bitmap, forward vector, and reverse chunks.
+ */
+void sbi_virq_map_uninit(void);
+
+/*
+ * Reset all HWIRQ->Domain routing rules (frees the rule array).
+ *
+ * Typical usage:
+ * - Called once at cold boot during init before parsing DT domains.
+ */
+void sbi_virq_route_reset(void);
+
+/*
+ * Add a routing rule: hwirq -> dom with channel_id.
+ *
+ * @dom:
+ * Target domain that should receive HWIRQs in this range.
+ *
+ * @hwirq:
+ * HWIRQ number to route.
+ *
+ * @channel_id:
+ * Paravirt channel ID for VIRQ mapping (MPXY channel).
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL on invalid parameters
+ * SBI_ENOMEM on allocation failure
+ * SBI_EALREADY if the HWIRQ already has a rule
+ */
+int sbi_virq_route_add(struct sbi_domain *dom, u32 hwirq, u32 channel_id);
+
+/*
+ * Lookup destination domain for a given HWIRQ.
+ *
+ * @hwirq:
+ * Incoming host HWIRQ number.
+ *
+ * @out_dom:
+ * Output pointer receiving destination domain. If no rule matches, &root
+ * is returned.
+ *
+ * @out_channel_id:
+ * Output pointer receiving channel id if non-NULL.
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL on invalid parameters
+ */
+int sbi_virq_route_lookup(u32 hwirq, struct sbi_domain **out_dom,
+ u32 *out_channel_id);
+
+/*
+ * Enqueue a VIRQ for the destination domain on the current hart.
+ *
+ * @c:
+ * Courier binding containing:
+ * - c->dom : destination domain
+ * - c->chip : irqchip device pointer
+ * - c->virq : VIRQ number
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL on invalid parameters
+ * SBI_ENODEV if per-(domain,hart) state is not available
+ * SBI_ENOMEM if queue is full
+ */
+int sbi_virq_enqueue(struct sbi_virq_courier_binding *c);
+
+/*
+ * Pop the next pending VIRQ for the current domain on the current hart.
+ *
+ * Return:
+ * VIRQ_INVALID if none pending or state not available
+ * otherwise a VIRQ number (zero is legal)
+ */
+u32 sbi_virq_pop_thishart(void);
+
+/*
+ * Complete a previously couriered VIRQ for the current domain/hart.
+ *
+ * @virq:
+ * VIRQ to complete.
+ */
+void sbi_virq_complete_thishart(u32 virq);
+
+/* Return to previous domain if a VIRQ-driven switch is pending. */
+void sbi_virq_return_to_prev_if_needed(void);
+
+
+/*
+ * Courier handler intended to be registered by host irqchip driver.
+ *
+ * @hwirq:
+ * Incoming host HWIRQ number asserted on the irqchip.
+ *
+ * @opaque:
+ * Pointer to a valid struct sbi_virq_courier_ctx, which provides the
+ * irqchip device pointer used for mapping and mask/unmask.
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL on invalid parameters
+ * Other SBI_E* propagated from mapping or enqueue
+ */
+int sbi_virq_courier_handler(u32 hwirq, void *opaque);
+
+/*
+ * Initialize per-domain VIRQ state.
+ *
+ * @dom:
+ * Domain to initialize.
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EINVAL on invalid parameters
+ * SBI_ENOMEM on allocation failure
+ */
+int sbi_virq_domain_init(struct sbi_domain *dom);
+
+/*
+ * Free per-domain VIRQ state.
+ *
+ * @dom:
+ * Domain whose per-hart VIRQ state should be freed.
+ */
+void sbi_virq_domain_exit(struct sbi_domain *dom);
+
+/*
+ * Initialize VIRQ subsystem (mapping allocator + route rules).
+ * Must be called once before parsing sysirq DT nodes.
+ *
+ * @init_virq_cap:
+ * Initial VIRQ bitmap capacity in bits
+ *
+ * Return:
+ * SBI_OK on success
+ * SBI_EALREADY if called more than once
+ * SBI_ENOMEM on allocation failure
+ * Other SBI_E* error codes propagated from mapping init
+ */
+int sbi_virq_init(u32 init_virq_cap);
+
+/*
+ * Query whether the VIRQ subsystem is initialized.
+ */
+bool sbi_virq_is_inited(void);
+
+#endif
diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
index 07d13229..184bf173 100644
--- a/lib/sbi/objects.mk
+++ b/lib/sbi/objects.mk
@@ -86,6 +86,7 @@ libsbi-objs-y += sbi_illegal_insn.o
libsbi-objs-y += sbi_init.o
libsbi-objs-y += sbi_ipi.o
libsbi-objs-y += sbi_irqchip.o
+libsbi-objs-y += sbi_virq.o
libsbi-objs-y += sbi_platform.o
libsbi-objs-y += sbi_pmu.o
libsbi-objs-y += sbi_dbtr.o
diff --git a/lib/sbi/sbi_domain.c b/lib/sbi/sbi_domain.c
index 7030848d..2a846eea 100644
--- a/lib/sbi/sbi_domain.c
+++ b/lib/sbi/sbi_domain.c
@@ -18,6 +18,7 @@
#include <sbi/sbi_platform.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
+#include <sbi/sbi_virq.h>
SBI_LIST_HEAD(domain_list);
@@ -693,6 +694,15 @@ int sbi_domain_register(struct sbi_domain *dom,
return rc;
}
+ /* Init per-domain wired-IRQ courier state */
+ rc = sbi_virq_domain_init(dom);
+ if (rc) {
+ sbi_printf("%s: virq init failed for %s (error %d)\n",
+ __func__, dom->name, rc);
+ sbi_list_del(&dom->node);
+ return rc;
+ }
+
return 0;
}
diff --git a/lib/sbi/sbi_virq.c b/lib/sbi/sbi_virq.c
new file mode 100644
index 00000000..fcd83369
--- /dev/null
+++ b/lib/sbi/sbi_virq.c
@@ -0,0 +1,1136 @@
+// SPDX-License-Identifier: BSD-2-Clause
+/*
+ * Copyright (c) 2026 RISCstar Solutions.
+ *
+ * Author: Raymond Mao <raymond.mao@riscstar.com>
+ */
+
+#include <sbi/sbi_console.h>
+#include <sbi/sbi_error.h>
+#include <sbi/sbi_heap.h>
+#include <sbi/sbi_irqchip.h>
+#include <sbi/sbi_hartmask.h>
+#include <sbi/sbi_platform.h>
+#include <sbi/sbi_scratch.h>
+#include <sbi/sbi_string.h>
+#include <sbi/sbi_virq.h>
+#include <sbi/sbi_domain.h>
+#include <sbi/sbi_domain_context.h>
+#include <sbi/riscv_asm.h>
+#include <sbi/riscv_locks.h>
+
+struct map_node {
+ u32 chip_uid;
+ u32 hwirq;
+ u32 virq;
+};
+
+struct sbi_virq_map {
+ spinlock_t lock;
+
+ /* allocator bitmap */
+ unsigned long *bmap;
+ u32 bmap_nbits; /* virq range: [0..nbits-1] */
+
+ /* reverse table: virq -> endpoint */
+ struct virq_chunk **chunks;
+ u32 chunks_cap; /* number of chunk pointers */
+
+ /* forward table: vector of mappings, linear search */
+ struct map_node *nodes;
+ u32 nodes_cnt;
+ u32 nodes_cap;
+};
+
+struct sbi_virq_map_list {
+ u32 channel_id;
+ struct sbi_virq_map map;
+};
+
+/*
+ * HWIRQ -> Domain routing rules
+ */
+
+struct sbi_virq_router {
+ spinlock_t lock;
+ struct sbi_virq_route_rule *rules;
+ u32 cnt;
+ u32 cap;
+};
+
+static struct sbi_virq_map g_virq_map; /* channel 0 */
+static struct sbi_virq_map_list *g_virq_maps;
+static u32 g_virq_maps_cnt;
+static u32 g_virq_maps_cap;
+static spinlock_t g_virq_maps_lock = SPIN_LOCK_INITIALIZER;
+static struct sbi_virq_router g_router;
+static bool g_virq_inited;
+
+void sbi_virq_route_reset(void)
+{
+ spin_lock(&g_router.lock);
+ if (g_router.rules) {
+ sbi_free(g_router.rules);
+ g_router.rules = NULL;
+ }
+ g_router.cnt = 0;
+ g_router.cap = 0;
+ spin_unlock(&g_router.lock);
+}
+
+static int router_ensure_cap(u32 need)
+{
+ struct sbi_virq_route_rule *newp;
+ u32 newcap;
+
+ if (g_router.cap >= need)
+ return 0;
+
+ newcap = g_router.cap ? (g_router.cap << 1) : 8;
+ while (newcap < need)
+ newcap <<= 1;
+
+ newp = sbi_zalloc((size_t)newcap * sizeof(*newp));
+ if (!newp)
+ return SBI_ENOMEM;
+
+ if (g_router.rules) {
+ sbi_memcpy(newp, g_router.rules,
+ (size_t)g_router.cnt * sizeof(*newp));
+ sbi_free(g_router.rules);
+ }
+
+ g_router.rules = newp;
+ g_router.cap = newcap;
+
+ return SBI_OK;
+}
+
+int sbi_virq_route_add(struct sbi_domain *dom, u32 hwirq, u32 channel_id)
+{
+ int rc;
+
+ if (!dom)
+ return SBI_EINVAL;
+
+ spin_lock(&g_router.lock);
+
+ /* Reject duplicates to keep routing unambiguous */
+ for (u32 i = 0; i < g_router.cnt; i++) {
+ if (g_router.rules[i].hwirq == hwirq) {
+ spin_unlock(&g_router.lock);
+ return SBI_EALREADY;
+ }
+ }
+
+ rc = router_ensure_cap(g_router.cnt + 1);
+ if (rc) {
+ spin_unlock(&g_router.lock);
+ return rc;
+ }
+
+ g_router.rules[g_router.cnt].hwirq = hwirq;
+ g_router.rules[g_router.cnt].dom = dom;
+ g_router.rules[g_router.cnt].channel_id = channel_id;
+ g_router.cnt++;
+
+ spin_unlock(&g_router.lock);
+
+ return SBI_OK;
+}
+
+int sbi_virq_route_lookup(u32 hwirq, struct sbi_domain **out_dom,
+ u32 *out_channel_id)
+{
+ /* Fast path: no rules */
+ if (!g_router.cnt) {
+ if (out_dom)
+ *out_dom = &root;
+ if (out_channel_id)
+ *out_channel_id = 0;
+ return SBI_OK;
+ }
+
+ spin_lock(&g_router.lock);
+ for (u32 i = 0; i < g_router.cnt; i++) {
+ if (hwirq == g_router.rules[i].hwirq) {
+ struct sbi_domain *d = g_router.rules[i].dom;
+ u32 cid = g_router.rules[i].channel_id;
+
+ spin_unlock(&g_router.lock);
+ if (out_dom)
+ *out_dom = d ? d : &root;
+ if (out_channel_id)
+ *out_channel_id = cid;
+ return SBI_OK;
+ }
+ }
+ spin_unlock(&g_router.lock);
+
+ if (out_dom)
+ *out_dom = &root;
+ if (out_channel_id)
+ *out_channel_id = 0;
+ return SBI_OK;
+}
+
+static inline void virq_state_init(struct sbi_domain_virq_state *st)
+{
+ SPIN_LOCK_INIT(st->lock);
+ st->head = 0;
+ st->tail = 0;
+ st->return_to_prev = false;
+}
+
+static inline
+struct sbi_domain_virq_state *domain_virq_thishart(struct sbi_domain *dom)
+{
+ unsigned long hartidx = sbi_hartid_to_hartindex(current_hartid());
+ struct sbi_domain_virq_priv *p;
+
+ p = (struct sbi_domain_virq_priv *)dom->virq_priv;
+ if (!p || hartidx >= p->nharts)
+ return NULL;
+
+ return p->st_by_hart[hartidx];
+}
+
+static inline bool q_full(struct sbi_domain_virq_state *st)
+{
+ return ((st->tail + 1) % VIRQ_QSIZE) == st->head;
+}
+
+static inline bool q_empty(struct sbi_domain_virq_state *st)
+{
+ return st->head == st->tail;
+}
+
+static inline void virq_set_domain_return_flag(struct sbi_domain *dom,
+ bool return_to_prev)
+{
+ struct sbi_domain_virq_state *st = domain_virq_thishart(dom);
+
+ if (!st)
+ return;
+
+ spin_lock(&st->lock);
+ st->return_to_prev = return_to_prev;
+ spin_unlock(&st->lock);
+}
+
+static u32 sbi_virq_platform_hart_count(void)
+{
+ struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
+ const struct sbi_platform *plat = sbi_platform_ptr(scratch);
+
+ return sbi_platform_hart_count(plat);
+}
+
+static int bmap_alloc_one(struct sbi_virq_map *m, u32 *out_virq)
+{
+ u32 v;
+
+ for (v = 0; v < m->bmap_nbits; v++) {
+ if (!bitmap_test(m->bmap, (int)v)) {
+ bitmap_set(m->bmap, (int)v, 1);
+ *out_virq = v;
+ return 0;
+ }
+ }
+
+ return SBI_ENOSPC;
+}
+
+static int bmap_alloc_specific(struct sbi_virq_map *m, u32 virq)
+{
+ if (virq >= m->bmap_nbits)
+ return SBI_EINVAL;
+ if (bitmap_test(m->bmap, (int)virq))
+ return SBI_EALREADY;
+ bitmap_set(m->bmap, (int)virq, 1);
+
+ return 0;
+}
+
+static void bmap_free_one(struct sbi_virq_map *m, u32 virq)
+{
+ if (virq < m->bmap_nbits)
+ bitmap_clear(m->bmap, (int)virq, 1);
+}
+
+static int chunks_ensure_cap(struct sbi_virq_map *m, u32 new_bmap_nbits)
+{
+ u32 new_chunks_cap =
+ (new_bmap_nbits + VIRQ_CHUNK_SIZE - 1U) >> VIRQ_CHUNK_SHIFT;
+ struct virq_chunk **newp;
+
+ if (new_chunks_cap <= m->chunks_cap)
+ return 0;
+
+ newp = sbi_zalloc((size_t)new_chunks_cap * sizeof(*newp));
+ if (!newp)
+ return SBI_ENOMEM;
+
+ if (m->chunks) {
+ sbi_memcpy(newp, m->chunks,
+ (size_t)m->chunks_cap * sizeof(*newp));
+ sbi_free(m->chunks);
+ }
+
+ m->chunks = newp;
+ m->chunks_cap = new_chunks_cap;
+
+ return 0;
+}
+
+static int bmap_grow(struct sbi_virq_map *m, u32 new_nbits)
+{
+ unsigned long *newmap;
+
+ if (new_nbits <= m->bmap_nbits)
+ return 0;
+
+ newmap = sbi_zalloc(bitmap_estimate_size((int)new_nbits));
+ if (!newmap)
+ return SBI_ENOMEM;
+
+ bitmap_zero(newmap, (int)new_nbits);
+ bitmap_copy(newmap, m->bmap, (int)m->bmap_nbits);
+
+ sbi_free(m->bmap);
+ m->bmap = newmap;
+ m->bmap_nbits = new_nbits;
+
+ return chunks_ensure_cap(m, new_nbits);
+}
+
+static struct virq_entry *rev_get_or_alloc(struct sbi_virq_map *m, u32 virq)
+{
+ u32 ci = virq >> VIRQ_CHUNK_SHIFT;
+ u32 off = virq & VIRQ_CHUNK_MASK;
+
+ if (ci >= m->chunks_cap)
+ return NULL;
+
+ if (!m->chunks[ci]) {
+ m->chunks[ci] = sbi_zalloc(sizeof(struct virq_chunk));
+ if (!m->chunks[ci])
+ return NULL;
+ }
+ return &m->chunks[ci]->e[off];
+}
+
+static struct virq_entry *rev_get_existing(struct sbi_virq_map *m, u32 virq)
+{
+ u32 ci = virq >> VIRQ_CHUNK_SHIFT;
+ u32 off = virq & VIRQ_CHUNK_MASK;
+
+ if (ci >= m->chunks_cap || !m->chunks[ci])
+ return NULL;
+ return &m->chunks[ci]->e[off];
+}
+
+static void rev_clear(struct sbi_virq_map *m, u32 virq)
+{
+ struct virq_entry *e = rev_get_existing(m, virq);
+
+ if (e) {
+ e->chip_uid = 0;
+ e->hwirq = 0;
+ }
+}
+
+static int vec_ensure_cap(struct sbi_virq_map *m, u32 need_cnt)
+{
+ struct map_node *newp;
+ u32 newcap;
+
+ if (m->nodes_cap >= need_cnt)
+ return 0;
+
+	newcap = m->nodes_cap ? (m->nodes_cap << 1) : VEC_GROW_MIN;
+ while (newcap < need_cnt)
+ newcap <<= 1;
+
+ newp = sbi_zalloc((size_t)newcap * sizeof(*newp));
+ if (!newp)
+ return SBI_ENOMEM;
+
+ if (m->nodes) {
+ sbi_memcpy(newp, m->nodes,
+ (size_t)m->nodes_cnt * sizeof(*newp));
+ sbi_free(m->nodes);
+ }
+
+ m->nodes = newp;
+ m->nodes_cap = newcap;
+
+ return 0;
+}
+
+static int forward_find_idx(struct sbi_virq_map *m,
+ u32 chip_uid, u32 hwirq, u32 *out_idx)
+{
+ u32 i;
+
+ for (i = 0; i < m->nodes_cnt; i++) {
+ if (m->nodes[i].chip_uid == chip_uid &&
+ m->nodes[i].hwirq == hwirq) {
+ *out_idx = i;
+ return 0;
+ }
+ }
+
+ return SBI_ENOENT;
+}
+
+static int virq_map_init_one(struct sbi_virq_map *m, u32 init_virq_cap)
+{
+ int rc;
+
+ sbi_memset(m, 0, sizeof(*m));
+ SPIN_LOCK_INIT(m->lock);
+
+ if (init_virq_cap < 8U)
+ init_virq_cap = 8U;
+
+ m->bmap_nbits = init_virq_cap;
+	m->bmap = sbi_zalloc(bitmap_estimate_size((int)m->bmap_nbits));
+ if (!m->bmap)
+ return SBI_ENOMEM;
+
+ bitmap_zero(m->bmap, (int)m->bmap_nbits);
+
+ rc = chunks_ensure_cap(m, m->bmap_nbits);
+ if (rc)
+ return rc;
+
+ return SBI_OK;
+}
+
+static struct sbi_virq_map *virq_map_get(u32 channel_id, bool create,
+ u32 init_virq_cap)
+{
+ u32 i;
+ struct sbi_virq_map_list *newp;
+
+ if (channel_id == 0)
+ return &g_virq_map;
+
+ spin_lock(&g_virq_maps_lock);
+ for (i = 0; i < g_virq_maps_cnt; i++) {
+ if (g_virq_maps[i].channel_id == channel_id) {
+ spin_unlock(&g_virq_maps_lock);
+ return &g_virq_maps[i].map;
+ }
+ }
+ if (!create) {
+ spin_unlock(&g_virq_maps_lock);
+ return NULL;
+ }
+
+ if (g_virq_maps_cnt == g_virq_maps_cap) {
+ u32 newcap = g_virq_maps_cap ? (g_virq_maps_cap << 1) : 4;
+
+ newp = sbi_zalloc((size_t)newcap * sizeof(*newp));
+ if (!newp) {
+ spin_unlock(&g_virq_maps_lock);
+ return NULL;
+ }
+ if (g_virq_maps) {
+ sbi_memcpy(newp, g_virq_maps,
+ (size_t)g_virq_maps_cnt * sizeof(*newp));
+ sbi_free(g_virq_maps);
+ }
+ g_virq_maps = newp;
+ g_virq_maps_cap = newcap;
+ }
+
+ g_virq_maps[g_virq_maps_cnt].channel_id = channel_id;
+ if (virq_map_init_one(&g_virq_maps[g_virq_maps_cnt].map,
+ init_virq_cap)) {
+ spin_unlock(&g_virq_maps_lock);
+ return NULL;
+ }
+ g_virq_maps_cnt++;
+ spin_unlock(&g_virq_maps_lock);
+
+ return &g_virq_maps[g_virq_maps_cnt - 1].map;
+}
+
+int sbi_virq_map_init(u32 channel_id, u32 init_virq_cap)
+{
+	if (channel_id == 0)
+		return virq_map_init_one(&g_virq_map, init_virq_cap);
+
+	/*
+	 * Do not re-initialize g_virq_maps_lock here; it is statically
+	 * initialized and may already be held by another hart.
+	 */
+	return virq_map_get(channel_id, true, init_virq_cap) ?
+	       SBI_OK : SBI_ENOMEM;
+}
+
+int sbi_virq_map_one(u32 channel_id, u32 chip_uid, u32 hwirq,
+ bool allow_identity, u32 identity_limit,
+ u32 *out_virq)
+{
+	u32 idx, virq = 0;
+	bool identity_mapped = false;
+	int rc;
+	struct sbi_virq_map *m;
+
+	m = virq_map_get(channel_id, true, 0);
+	if (!m)
+		return SBI_ENOMEM;
+
+	spin_lock(&m->lock);
+	/* already mapped? */
+	rc = forward_find_idx(m, chip_uid, hwirq, &idx);
+	if (!rc) {
+		*out_virq = m->nodes[idx].virq;
+		spin_unlock(&m->lock);
+		return 0;
+	}
+
+	/* ensure vector capacity for new node */
+	rc = vec_ensure_cap(m, m->nodes_cnt + 1U);
+	if (rc) {
+		spin_unlock(&m->lock);
+		return rc;
+	}
+
+	/* optional identity mapping */
+	if (allow_identity && hwirq < identity_limit) {
+		/* ensure bitmap covers this virq */
+		if (hwirq >= m->bmap_nbits) {
+			u32 new_nbits = m->bmap_nbits;
+
+			while (new_nbits <= hwirq)
+				new_nbits <<= 1;
+			rc = bmap_grow(m, new_nbits);
+			if (rc) {
+				spin_unlock(&m->lock);
+				return rc;
+			}
+		}
+
+		rc = bmap_alloc_specific(m, hwirq);
+		if (!rc) {
+			virq = hwirq;
+			identity_mapped = true;
+		} else if (rc != SBI_EALREADY) {
+			spin_unlock(&m->lock);
+			return rc;
+		}
+	}
+
+	/* allocate a new virq if identity was not taken (virq 0 is valid) */
+	if (!identity_mapped) {
+ rc = bmap_alloc_one(m, &virq);
+ if (rc == SBI_ENOSPC) {
+ rc = bmap_grow(m, m->bmap_nbits << 1);
+ if (rc) {
+ spin_unlock(&m->lock);
+ return rc;
+ }
+ rc = bmap_alloc_one(m, &virq);
+ }
+ if (rc) {
+ spin_unlock(&m->lock);
+ return rc;
+ }
+ }
+
+ /* install reverse mapping */
+ {
+ struct virq_entry *e = rev_get_or_alloc(m, virq);
+
+ if (!e) {
+ bmap_free_one(m, virq);
+ spin_unlock(&m->lock);
+ return SBI_ENOMEM;
+ }
+ e->chip_uid = chip_uid;
+ e->hwirq = hwirq;
+ }
+
+ /* append forward node */
+ m->nodes[m->nodes_cnt].chip_uid = chip_uid;
+ m->nodes[m->nodes_cnt].hwirq = hwirq;
+ m->nodes[m->nodes_cnt].virq = virq;
+ m->nodes_cnt++;
+
+ *out_virq = virq;
+ spin_unlock(&m->lock);
+
+ return SBI_OK;
+}
+
+int sbi_virq_map_set(u32 channel_id, u32 chip_uid, u32 hwirq, u32 virq)
+{
+ struct sbi_virq_map *m;
+ u32 idx;
+ int rc;
+
+ m = virq_map_get(channel_id, true, virq + 1U);
+ if (!m)
+ return SBI_ENOMEM;
+
+ spin_lock(&m->lock);
+ rc = forward_find_idx(m, chip_uid, hwirq, &idx);
+ if (!rc) {
+ spin_unlock(&m->lock);
+ return (m->nodes[idx].virq == virq) ? SBI_OK : SBI_EALREADY;
+ }
+
+ if (virq >= m->bmap_nbits) {
+ u32 new_nbits = m->bmap_nbits;
+
+ while (new_nbits <= virq)
+ new_nbits <<= 1;
+ rc = bmap_grow(m, new_nbits);
+ if (rc) {
+ spin_unlock(&m->lock);
+ return rc;
+ }
+ }
+
+ rc = bmap_alloc_specific(m, virq);
+ if (rc == SBI_EALREADY) {
+ struct virq_entry *e = rev_get_existing(m, virq);
+
+ if (!e || e->chip_uid != chip_uid || e->hwirq != hwirq) {
+ spin_unlock(&m->lock);
+ return SBI_EALREADY;
+ }
+
+ spin_unlock(&m->lock);
+ return SBI_OK;
+ } else if (rc) {
+ spin_unlock(&m->lock);
+ return rc;
+ }
+
+ rc = vec_ensure_cap(m, m->nodes_cnt + 1U);
+ if (rc) {
+ spin_unlock(&m->lock);
+ return rc;
+ }
+
+ {
+ struct virq_entry *e = rev_get_or_alloc(m, virq);
+
+ if (!e) {
+ bmap_free_one(m, virq);
+ spin_unlock(&m->lock);
+ return SBI_ENOMEM;
+ }
+ e->chip_uid = chip_uid;
+ e->hwirq = hwirq;
+ }
+
+ m->nodes[m->nodes_cnt].chip_uid = chip_uid;
+ m->nodes[m->nodes_cnt].hwirq = hwirq;
+ m->nodes[m->nodes_cnt].virq = virq;
+ m->nodes_cnt++;
+ spin_unlock(&m->lock);
+
+ return SBI_OK;
+}
+
+int sbi_virq_map_ensure_cap(u32 channel_id, u32 min_virq_cap)
+{
+ struct sbi_virq_map *m;
+ u32 new_nbits;
+ int rc = SBI_OK;
+
+ if (min_virq_cap < 8U)
+ min_virq_cap = 8U;
+
+ if (channel_id == 0) {
+ m = &g_virq_map;
+ if (!m->bmap)
+ return SBI_EINVAL;
+ } else {
+ m = virq_map_get(channel_id, true, min_virq_cap);
+ if (!m)
+ return SBI_ENOMEM;
+ }
+
+ if (m->bmap_nbits >= min_virq_cap)
+ return SBI_OK;
+
+ spin_lock(&m->lock);
+ new_nbits = m->bmap_nbits ? m->bmap_nbits : 8U;
+ while (new_nbits < min_virq_cap)
+ new_nbits <<= 1;
+ rc = bmap_grow(m, new_nbits);
+ spin_unlock(&m->lock);
+
+ return rc;
+}
+
+int sbi_virq_hwirq2virq(u32 channel_id, u32 chip_uid, u32 hwirq,
+ u32 *out_virq)
+{
+ u32 idx;
+ int rc;
+ struct sbi_virq_map *m;
+
+ m = virq_map_get(channel_id, false, 0);
+ if (!m)
+ return SBI_ENOENT;
+
+ spin_lock(&m->lock);
+ rc = forward_find_idx(m, chip_uid, hwirq, &idx);
+ if (!rc)
+ *out_virq = m->nodes[idx].virq;
+ spin_unlock(&m->lock);
+
+ return rc;
+}
+
+int sbi_virq_virq2hwirq(u32 channel_id, u32 virq,
+ u32 *out_chip_uid, u32 *out_hwirq)
+{
+ struct virq_entry *e;
+ struct sbi_virq_map *m;
+
+ m = virq_map_get(channel_id, false, 0);
+ if (!m)
+ return SBI_EINVAL;
+
+ spin_lock(&m->lock);
+
+ if (virq >= m->bmap_nbits ||
+ !bitmap_test(m->bmap, (int)virq)) {
+ spin_unlock(&m->lock);
+ return SBI_EINVAL;
+ }
+
+ e = rev_get_existing(m, virq);
+ if (!e) {
+ spin_unlock(&m->lock);
+ return SBI_EINVAL;
+ }
+
+ *out_chip_uid = e->chip_uid;
+ *out_hwirq = e->hwirq;
+
+ spin_unlock(&m->lock);
+
+ return SBI_OK;
+}
+
+int sbi_virq_unmap_one(u32 virq)
+{
+ struct virq_entry *e;
+ u32 idx, last;
+ int rc;
+ struct sbi_virq_map *m = &g_virq_map;
+
+ spin_lock(&m->lock);
+
+ if (virq >= m->bmap_nbits ||
+ !bitmap_test(m->bmap, (int)virq)) {
+ spin_unlock(&m->lock);
+ return SBI_EINVAL;
+ }
+
+ e = rev_get_existing(m, virq);
+ if (!e) {
+ spin_unlock(&m->lock);
+ return SBI_EINVAL;
+ }
+
+ /* find forward node corresponding to this virq (linear) */
+ rc = SBI_ENOENT;
+ for (idx = 0; idx < m->nodes_cnt; idx++) {
+ if (m->nodes[idx].virq == virq) {
+ /* optionally also check endpoint matches e */
+ rc = 0;
+ break;
+ }
+ }
+ if (rc) {
+ /* inconsistent state */
+ spin_unlock(&m->lock);
+ return SBI_EINVAL;
+ }
+
+ /* remove node: swap with last */
+ last = m->nodes_cnt - 1U;
+ if (idx != last)
+ m->nodes[idx] = m->nodes[last];
+ m->nodes_cnt--;
+
+ /* clear reverse + free virq id */
+ rev_clear(m, virq);
+ bmap_free_one(m, virq);
+
+ spin_unlock(&m->lock);
+
+ return SBI_OK;
+}
+
+static void virq_map_uninit_one(struct sbi_virq_map *m)
+{
+ u32 i;
+
+ spin_lock(&m->lock);
+
+ /* free reverse chunks */
+ if (m->chunks) {
+ for (i = 0; i < m->chunks_cap; i++) {
+ if (m->chunks[i])
+ sbi_free(m->chunks[i]);
+ }
+ sbi_free(m->chunks);
+ m->chunks = NULL;
+ m->chunks_cap = 0;
+ }
+
+ /* free forward vector */
+ if (m->nodes) {
+ sbi_free(m->nodes);
+ m->nodes = NULL;
+ m->nodes_cnt = 0;
+ m->nodes_cap = 0;
+ }
+
+ /* free bitmap */
+ if (m->bmap) {
+ sbi_free(m->bmap);
+ m->bmap = NULL;
+ m->bmap_nbits = 0;
+ }
+
+ spin_unlock(&m->lock);
+}
+
+void sbi_virq_map_uninit(void)
+{
+ u32 i;
+
+ virq_map_uninit_one(&g_virq_map);
+
+ spin_lock(&g_virq_maps_lock);
+ for (i = 0; i < g_virq_maps_cnt; i++)
+ virq_map_uninit_one(&g_virq_maps[i].map);
+ if (g_virq_maps) {
+ sbi_free(g_virq_maps);
+ g_virq_maps = NULL;
+ g_virq_maps_cnt = 0;
+ g_virq_maps_cap = 0;
+ }
+ spin_unlock(&g_virq_maps_lock);
+}
+
+int sbi_virq_enqueue(struct sbi_virq_courier_binding *c)
+{
+ struct sbi_domain_virq_state *st;
+
+	if (!c || !c->dom || c->virq == VIRQ_INVALID)
+ return SBI_EINVAL;
+
+ st = domain_virq_thishart(c->dom);
+ if (!st)
+ return SBI_ENODEV;
+
+ spin_lock(&st->lock);
+ if (q_full(st)) {
+ spin_unlock(&st->lock);
+ return SBI_ENOSPC;
+ }
+
+ st->q[st->tail].virq = c->virq;
+ st->q[st->tail].channel_id = c->channel_id;
+ st->q[st->tail].chip = c->chip;
+ st->tail = (st->tail + 1) % VIRQ_QSIZE;
+ spin_unlock(&st->lock);
+
+ return SBI_OK;
+}
+
+u32 sbi_virq_pop_thishart(void)
+{
+ struct sbi_domain *dom = sbi_domain_thishart_ptr();
+ struct sbi_domain_virq_state *st;
+ u32 virq = VIRQ_INVALID;
+
+ if (!dom)
+ return VIRQ_INVALID;
+
+ st = domain_virq_thishart(dom);
+ if (!st)
+ return VIRQ_INVALID;
+
+ spin_lock(&st->lock);
+ if (!q_empty(st)) {
+ virq = st->q[st->head].virq;
+ st->last_pop_virq = virq;
+ st->last_pop_channel_id = st->q[st->head].channel_id;
+ st->last_pop_chip = st->q[st->head].chip;
+ st->head = (st->head + 1) % VIRQ_QSIZE;
+	}
+ spin_unlock(&st->lock);
+
+ if (virq == VIRQ_INVALID) {
+ if (sbi_irqchip_notify_smode_get())
+ sbi_irqchip_notify_smode_clear();
+ }
+
+ return virq;
+}
+
+void sbi_virq_complete_thishart(u32 virq)
+{
+ struct sbi_domain *dom = sbi_domain_thishart_ptr();
+ struct sbi_domain_virq_state *st;
+ u32 hwirq;
+ u32 chip_uid;
+ u32 channel_id;
+ struct sbi_irqchip_device *chip;
+ bool drained = false;
+
+ if (virq == VIRQ_INVALID)
+ return;
+
+ if (!dom)
+ return;
+
+ st = domain_virq_thishart(dom);
+ if (!st)
+ return;
+
+ spin_lock(&st->lock);
+ channel_id = st->last_pop_channel_id;
+ chip = st->last_pop_chip;
+ if (st->last_pop_virq == virq) {
+ st->last_pop_virq = 0;
+ st->last_pop_channel_id = 0;
+ st->last_pop_chip = NULL;
+ }
+ drained = q_empty(st);
+ spin_unlock(&st->lock);
+
+ if (!chip)
+ return;
+
+	if (sbi_virq_virq2hwirq(channel_id, virq, &chip_uid, &hwirq))
+		return;	/* unknown virq: hwirq would be uninitialized */
+	(void)chip_uid;
+ if (chip->hwirq_eoi)
+ chip->hwirq_eoi(chip, hwirq);
+ sbi_irqchip_unmask_hwirq(chip, hwirq);
+
+ if (drained) {
+ if (sbi_irqchip_notify_smode_get())
+ sbi_irqchip_notify_smode_clear();
+ sbi_virq_return_to_prev_if_needed();
+ }
+}
+
+void sbi_virq_return_to_prev_if_needed(void)
+{
+ struct sbi_domain *dom = sbi_domain_thishart_ptr();
+ struct sbi_domain_virq_state *st;
+ bool do_return = false;
+
+ if (!dom)
+ return;
+
+ st = domain_virq_thishart(dom);
+ if (!st)
+ return;
+
+ spin_lock(&st->lock);
+ if (st->return_to_prev && q_empty(st)) {
+ st->return_to_prev = false;
+ do_return = true;
+ }
+ spin_unlock(&st->lock);
+
+ if (!do_return)
+ return;
+ sbi_domain_context_request_return_to_prev();
+}
+
+int sbi_virq_courier_handler(u32 hwirq, void *opaque)
+{
+ struct sbi_virq_courier_ctx *ctx =
+ (struct sbi_virq_courier_ctx *)opaque;
+ struct sbi_domain *dom;
+ struct sbi_virq_courier_binding courier;
+ u32 channel_id = 0;
+ u32 virq = 0;
+ int rc;
+ struct sbi_domain *curr_dom;
+
+ if (!ctx || !ctx->chip)
+ return SBI_EINVAL;
+
+ /* Route purely by HWIRQ -> Domain/channel rules (from FDT). */
+ rc = sbi_virq_route_lookup(hwirq, &dom, &channel_id);
+ if (rc || !dom)
+ return SBI_EINVAL;
+
+ curr_dom = sbi_domain_thishart_ptr();
+
+ /* Allocate/Get a stable VIRQ for (chip_uid, hwirq). */
+ rc = sbi_virq_map_one(channel_id, ctx->chip->id, hwirq,
+ false, 0, &virq);
+ if (rc)
+ return rc;
+
+ /*
+ * Mask to avoid level-trigger storm before S-mode clears device source.
+ * S-mode will call sbi_virq_complete_thishart(virq) to unmask.
+ */
+ sbi_irqchip_mask_hwirq(ctx->chip, hwirq);
+
+ courier.dom = dom;
+ courier.chip = ctx->chip;
+ courier.channel_id = channel_id;
+ courier.virq = virq;
+
+ rc = sbi_virq_enqueue(&courier);
+ if (rc) {
+ /* enqueue failed; re-enable to avoid deadlock */
+ sbi_irqchip_unmask_hwirq(ctx->chip, hwirq);
+ return rc;
+ }
+
+ /*
+ * Notify S-mode on notification rising edge.
+ *
+ * If the target is the current domain, operate on the live CSR.
+ * Otherwise, set the pending bit in the target domain context
+ * before switching (covers first-entry). After switching, set the
+ * live CSR only if needed (covers already-initialized targets).
+ */
+ if (dom != curr_dom) {
+ (void)sbi_domain_context_pending_notify_smode(
+ dom, current_hartindex());
+
+ /* Mark return_to_prev for VIRQ-driven domain switch. */
+ virq_set_domain_return_flag(dom, true);
+ rc = sbi_domain_context_enter(dom);
+ if (rc) {
+ /* Switch failed; do not defer EOI */
+ sbi_irqchip_unmask_hwirq(ctx->chip, hwirq);
+ if (ctx->chip->hwirq_eoi)
+ ctx->chip->hwirq_eoi(ctx->chip, hwirq);
+ return SBI_OK;
+ }
+
+		/*
+		 * If the domain was already initialized,
+		 * sbi_domain_context_enter() returns with CSR_SIP
+		 * reflecting dom_ctx->sip. For robustness, set the live
+		 * notify bit if it is still clear.
+		 */
+ if (!sbi_irqchip_notify_smode_get()) {
+ rc = sbi_irqchip_notify_smode_set();
+ if (rc) {
+ /*
+ * notification failed; re-enable to avoid
+ * deadlock
+ */
+ sbi_irqchip_unmask_hwirq(ctx->chip, hwirq);
+ return rc;
+ }
+ }
+ } else if (!sbi_irqchip_notify_smode_get()) {
+ rc = sbi_irqchip_notify_smode_set();
+ if (rc) {
+ /* notification failed; re-enable to avoid deadlock */
+ sbi_irqchip_unmask_hwirq(ctx->chip, hwirq);
+ return rc;
+ }
+ }
+
+ /*
+ * Return SBI_EALREADY to defer EOI until VIRQ COMPLETE so S-mode
+ * notification can be delivered to the target domain.
+ */
+ return SBI_EALREADY;
+}
+
+int sbi_virq_domain_init(struct sbi_domain *dom)
+{
+ struct sbi_domain_virq_priv *p;
+ u32 i, k, nharts, st_count;
+ struct sbi_domain_virq_state *st_base;
+ size_t alloc_size;
+
+ if (!dom)
+ return SBI_EINVAL;
+
+ if (dom->virq_priv)
+ return SBI_OK;
+
+ nharts = sbi_virq_platform_hart_count();
+ st_count = dom->possible_harts ?
+ (u32)sbi_hartmask_weight(dom->possible_harts) : nharts;
+
+ alloc_size = sizeof(*p) +
+ nharts * sizeof(p->st_by_hart[0]) +
+ st_count * sizeof(struct sbi_domain_virq_state);
+ p = sbi_zalloc(alloc_size);
+ if (!p)
+ return SBI_ENOMEM;
+
+ p->nharts = nharts;
+ p->st_count = st_count;
+ st_base = (struct sbi_domain_virq_state *)(p->st_by_hart + nharts);
+
+ if (!dom->possible_harts) {
+ for (i = 0; i < nharts; i++) {
+ p->st_by_hart[i] = &st_base[i];
+ virq_state_init(p->st_by_hart[i]);
+ }
+ } else {
+ for (i = 0; i < nharts; i++)
+ p->st_by_hart[i] = NULL;
+ k = 0;
+ sbi_hartmask_for_each_hartindex(i, dom->possible_harts) {
+ if (k >= st_count)
+ break;
+ p->st_by_hart[i] = &st_base[k++];
+ virq_state_init(p->st_by_hart[i]);
+ }
+ }
+ dom->virq_priv = p;
+
+ return SBI_OK;
+}
+
+void sbi_virq_domain_exit(struct sbi_domain *dom)
+{
+ if (!dom || !dom->virq_priv)
+ return;
+
+ sbi_free(dom->virq_priv);
+ dom->virq_priv = NULL;
+}
+
+int sbi_virq_init(u32 init_virq_cap)
+{
+ int rc = SBI_OK;
+
+ if (g_virq_inited)
+ return SBI_EALREADY;
+
+ rc = sbi_virq_map_init(0, init_virq_cap);
+ if (rc)
+ return rc;
+
+ SPIN_LOCK_INIT(g_virq_maps_lock);
+ SPIN_LOCK_INIT(g_router.lock);
+ sbi_virq_route_reset();
+ g_virq_inited = true;
+ return rc;
+}
+
+bool sbi_virq_is_inited(void)
+{
+ return g_virq_inited;
+}
--
2.25.1
--
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi
* [PATCH 04/10] lib: sbi: Add VIRQ ecall extension
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (2 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 03/10] lib: sbi: Add Virtual IRQ (VIRQ) subsystem Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 05/10] lib: sbi: domain: add domain lookup by name Raymond Mao
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Add vendor SBI extension ecall for VIRQ.
This allows the S-mode payload to pop/complete the next pending VIRQ
that has been couriered into the current domain.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi/sbi_ecall_interface.h | 26 ++++++++++++++
lib/sbi/Kconfig | 10 ++++++
lib/sbi/objects.mk | 3 ++
lib/sbi/sbi_ecall_virq.c | 56 +++++++++++++++++++++++++++++++
4 files changed, 95 insertions(+)
create mode 100644 lib/sbi/sbi_ecall_virq.c
diff --git a/include/sbi/sbi_ecall_interface.h b/include/sbi/sbi_ecall_interface.h
index 9a776f79..37937a0c 100644
--- a/include/sbi/sbi_ecall_interface.h
+++ b/include/sbi/sbi_ecall_interface.h
@@ -126,6 +126,32 @@
#define SBI_EXT_FWFT_SET 0x0
#define SBI_EXT_FWFT_GET 0x1
+#ifdef CONFIG_SBI_ECALL_VIRQ
+
+/* Vendor extension base range is defined by the SBI spec. Choose a private ID. */
+#define SBI_EXT_VIRQ 0x0900524d
+
+/* Function IDs for SBI_EXT_VIRQ */
+#define SBI_EXT_VIRQ_POP 0
+#define SBI_EXT_VIRQ_COMPLETE 1
+
+/*
+ * SBI_EXT_VIRQ_POP
+ * Returns:
+ * a0: SBI error code (0 for success)
+ * a1: next pending VIRQ (VIRQ_INVALID if none pending)
+ */
+
+/*
+ * SBI_EXT_VIRQ_COMPLETE
+ * Input:
+ * a0: VIRQ to complete
+ * Returns:
+ * a0: SBI error code (0 for success)
+ */
+
+#endif
+
enum sbi_fwft_feature_t {
SBI_FWFT_MISALIGNED_EXC_DELEG = 0x0,
SBI_FWFT_LANDING_PAD = 0x1,
diff --git a/lib/sbi/Kconfig b/lib/sbi/Kconfig
index c6cc04bc..cbb74640 100644
--- a/lib/sbi/Kconfig
+++ b/lib/sbi/Kconfig
@@ -69,4 +69,14 @@ config SBI_ECALL_SSE
config SBI_ECALL_MPXY
bool "MPXY extension"
default y
+
+config SBI_ECALL_VIRQ
+ bool "VIRQ extension"
+ default y
+ help
+ Enable the OpenSBI VIRQ ecall extension.
+ This extension allows an S-mode payload to pop a pending
+ virtual interrupt and complete its deferred host interrupt
+ handling after the payload has consumed the event.
+
endmenu
diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
index 184bf173..ea816e92 100644
--- a/lib/sbi/objects.mk
+++ b/lib/sbi/objects.mk
@@ -64,6 +64,9 @@ libsbi-objs-$(CONFIG_SBI_ECALL_SSE) += sbi_ecall_sse.o
carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_MPXY) += ecall_mpxy
libsbi-objs-$(CONFIG_SBI_ECALL_MPXY) += sbi_ecall_mpxy.o
+carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_VIRQ) += ecall_virq
+libsbi-objs-$(CONFIG_SBI_ECALL_VIRQ) += sbi_ecall_virq.o
+
libsbi-objs-y += sbi_bitmap.o
libsbi-objs-y += sbi_bitops.o
libsbi-objs-y += sbi_console.o
diff --git a/lib/sbi/sbi_ecall_virq.c b/lib/sbi/sbi_ecall_virq.c
new file mode 100644
index 00000000..a84a83d7
--- /dev/null
+++ b/lib/sbi/sbi_ecall_virq.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: BSD-2-Clause
+/*
+ * Copyright (c) 2026 RISCstar Solutions.
+ *
+ * Author: Raymond Mao <raymond.mao@riscstar.com>
+ */
+
+#include <sbi/sbi_console.h>
+#include <sbi/sbi_ecall.h>
+#include <sbi/sbi_ecall_interface.h>
+#include <sbi/sbi_error.h>
+#include <sbi/sbi_trap.h>
+#include <sbi/sbi_virq.h>
+
+static int sbi_ecall_virq_handler(unsigned long extid,
+ unsigned long funcid,
+ struct sbi_trap_regs *regs,
+ struct sbi_ecall_return *out)
+{
+ (void)extid;
+
+ switch (funcid) {
+ case SBI_EXT_VIRQ_POP:
+ out->value = (unsigned long)sbi_virq_pop_thishart();
+ return SBI_OK;
+	case SBI_EXT_VIRQ_COMPLETE: {
+		u32 virq = (u32)regs->a0;
+
+		sbi_virq_complete_thishart(virq);
+		return SBI_OK;
+	}
+ default:
+ return SBI_ENOTSUPP;
+ }
+}
+
+struct sbi_ecall_extension ecall_virq;
+
+static int sbi_ecall_virq_register_extensions(void)
+{
+	return sbi_ecall_register_extension(&ecall_virq);
+}
+
+struct sbi_ecall_extension ecall_virq = {
+ .name = "virq",
+ .extid_start = SBI_EXT_VIRQ,
+ .extid_end = SBI_EXT_VIRQ,
+ .register_extensions = sbi_ecall_virq_register_extensions,
+ .handle = sbi_ecall_virq_handler,
+};
--
2.25.1
* [PATCH 05/10] lib: sbi: domain: add domain lookup by name
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (3 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 04/10] lib: sbi: Add VIRQ ecall extension Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 06/10] lib: utils: fdt: parse sysirq routing from DT Raymond Mao
` (4 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Provide a helper to resolve a domain by its DT node name, used by
sysirq DT parsing.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi/sbi_domain.h | 3 +++
lib/sbi/sbi_domain.c | 15 +++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/include/sbi/sbi_domain.h b/include/sbi/sbi_domain.h
index 7e288cd8..b7267da8 100644
--- a/include/sbi/sbi_domain.h
+++ b/include/sbi/sbi_domain.h
@@ -232,6 +232,9 @@ struct sbi_domain *sbi_hartindex_to_domain(u32 hartindex);
/** Update HART local pointer to point to specified domain */
void sbi_update_hartindex_to_domain(u32 hartindex, struct sbi_domain *dom);
+/** Find domain by DT node name (domain name) */
+struct sbi_domain *sbi_domain_find_by_name(const char *name);
+
/** Get pointer to sbi_domain for current HART */
#define sbi_domain_thishart_ptr() \
sbi_hartindex_to_domain(current_hartindex())
diff --git a/lib/sbi/sbi_domain.c b/lib/sbi/sbi_domain.c
index 2a846eea..c33f2b3c 100644
--- a/lib/sbi/sbi_domain.c
+++ b/lib/sbi/sbi_domain.c
@@ -60,6 +60,21 @@ void sbi_update_hartindex_to_domain(u32 hartindex, struct sbi_domain *dom)
sbi_scratch_write_type(scratch, void *, domain_hart_ptr_offset, dom);
}
+struct sbi_domain *sbi_domain_find_by_name(const char *name)
+{
+ struct sbi_domain *dom;
+
+ if (!name)
+ return NULL;
+
+ sbi_domain_for_each(dom) {
+ if (!sbi_strncmp(dom->name, name, sizeof(dom->name)))
+ return dom;
+ }
+
+ return NULL;
+}
+
bool sbi_domain_is_assigned_hart(const struct sbi_domain *dom, u32 hartindex)
{
bool ret;
--
2.25.1
* [PATCH 06/10] lib: utils: fdt: parse sysirq routing from DT
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (4 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 05/10] lib: sbi: domain: add domain lookup by name Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 07/10] lib: utils: irqchip: derive APLIC targets from sysirq nodes Raymond Mao
` (3 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Initialize the VIRQ framework and parse mpxy-sysirq nodes under
/chosen/opensbi-domains, pre-sizing per-channel VIRQ maps and adding
HWIRQ routes from interrupts-extended.
Also mark sysirq target domains to allow SEIP-based VIRQ notification.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi_utils/fdt/fdt_helper.h | 17 +++++
lib/utils/fdt/fdt_domain.c | 119 ++++++++++++++++++++++++++++-
lib/utils/fdt/fdt_helper.c | 49 ++++++++++++
3 files changed, 183 insertions(+), 2 deletions(-)
diff --git a/include/sbi_utils/fdt/fdt_helper.h b/include/sbi_utils/fdt/fdt_helper.h
index 04c850cc..e49a5bca 100644
--- a/include/sbi_utils/fdt/fdt_helper.h
+++ b/include/sbi_utils/fdt/fdt_helper.h
@@ -12,6 +12,7 @@
#include <sbi/sbi_types.h>
#include <sbi/sbi_domain.h>
+#include <sbi/sbi_irqchip.h>
struct fdt_match {
const char *compatible;
@@ -38,6 +39,22 @@ int fdt_parse_phandle_with_args(const void *fdt, int nodeoff,
const char *prop, const char *cells_prop,
int index, struct fdt_phandle_args *out_args);
+/*
+ * Parse one entry from "interrupts-extended" and return irqchip device,
+ * hwirq, and optional flags.
+ *
+ * @index: entry index within interrupts-extended
+ * @out_flags_count: in/out, on input capacity of out_flags array;
+ * on output number of flags returned (or total flags
+ * if out_flags is NULL).
+ */
+int fdt_parse_interrupts_extended_entry(const void *fdt, int nodeoff,
+ int index,
+ struct sbi_irqchip_device **out_chip,
+ u32 *out_hwirq,
+ u32 *out_flags,
+ u32 *out_flags_count);
+
int fdt_get_node_addr_size(const void *fdt, int node, int index,
uint64_t *addr, uint64_t *size);
diff --git a/lib/utils/fdt/fdt_domain.c b/lib/utils/fdt/fdt_domain.c
index b2fa8633..4a75f25a 100644
--- a/lib/utils/fdt/fdt_domain.c
+++ b/lib/utils/fdt/fdt_domain.c
@@ -10,11 +10,14 @@
#include <libfdt.h>
#include <libfdt_env.h>
+#include <sbi/sbi_console.h>
#include <sbi/sbi_domain.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hartmask.h>
#include <sbi/sbi_heap.h>
+#include <sbi/sbi_irqchip.h>
#include <sbi/sbi_scratch.h>
+#include <sbi/sbi_virq.h>
#include <sbi_utils/fdt/fdt_domain.h>
#include <sbi_utils/fdt/fdt_helper.h>
@@ -304,6 +307,7 @@ static int __fdt_parse_region(const void *fdt, int domain_offset,
return 0;
}
+
static int __fdt_parse_domain(const void *fdt, int domain_offset, void *opaque)
{
u32 val32;
@@ -511,6 +515,113 @@ fail_free_domain:
return err;
}
+static int __fdt_parse_mpxy_sysirq_node(const void *fdt, int nodeoff)
+{
+ const fdt32_t *val;
+ int len, rc, doff;
+ u32 channel_id;
+ u32 index;
+ struct sbi_domain *dom;
+
+ if (!fdt || nodeoff < 0)
+ return SBI_EINVAL;
+
+ val = fdt_getprop(fdt, nodeoff, "opensbi,mpxy-channel-id", &len);
+ if (!val || len < (int)sizeof(fdt32_t)) {
+ sbi_printf("[SYSIRQ] missing opensbi,mpxy-channel-id\n");
+ return SBI_EINVAL;
+ }
+ channel_id = fdt32_to_cpu(*val);
+
+ val = fdt_getprop(fdt, nodeoff, "opensbi,domain", &len);
+ if (!val || len < (int)sizeof(fdt32_t)) {
+ sbi_printf("[SYSIRQ] missing opensbi,domain\n");
+ return SBI_EINVAL;
+ }
+
+ doff = fdt_node_offset_by_phandle(fdt, fdt32_to_cpu(*val));
+ if (doff < 0)
+ return doff;
+
+ dom = sbi_domain_find_by_name(fdt_get_name(fdt, doff, NULL));
+ if (!dom) {
+ sbi_printf("[SYSIRQ] domain not found for node %s\n",
+ fdt_get_name(fdt, doff, NULL));
+ return SBI_ENOENT;
+ }
+ dom->virq_seip_notify = true;
+
+ /* Pre-allocate VIRQ map based on interrupts-extended count */
+ for (index = 0; ; index++) {
+ rc = fdt_parse_interrupts_extended_entry(fdt, nodeoff, index,
+ NULL, NULL,
+ NULL, NULL);
+ if (rc == SBI_ENOENT)
+ break;
+ if (rc)
+ return rc;
+ }
+
+ if (!sbi_virq_is_inited())
+ rc = sbi_virq_init(index);
+ else
+ rc = sbi_virq_map_ensure_cap(channel_id, index);
+ if (rc)
+ return rc;
+
+ for (index = 0; ; index++) {
+ struct sbi_irqchip_device *chip = NULL;
+ u32 hwirq = 0;
+
+ rc = fdt_parse_interrupts_extended_entry(fdt, nodeoff, index,
+ &chip, &hwirq,
+ NULL, NULL);
+ if (rc == SBI_ENOENT)
+ break;
+ if (rc)
+ return rc;
+ if (!chip)
+ return SBI_ENODEV;
+
+ rc = sbi_virq_map_set(channel_id, chip->id, hwirq, index);
+ if (rc)
+ return rc;
+
+ rc = sbi_virq_route_add(dom, hwirq, channel_id);
+ if (rc)
+ return rc;
+ }
+
+ return SBI_OK;
+}
+
+static int __fdt_parse_mpxy_sysirq_nodes(const void *fdt)
+{
+ int poffset, noff, rc;
+
+ if (!fdt)
+ return SBI_EINVAL;
+
+ poffset = fdt_path_offset(fdt, "/chosen");
+ if (poffset < 0)
+ return 0;
+ poffset = fdt_node_offset_by_compatible(fdt, poffset,
+ "opensbi,domain,config");
+ if (poffset < 0)
+ return 0;
+
+ fdt_for_each_subnode(noff, fdt, poffset) {
+ if (fdt_node_check_compatible(fdt, noff,
+ "opensbi,mpxy-sysirq"))
+ continue;
+ rc = __fdt_parse_mpxy_sysirq_node(fdt, noff);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
int fdt_domains_populate(const void *fdt)
{
const u32 *val;
@@ -550,6 +661,10 @@ int fdt_domains_populate(const void *fdt)
}
/* Iterate over each domain in FDT and populate details */
- return fdt_iterate_each_domain_ro(fdt, &cold_domain_offset,
- __fdt_parse_domain);
+ err = fdt_iterate_each_domain_ro(fdt, &cold_domain_offset,
+ __fdt_parse_domain);
+ if (err)
+ return err;
+
+ return __fdt_parse_mpxy_sysirq_nodes(fdt);
}
diff --git a/lib/utils/fdt/fdt_helper.c b/lib/utils/fdt/fdt_helper.c
index b57eae1a..3d7c4eec 100644
--- a/lib/utils/fdt/fdt_helper.c
+++ b/lib/utils/fdt/fdt_helper.c
@@ -10,6 +10,7 @@
#include <sbi/riscv_asm.h>
#include <sbi/sbi_console.h>
#include <sbi/sbi_hartmask.h>
+#include <sbi/sbi_irqchip.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_hart.h>
@@ -80,6 +81,54 @@ int fdt_parse_phandle_with_args(const void *fdt, int nodeoff,
return SBI_ENOENT;
}
+int fdt_parse_interrupts_extended_entry(const void *fdt, int nodeoff,
+ int index,
+ struct sbi_irqchip_device **out_chip,
+ u32 *out_hwirq,
+ u32 *out_flags,
+ u32 *out_flags_count)
+{
+ struct fdt_phandle_args args;
+ struct sbi_irqchip_device *chip;
+ u32 flags_cap = 0, flags_cnt = 0;
+ int rc, i;
+
+ if (!fdt || nodeoff < 0)
+ return SBI_EINVAL;
+
+ rc = fdt_parse_phandle_with_args(fdt, nodeoff, "interrupts-extended",
+ "#interrupt-cells", index, &args);
+ if (rc)
+ return rc;
+
+ if (args.args_count < 1)
+ return SBI_EINVAL;
+
+ if (out_hwirq)
+ *out_hwirq = args.args[0];
+
+ if (out_flags_count) {
+ flags_cap = *out_flags_count;
+ flags_cnt = (args.args_count > 1) ? (args.args_count - 1) : 0;
+ if (out_flags && flags_cap < flags_cnt)
+ flags_cnt = flags_cap;
+ if (out_flags) {
+ for (i = 0; i < (int)flags_cnt; i++)
+ out_flags[i] = args.args[i + 1];
+ }
+ *out_flags_count = flags_cnt;
+ }
+
+ if (out_chip) {
+ chip = sbi_irqchip_find_device((u32)args.node_offset);
+ if (!chip)
+ return SBI_ENODEV;
+ *out_chip = chip;
+ }
+
+ return 0;
+}
+
static int fdt_translate_address(const void *fdt, uint64_t reg, int parent,
uint64_t *addr)
{
--
2.25.1
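[Editor's note] The `out_flags`/`out_flags_count` in/out convention of `fdt_parse_interrupts_extended_entry()` above (capacity on input, count on output, total available when the array is NULL) can be sketched standalone. The function and data below are illustrative stand-ins for the cells parsed from "interrupts-extended":

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

/* Sketch of the flags-copy convention: on input *count is the capacity
 * of out[]; on output it is the number of flags actually returned, or
 * the total available when out is NULL (a query). Cell 0 of an
 * interrupts-extended entry is the hwirq, so flags start at cell 1. */
static void copy_flags(const u32 *cells, u32 ncells, u32 *out, u32 *count)
{
	u32 cap, n, i;

	if (!count)
		return;
	cap = *count;
	n = (ncells > 1) ? ncells - 1 : 0;	/* cell 0 is the hwirq */
	if (out && cap < n)
		n = cap;			/* clamp to caller capacity */
	if (out)
		for (i = 0; i < n; i++)
			out[i] = cells[i + 1];
	*count = n;
}
```

A caller can therefore query the total with a NULL array first, then allocate and fetch, matching the two-pass pattern used by `__fdt_parse_mpxy_sysirq_node()`.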
* [PATCH 07/10] lib: utils: irqchip: derive APLIC targets from sysirq nodes
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (5 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 06/10] lib: utils: fdt: parse sysirq routing from DT Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 08/10] lib: irqchip: support deferred completion and per-HWIRQ APLIC targets Raymond Mao
` (2 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Derive a per-HWIRQ target hartindex map from each sysirq node's
target domain boot hart, and store the result in aplic_data.
The irqchip driver uses this map to set the IDC target per HWIRQ
instead of hardcoding a specific target.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
include/sbi_utils/irqchip/aplic.h | 1 +
lib/utils/irqchip/fdt_irqchip_aplic.c | 103 ++++++++++++++++++++++++++
2 files changed, 104 insertions(+)
diff --git a/include/sbi_utils/irqchip/aplic.h b/include/sbi_utils/irqchip/aplic.h
index 3461d1c7..d6e088f8 100644
--- a/include/sbi_utils/irqchip/aplic.h
+++ b/include/sbi_utils/irqchip/aplic.h
@@ -47,6 +47,7 @@ struct aplic_data {
struct aplic_msicfg_data msicfg_smode;
struct aplic_delegate_data delegate[APLIC_MAX_DELEGATE];
u32 *idc_map;
+ u32 *hwirq_target_hartindex;
};
int aplic_cold_irqchip_init(struct aplic_data *aplic);
diff --git a/lib/utils/irqchip/fdt_irqchip_aplic.c b/lib/utils/irqchip/fdt_irqchip_aplic.c
index f9b567f5..cbbdf8d8 100644
--- a/lib/utils/irqchip/fdt_irqchip_aplic.c
+++ b/lib/utils/irqchip/fdt_irqchip_aplic.c
@@ -13,6 +13,7 @@
#include <sbi/sbi_console.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>
+#include <sbi/sbi_scratch.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/irqchip/fdt_irqchip.h>
#include <sbi_utils/irqchip/aplic.h>
@@ -54,6 +55,88 @@ static int irqchip_aplic_update_idc_map(const void *fdt, int nodeoff,
return 0;
}
+static u32 irqchip_aplic_domain_boot_hartindex(void *fdt, int domain_offset)
+{
+ int len, cpu_offset;
+ const fdt32_t *val;
+
+ val = fdt_getprop(fdt, domain_offset, "boot-hart", &len);
+ if (val && len >= 4) {
+ cpu_offset = fdt_node_offset_by_phandle(fdt,
+ fdt32_to_cpu(*val));
+ if (cpu_offset >= 0) {
+ u32 hartid;
+
+ if (!fdt_parse_hart_id(fdt, cpu_offset, &hartid)) {
+ u32 hidx = sbi_hartid_to_hartindex(hartid);
+
+ if (sbi_hartindex_valid(hidx))
+ return hidx;
+ }
+ }
+ }
+
+ return current_hartindex();
+}
+
+static void irqchip_aplic_fill_hwirq_targets_from_sysirq(const void *fdt,
+ int aplic_nodeoff,
+ struct aplic_data *pd)
+{
+ int chosen_off, nodeoff;
+ int len, rc, index;
+ const fdt32_t *val;
+ u32 boot_hartindex;
+
+ if (!fdt || aplic_nodeoff < 0 || !pd || !pd->hwirq_target_hartindex)
+ return;
+
+ chosen_off = fdt_path_offset(fdt, "/chosen/opensbi-domains");
+ if (chosen_off < 0)
+ return;
+
+ fdt_for_each_subnode(nodeoff, fdt, chosen_off) {
+ if (fdt_node_check_compatible(fdt, nodeoff,
+ "opensbi,mpxy-sysirq"))
+ continue;
+
+ val = fdt_getprop(fdt, nodeoff, "opensbi,domain", &len);
+ if (!val || len < 4)
+ continue;
+
+ rc = fdt_node_offset_by_phandle(fdt, fdt32_to_cpu(*val));
+ if (rc < 0)
+ continue;
+
+ boot_hartindex = irqchip_aplic_domain_boot_hartindex((void *)fdt,
+ rc);
+
+ for (index = 0; ; index++) {
+ struct fdt_phandle_args args;
+
+ rc = fdt_parse_phandle_with_args(fdt, nodeoff,
+ "interrupts-extended",
+ "#interrupt-cells",
+ index, &args);
+ if (rc)
+ break;
+ if (args.args_count < 1)
+ continue;
+ if (args.node_offset != aplic_nodeoff)
+ continue;
+
+ u32 hwirq = args.args[0];
+
+ if (!hwirq || hwirq > pd->num_source)
+ continue;
+
+ if (pd->hwirq_target_hartindex[hwirq] == -1U)
+ pd->hwirq_target_hartindex[hwirq] =
+ boot_hartindex;
+ }
+ }
+}
+
static int irqchip_aplic_cold_init(const void *fdt, int nodeoff,
const struct fdt_match *match)
{
@@ -85,6 +168,24 @@ static int irqchip_aplic_cold_init(const void *fdt, int nodeoff,
goto fail_free_idc_map;
}
+ /* Precompute target hartindex per HWIRQ from DT. */
+ if (pd->targets_mmode) {
+ u32 i;
+
+ pd->hwirq_target_hartindex =
+ sbi_zalloc(sizeof(*pd->hwirq_target_hartindex) *
+ (pd->num_source + 1));
+ if (!pd->hwirq_target_hartindex) {
+ rc = SBI_ENOMEM;
+ goto fail_free_idc_map;
+ }
+
+ for (i = 0; i <= pd->num_source; i++)
+ pd->hwirq_target_hartindex[i] = -1U;
+
+ irqchip_aplic_fill_hwirq_targets_from_sysirq(fdt, nodeoff, pd);
+ }
+
rc = aplic_cold_irqchip_init(pd);
if (rc)
goto fail_free_idc_map;
@@ -93,6 +194,8 @@ static int irqchip_aplic_cold_init(const void *fdt, int nodeoff,
return 0;
fail_free_idc_map:
+ if (pd->hwirq_target_hartindex)
+ sbi_free(pd->hwirq_target_hartindex);
if (pd->num_idc)
sbi_free(pd->idc_map);
fail_free_data:
--
2.25.1
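[Editor's note] The routing table filled by `irqchip_aplic_fill_hwirq_targets_from_sysirq()` above follows a "first route wins" rule (slots default to `-1U` and are only assigned once). A minimal sketch of that rule, with an illustrative sentinel name and sizes:

```c
#include <assert.h>

typedef unsigned int u32;

#define NO_TARGET ((u32)-1)	/* matches the -1U sentinel in the patch */

/* Sketch of the per-HWIRQ target table from patch 07: out-of-range
 * sources are ignored, and only the first sysirq route for a given
 * hwirq is kept (mirroring the `== -1U` check before assignment).
 * The table is indexed 1..num_source, as in aplic_data. */
static void route_hwirq(u32 *targets, u32 num_source, u32 hwirq,
			u32 hartindex)
{
	if (!hwirq || hwirq > num_source)
		return;				/* ignore invalid sources */
	if (targets[hwirq] == NO_TARGET)
		targets[hwirq] = hartindex;	/* first route wins */
}
```

This keeps the fallback in `aplic_hwirq_setup_target_idc_index()` simple: any slot still holding the sentinel falls back to the default IDC target.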
* [PATCH 08/10] lib: irqchip: support deferred completion and per-HWIRQ APLIC targets
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (6 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 07/10] lib: utils: irqchip: derive APLIC targets from sysirq nodes Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 09/10] lib: sbi: domain: ensure boot_hartid is assigned Raymond Mao
2026-05-14 22:57 ` [PATCH 10/10] docs: domain: document sysirq VIRQ mapping and routing rules Raymond Mao
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Lazily resolve the active irqchip provider for each hart when the
scratch-local provider is not yet populated.
Program APLIC target IDCs from the precomputed per-HWIRQ hart map,
and treat SBI_EALREADY as deferred completion so the normal process
path does not complete an interrupt that will be finished later by
the VIRQ flow.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
lib/sbi/sbi_irqchip.c | 21 ++++++++++++++
lib/utils/irqchip/aplic.c | 60 ++++++++++++++++++++++++++++++---------
2 files changed, 68 insertions(+), 13 deletions(-)
diff --git a/lib/sbi/sbi_irqchip.c b/lib/sbi/sbi_irqchip.c
index e022d534..45b2992f 100644
--- a/lib/sbi/sbi_irqchip.c
+++ b/lib/sbi/sbi_irqchip.c
@@ -45,11 +45,28 @@ struct sbi_irqchip_hart_data {
static unsigned long irqchip_hart_data_off;
static SBI_LIST_HEAD(irqchip_list);
+static struct sbi_irqchip_device *sbi_irqchip_find_hart_device(u32 hartindex)
+{
+ struct sbi_irqchip_device *chip;
+
+ sbi_list_for_each_entry(chip, &irqchip_list, node) {
+ if (!chip->process_hwirqs)
+ continue;
+ if (!sbi_hartmask_test_hartindex(hartindex, &chip->target_harts))
+ continue;
+ return chip;
+ }
+
+ return NULL;
+}
+
int sbi_irqchip_process(void)
{
struct sbi_irqchip_hart_data *hd;
hd = sbi_scratch_thishart_offset_ptr(irqchip_hart_data_off);
+ if (hd && !hd->chip)
+ hd->chip = sbi_irqchip_find_hart_device(current_hartindex());
if (!hd || !hd->chip || !hd->chip->process_hwirqs)
return SBI_ENODEV;
@@ -306,6 +323,8 @@ int sbi_irqchip_init(struct sbi_scratch *scratch, bool cold_boot)
}
hd = sbi_scratch_thishart_offset_ptr(irqchip_hart_data_off);
+ if (hd && !hd->chip)
+ hd->chip = sbi_irqchip_find_hart_device(current_hartindex());
if (hd && hd->chip && hd->chip->process_hwirqs)
csr_set(CSR_MIE, MIP_MEIP);
@@ -317,6 +336,8 @@ void sbi_irqchip_exit(struct sbi_scratch *scratch)
struct sbi_irqchip_hart_data *hd;
hd = sbi_scratch_thishart_offset_ptr(irqchip_hart_data_off);
+ if (hd && !hd->chip)
+ hd->chip = sbi_irqchip_find_hart_device(current_hartindex());
if (hd && hd->chip && hd->chip->process_hwirqs)
csr_clear(CSR_MIE, MIP_MEIP);
}
diff --git a/lib/utils/irqchip/aplic.c b/lib/utils/irqchip/aplic.c
index 82efdb71..77743685 100644
--- a/lib/utils/irqchip/aplic.c
+++ b/lib/utils/irqchip/aplic.c
@@ -306,6 +306,8 @@ static inline struct aplic_data *aplic_irqchip_to_data(struct sbi_irqchip_device
return container_of(chip, struct aplic_data, irqchip);
}
+static bool aplic_mmode_direct(const struct aplic_data *aplic);
+
static bool aplic_hwirq_delegated(const struct aplic_data *aplic, u32 hwirq)
{
u32 i;
@@ -441,10 +443,49 @@ static int aplic_hwirq_claim(struct sbi_irqchip_device *chip, u32 *hwirq)
return SBI_OK;
}
+static void aplic_set_target(struct aplic_data *aplic, u32 hwirq, u32 idc_index)
+{
+ unsigned long idc;
+
+ if (!aplic->addr || !hwirq || hwirq > aplic->num_source)
+ return;
+ if (aplic->num_idc <= idc_index)
+ return;
+
+ idc = aplic->addr + APLIC_IDC_BASE +
+ (unsigned long)idc_index * APLIC_IDC_SIZE;
+
+ writel((idc_index << APLIC_TARGET_HART_IDX_SHIFT) |
+ APLIC_DEFAULT_PRIORITY,
+ (void *)(aplic->addr + APLIC_TARGET_BASE + (hwirq - 1) * 4));
+
+ /* IDC delivery */
+ writel(APLIC_ENABLE_IDELIVERY, (void *)(idc + APLIC_IDC_IDELIVERY));
+ writel(APLIC_ENABLE_ITHRESHOLD, (void *)(idc + APLIC_IDC_ITHRESHOLD));
+}
+
+static int aplic_hwirq_setup_target_idc_index(struct aplic_data *aplic,
+ struct sbi_irqchip_device *chip,
+ u32 hwirq)
+{
+ u32 hartindex;
+ int idc_index;
+
+ if (aplic->hwirq_target_hartindex) {
+ hartindex = aplic->hwirq_target_hartindex[hwirq];
+ if (hartindex != -1U) {
+ idc_index = aplic_hartindex_to_idc_index(aplic, hartindex);
+ if (idc_index >= 0)
+ return idc_index;
+ }
+ }
+
+ return aplic_hwirq_target_idc_index(chip);
+}
+
static int aplic_hwirq_setup(struct sbi_irqchip_device *chip, u32 hwirq)
{
struct aplic_data *aplic = aplic_irqchip_to_data(chip);
- unsigned long idc;
int idc_index;
if (!hwirq || hwirq > aplic->num_source)
@@ -454,29 +495,19 @@ static int aplic_hwirq_setup(struct sbi_irqchip_device *chip, u32 hwirq)
if (aplic_hwirq_delegated(aplic, hwirq))
return SBI_ENOTSUPP;
- idc_index = aplic_hwirq_target_idc_index(chip);
+ idc_index = aplic_hwirq_setup_target_idc_index(aplic, chip, hwirq);
if (idc_index < 0)
return idc_index;
- idc = aplic->addr + APLIC_IDC_BASE + idc_index * APLIC_IDC_SIZE;
-
/* APLIC: sourcecfg/target/enable */
writel(APLIC_SOURCECFG_SM_LEVEL_HIGH,
(void *)(aplic->addr + APLIC_SOURCECFG_BASE + (hwirq - 1) * 4));
-
- writel(((u32)idc_index << APLIC_TARGET_HART_IDX_SHIFT) |
- APLIC_DEFAULT_PRIORITY,
- (void *)(aplic->addr + APLIC_TARGET_BASE + (hwirq - 1) * 4));
-
+ aplic_set_target(aplic, hwirq, (u32)idc_index);
writel(hwirq, (void *)(aplic->addr + APLIC_SETIENUM));
/* Direct mode for aia=aplic: DM=0 => don't set DM bit */
writel(aplic_domaincfg_value(), (void *)(aplic->addr + APLIC_DOMAINCFG));
- /* IDC delivery */
- writel(APLIC_ENABLE_IDELIVERY, (void *)(idc + APLIC_IDC_IDELIVERY));
- writel(APLIC_ENABLE_ITHRESHOLD, (void *)(idc + APLIC_IDC_ITHRESHOLD));
-
return SBI_OK;
}
@@ -502,6 +533,9 @@ static int aplic_process_hwirqs(struct sbi_irqchip_device *chip)
}
rc = sbi_irqchip_process_hwirq(chip, hwirq);
+ /* Deferred completion paths consume the IRQ without EOI here. */
+ if (rc == SBI_EALREADY)
+ return SBI_OK;
if (rc)
return rc;
}
--
2.25.1
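[Editor's note] The deferred-completion contract added to `aplic_process_hwirqs()` above (a handler returning SBI_EALREADY means "consumed, but EOI happens later in the VIRQ flow") can be sketched standalone. The error values and handler names below are illustrative stand-ins, not OpenSBI's actual constants:

```c
#include <assert.h>

typedef unsigned int u32;

#define DEMO_OK		0
#define DEMO_EALREADY	(-4)	/* illustrative: deferred-completion sentinel */
#define DEMO_EIO	(-5)	/* illustrative: a real error */

/* Sketch of the process-loop contract: the EALREADY sentinel means the
 * IRQ was consumed and will be completed later by the VIRQ flow, so the
 * caller reports success without issuing an EOI; any other non-zero
 * return is a real error; zero means complete (EOI) now. */
static int process_one(int (*handler)(u32 hwirq), u32 hwirq, int *eoi_done)
{
	int rc = handler(hwirq);

	*eoi_done = 0;
	if (rc == DEMO_EALREADY)
		return DEMO_OK;		/* deferred: success, no EOI here */
	if (rc)
		return rc;		/* real error: propagate */
	*eoi_done = 1;			/* normal path: EOI now */
	return DEMO_OK;
}

static int demo_defer(u32 hwirq) { (void)hwirq; return DEMO_EALREADY; }
static int demo_done(u32 hwirq)  { (void)hwirq; return DEMO_OK; }
static int demo_fail(u32 hwirq)  { (void)hwirq; return DEMO_EIO; }
```

The key design point is that the sentinel is translated to success before the error check, so a deferred interrupt never surfaces as a failure to the caller.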
* [PATCH 09/10] lib: sbi: domain: ensure boot_hartid is assigned
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (7 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 08/10] lib: irqchip: support deferred completion and per-HWIRQ APLIC targets Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
2026-05-14 22:57 ` [PATCH 10/10] docs: domain: document sysirq VIRQ mapping and routing rules Raymond Mao
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
When boot_hartid points to a hart that is not in the domain's
assigned hartmask (e.g. due to cold boot hart differences),
the domain startup can skip starting the intended boot hart,
leading to intermittent Linux boot failures.
Scan the assigned hartmask and pick the first available hartid
to guarantee a valid boot target.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
lib/sbi/sbi_domain.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/lib/sbi/sbi_domain.c b/lib/sbi/sbi_domain.c
index c33f2b3c..ac563615 100644
--- a/lib/sbi/sbi_domain.c
+++ b/lib/sbi/sbi_domain.c
@@ -819,6 +819,30 @@ int sbi_domain_startup(struct sbi_scratch *scratch, u32 cold_hartid)
/* Startup boot HART of domains */
sbi_domain_for_each(dom) {
+ u32 boot_hartindex = sbi_hartid_to_hartindex(dom->boot_hartid);
+ bool boot_assigned = false;
+
+ if (sbi_hartindex_valid(boot_hartindex)) {
+ spin_lock(&dom->assigned_harts_lock);
+ boot_assigned = sbi_hartmask_test_hartindex(
+ boot_hartindex, &dom->assigned_harts);
+ spin_unlock(&dom->assigned_harts_lock);
+ }
+
+ if (!boot_assigned) {
+ u32 new_hartid = -1U;
+
+ spin_lock(&dom->assigned_harts_lock);
+ sbi_hartmask_for_each_hartindex(dhart, &dom->assigned_harts) {
+ new_hartid = sbi_hartindex_to_hartid(dhart);
+ break;
+ }
+ spin_unlock(&dom->assigned_harts_lock);
+
+ if (new_hartid != -1U)
+ dom->boot_hartid = new_hartid;
+ }
+
/* Domain boot HART index */
dhart = sbi_hartid_to_hartindex(dom->boot_hartid);
--
2.25.1
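[Editor's note] The boot-hart fallback in `sbi_domain_startup()` above (keep the configured boot hart if it is assigned, otherwise take the first hart in the assigned mask) can be sketched with a plain bitmap. OpenSBI uses `struct sbi_hartmask` under a spinlock; the 64-bit mask and function name here are illustrative:

```c
#include <assert.h>
#include <stdint.h>

typedef unsigned int u32;

#define INVALID_HARTID ((u32)-1)

/* Sketch of the fallback rule from patch 09: if the configured boot
 * hart is in the assigned mask, keep it; otherwise pick the
 * lowest-numbered assigned hart; if nothing is assigned, report an
 * invalid hartid so the caller can skip the domain. */
static u32 pick_boot_hart(u32 boot_hartid, uint64_t assigned_mask)
{
	u32 i;

	if (boot_hartid < 64 && (assigned_mask & (1ULL << boot_hartid)))
		return boot_hartid;	/* configured boot hart is valid */
	for (i = 0; i < 64; i++)
		if (assigned_mask & (1ULL << i))
			return i;	/* first assigned hart wins */
	return INVALID_HARTID;
}
```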
* [PATCH 10/10] docs: domain: document sysirq VIRQ mapping and routing rules
2026-05-14 22:57 [PATCH 00/10] Introduce Virtual IRQ (VIRQ) framework Raymond Mao
` (8 preceding siblings ...)
2026-05-14 22:57 ` [PATCH 09/10] lib: sbi: domain: ensure boot_hartid is assigned Raymond Mao
@ 2026-05-14 22:57 ` Raymond Mao
9 siblings, 0 replies; 11+ messages in thread
From: Raymond Mao @ 2026-05-14 22:57 UTC (permalink / raw)
To: opensbi
Cc: scott, dave.patel, raymond.mao, robin.randhawa, samuel.holland,
anup.patel, anuppate, anup, dhaval, peter.lin
From: Raymond Mao <raymond.mao@riscstar.com>
Document the DT binding semantics for opensbi,mpxy-sysirq nodes,
including how interrupts-extended entry order determines per-channel
VIRQ numbers and how opensbi,domain selects the destination OpenSBI
domain.
Add a rpmi_sysirq_intc example under /chosen/opensbi-domains.
Signed-off-by: Raymond Mao <raymond.mao@riscstar.com>
---
docs/domain_support.md | 63 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 63 insertions(+)
diff --git a/docs/domain_support.md b/docs/domain_support.md
index 93186c4a..8d474e50 100644
--- a/docs/domain_support.md
+++ b/docs/domain_support.md
@@ -198,6 +198,48 @@ The DT properties of a domain instance DT node are as follows:
* **system-suspend-allowed** (Optional) - A boolean flag representing
whether the domain instance is allowed to do system suspend.
+### Domain SysIRQ / VIRQ Routing Node
+
+The domain configuration DT node can also contain sysirq routing nodes for
+describing how a physical interrupt source should be mapped to a VIRQ and
+routed to a target domain. In local DTS overlays this node is often labeled
+`rpmi_sysirq_intc`, but OpenSBI matches it by compatible string rather than
+by node name or label.
+
+The DT properties of a domain sysirq routing DT node are as follows:
+
+* **compatible** (Mandatory) - The compatible string of the sysirq routing
+ node. This DT property should have value *"opensbi,mpxy-sysirq"*
+* **interrupt-controller** (Mandatory) - Marks the node as an interrupt
+ controller for child references.
+* **#interrupt-cells** (Mandatory) - Number of cells used by this interrupt
+ controller. Current examples use value **<1>**.
+* **interrupts-extended** (Mandatory) - The list of routed physical interrupt
+ sources. Each entry is `<&irqchip hwirq flags>`. OpenSBI interprets the
+ entry order as the VIRQ number within the selected MPXY channel, starting
+ from zero:
+ VIRQ 0 maps to entry 0, VIRQ 1 maps to entry 1, and so on.
+* **opensbi,mpxy-channel-id** (Mandatory) - The MPXY channel identifier used
+ as the VIRQ number space for this sysirq node.
+* **opensbi,domain** (Mandatory) - Phandle to the target domain instance DT
+ node. All physical interrupts listed in **interrupts-extended** are routed
+ to this domain after being mapped to VIRQs.
+
+The resulting VIRQ rule for a sysirq node is:
+
+* **mapping** - `(irqchip phandle, hwirq, entry index)` from
+ **interrupts-extended** becomes `(channel-id, virq)`
+* **routing** - `opensbi,domain` selects the destination OpenSBI domain for
+ all VIRQs created from that sysirq node
+
+In other words, a node labeled `rpmi_sysirq_intc` typically means:
+
+* the physical interrupt source is described by one **interrupts-extended**
+ entry
+* the VIRQ number is the position of that entry in the list
+* the VIRQ namespace is selected by **opensbi,mpxy-channel-id**
+* the destination domain is selected by **opensbi,domain**
+
### Assigning HART To Domain Instance
By default, all HARTs are assigned to **the ROOT domain**. The OpenSBI
@@ -270,6 +312,27 @@ be done:
possible-harts = <&cpu1 &cpu2 &cpu3 &cpu4>;
regions = <&tmem 0x0>, <&tuart 0x0>, <&allmem 0x3f>;
};
+
+ rpmi_sysirq_intc: interrupt-controller {
+ compatible = "opensbi,mpxy-sysirq";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ /*
+ * VIRQ numbers are assigned from zero in
+ * interrupts-extended order.
+ */
+ interrupts-extended =
+ <&aplic_m 10 4>, /* VIRQ 0 */
+ <&aplic_m 20 4>, /* VIRQ 1 */
+ <&aplic_m 21 4>; /* VIRQ 2 */
+
+ /* Select the VIRQ namespace / MPXY channel. */
+ opensbi,mpxy-channel-id = <4>;
+
+ /* Route all VIRQs from this node to udomain. */
+ opensbi,domain = <&udomain>;
+ };
};
};
--
2.25.1
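[Editor's note] The documented mapping rule (within one sysirq node, the VIRQ number is the position of the hwirq's entry in "interrupts-extended") can be sketched as a reverse lookup. The values mirror the DTS example in the doc (hwirqs 10, 20, 21); the function name is illustrative:

```c
#include <assert.h>

typedef unsigned int u32;

/* Sketch of the binding rule from the doc: entry index in
 * "interrupts-extended" == VIRQ number within the node's MPXY channel.
 * Returns 0 and writes the VIRQ, or -1 if the hwirq is not listed. */
static int hwirq_to_virq(const u32 *hwirqs, u32 count, u32 hwirq, u32 *virq)
{
	u32 i;

	for (i = 0; i < count; i++) {
		if (hwirqs[i] == hwirq) {
			*virq = i;	/* entry index == VIRQ number */
			return 0;
		}
	}
	return -1;
}
```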