* [RFC PATCH v3 0/6] Extend the reserved PMP entries
@ 2025-11-30 11:16 Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 1/6] include: sbi: sbi_platform: add sbi_platform_reserved_pmp_count() Yu-Chien Peter Lin
` (6 more replies)
0 siblings, 7 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
This series extends OpenSBI to support multiple reserved PMP entries
that platforms can configure for critical memory protection needs.
Key characteristics of reserved PMP entries:
- Have highest priority
- Available in ToR mode for platform-specific use cases
- Persistent across domain context switches (cannot be disabled)
- Support runtime allocation through a dedicated allocator API
Motivation:
Reserved PMP entries address the need to protect memory regions that
cannot be covered by domain-managed PMP entries. For example, platforms
can enforce PMA_UNSAFE regions [1] parsed from the device tree. These
regions often cannot be precisely covered by one or two NAPOT entries,
so using reserved entries allocated in ToR mode optimizes PMP usage.
Additionally, reserved entries remain unchanged across domain transitions
and persist until hart reset, ensuring consistent protection.
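The ToR-versus-NAPOT saving claimed above can be made concrete. Below is a
minimal, self-contained sketch (a hypothetical helper, not part of OpenSBI)
that greedily decomposes an arbitrary range into naturally aligned NAPOT
blocks; a ToR pair always needs exactly two entries regardless of alignment:

```c
#include <stdint.h>

/* Count the NAPOT entries needed to exactly cover [base, top).
 * Greedy: at each step take the largest power-of-two block that is
 * naturally aligned at 'base' and does not overshoot 'top'. */
static unsigned int napot_entries(uint64_t base, uint64_t top)
{
	unsigned int count = 0;

	while (base < top) {
		/* largest alignment available at base (base == 0: unbounded) */
		uint64_t size = base ? (base & -base) : ((uint64_t)1 << 62);

		while (size > top - base)
			size >>= 1;
		base += size;
		count++;
	}
	return count;
}
```

For example, [0x1000, 0x7000) decomposes into four NAPOT blocks
(4K + 8K + 8K + 4K) but needs only two ToR entries, which is the saving
this series exploits for irregular PMA_UNSAFE regions.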
Use case demonstration:
This series includes a demonstration on the SiFive FU540 platform, which
uses a reserved PMP entry to protect the memory region at 0x0-0x1000
during early boot. This serves as a reference implementation showing how
platforms can leverage the reserved PMP allocator.
Changes v2->v3:
- Instead of using a reserved-pmp-count DT property, this version adds
sbi_platform_reserved_pmp_count() to determine the reserved PMP count
[1] https://lore.kernel.org/all/20251113014656.2605447-20-samuel.holland@sifive.com/
Yu-Chien Peter Lin (6):
include: sbi: sbi_platform: add sbi_platform_reserved_pmp_count()
lib: sbi_init: print total and reserved PMP counts
lib: sbi: riscv_asm: support reserved PMP allocator
lib: sbi: sbi_hart: extend PMP handling to support multiple reserved
entries
lib: sbi: sbi_init: call sbi_hart_init() earlier
[TEMP] demonstrate hole protection using reserved PMP
include/sbi/riscv_asm.h | 6 +++
include/sbi/sbi_hart.h | 15 ------
include/sbi/sbi_platform.h | 35 +++++++++++++
lib/sbi/riscv_asm.c | 92 +++++++++++++++++++++++++++++++++
lib/sbi/sbi_domain_context.c | 6 ++-
lib/sbi/sbi_hart.c | 57 ++++++++++++++------
lib/sbi/sbi_init.c | 15 +++---
platform/generic/sifive/fu540.c | 56 ++++++++++++++++++++
8 files changed, 243 insertions(+), 39 deletions(-)
--
2.39.3
--
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi
* [RFC PATCH v3 1/6] include: sbi: sbi_platform: add sbi_platform_reserved_pmp_count()
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
@ 2025-11-30 11:16 ` Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 2/6] lib: sbi_init: print total and reserved PMP counts Yu-Chien Peter Lin
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
Add sbi_platform_reserved_pmp_count() to calculate the total
number of reserved PMP entries for a platform.
Also add a get_reserved_pmp_count() callback that lets platforms
specify their additional reserved PMP requirements.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
---
include/sbi/sbi_platform.h | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/include/sbi/sbi_platform.h b/include/sbi/sbi_platform.h
index d75c12de..0deeca9f 100644
--- a/include/sbi/sbi_platform.h
+++ b/include/sbi/sbi_platform.h
@@ -48,6 +48,7 @@
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_error.h>
+#include <sbi/sbi_hart.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_version.h>
#include <sbi/sbi_trap_ldst.h>
@@ -146,6 +147,8 @@ struct sbi_platform_operations {
unsigned long log2len);
/** platform specific pmp disable on current HART */
void (*pmp_disable)(unsigned int n);
+ /** Get number of additional reserved PMP entries. */
+ u32 (*get_reserved_pmp_count)(void);
};
/** Platform default per-HART stack size for exception/interrupt handling */
@@ -302,6 +305,38 @@ static inline u32 sbi_platform_tlb_fifo_num_entries(const struct sbi_platform *p
return sbi_hart_count();
}
+/**
+ * Get total number of reserved PMP entries for the platform.
+ *
+ * This includes:
+ * - One default PMP entry for sbi_hart_map_saddr() when smepmp is enabled
+ * - Additional platform-specific reserved PMP entries from get_reserved_pmp_count()
+ *
+ * @param plat pointer to struct sbi_platform
+ *
+ * @return total reserved PMP entry count for the platform
+ */
+static inline u32 sbi_platform_reserved_pmp_count(const struct sbi_platform *plat)
+{
+ struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
+ u32 pmp_count = 0;
+
+ if (!plat)
+ return 0;
+
+ /*
+ * If smepmp is enabled, reserve at least one PMP entry
+ * for sbi_hart_map_saddr().
+ */
+ if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
+ pmp_count += 1;
+
+ if (sbi_platform_ops(plat)->get_reserved_pmp_count)
+ pmp_count += sbi_platform_ops(plat)->get_reserved_pmp_count();
+
+ return pmp_count;
+}
+
/**
* Get total number of HARTs supported by the platform
*
--
2.39.3
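As a rough model of the counting logic this patch introduces (hypothetical
standalone code; the real helper reads the Smepmp flag from per-hart feature
detection and the callback from sbi_platform_operations):

```c
#include <stdbool.h>

/* Model of sbi_platform_reserved_pmp_count(): one entry is implicitly
 * reserved for sbi_hart_map_saddr() when Smepmp is present, plus whatever
 * the platform callback asks for. */
typedef unsigned int (*get_reserved_pmp_count_fn)(void);

static unsigned int reserved_pmp_count_model(bool has_smepmp,
					     get_reserved_pmp_count_fn cb)
{
	unsigned int count = 0;

	if (has_smepmp)
		count += 1;	/* saddr mapping entry */
	if (cb)
		count += cb();	/* platform-specific extras */
	return count;
}

/* example platform callback asking for two extra entries */
static unsigned int two_extra_entries(void) { return 2; }
```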
* [RFC PATCH v3 2/6] lib: sbi_init: print total and reserved PMP counts
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 1/6] include: sbi: sbi_platform: add sbi_platform_reserved_pmp_count() Yu-Chien Peter Lin
@ 2025-11-30 11:16 ` Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 3/6] lib: sbi: riscv_asm: support reserved PMP allocator Yu-Chien Peter Lin
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
Show both the total and reserved PMP counts in the boot log.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
---
lib/sbi/sbi_init.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index 663b486b..bb449d49 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -163,6 +163,7 @@ static void sbi_boot_print_hart(struct sbi_scratch *scratch, u32 hartid)
int xlen;
char str[256];
const struct sbi_domain *dom = sbi_domain_thishart_ptr();
+ const struct sbi_platform *plat = sbi_platform_ptr(scratch);
if (scratch->options & SBI_SCRATCH_NO_BOOT_PRINTS)
return;
@@ -183,8 +184,10 @@ static void sbi_boot_print_hart(struct sbi_scratch *scratch, u32 hartid)
sbi_printf("Boot HART Base ISA : %s\n", str);
sbi_hart_get_extensions_str(scratch, str, sizeof(str));
sbi_printf("Boot HART ISA Extensions : %s\n", str);
- sbi_printf("Boot HART PMP Count : %d\n",
- sbi_hart_pmp_count(scratch));
+ sbi_printf("Boot HART PMP Count : "
+ "%d (total), %d (reserved)\n",
+ sbi_hart_pmp_count(scratch),
+ sbi_platform_reserved_pmp_count(plat));
sbi_printf("Boot HART PMP Granularity : %u bits\n",
sbi_hart_pmp_log2gran(scratch));
sbi_printf("Boot HART PMP Address Bits : %d\n",
--
2.39.3
* [RFC PATCH v3 3/6] lib: sbi: riscv_asm: support reserved PMP allocator
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 1/6] include: sbi: sbi_platform: add sbi_platform_reserved_pmp_count() Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 2/6] lib: sbi_init: print total and reserved PMP counts Yu-Chien Peter Lin
@ 2025-11-30 11:16 ` Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 4/6] lib: sbi: sbi_hart: extend PMP handling to support multiple reserved entries Yu-Chien Peter Lin
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
Add reserved PMP entry allocation and management functions to enable
dynamic allocation of high-priority PMP entries. The allocator uses
per-hart bitmaps stored in scratch space to track reserved PMP usage.
New functions:
- reserved_pmp_init(): Initialize allocator scratch space
- reserved_pmp_alloc(): Allocate unused reserved PMP entry
- reserved_pmp_free(): Release allocated PMP entry
The coldboot hart calls reserved_pmp_init() during sbi_hart_init()
to set up the tracking bitmaps for all harts.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
---
include/sbi/riscv_asm.h | 6 +++
lib/sbi/riscv_asm.c | 92 +++++++++++++++++++++++++++++++++++++++++
lib/sbi/sbi_hart.c | 4 ++
3 files changed, 102 insertions(+)
diff --git a/include/sbi/riscv_asm.h b/include/sbi/riscv_asm.h
index ef48dc89..4fd0be2b 100644
--- a/include/sbi/riscv_asm.h
+++ b/include/sbi/riscv_asm.h
@@ -221,6 +221,12 @@ int pmp_set(unsigned int n, unsigned long prot, unsigned long addr,
int pmp_get(unsigned int n, unsigned long *prot_out, unsigned long *addr_out,
unsigned long *log2len);
+int reserved_pmp_init(void);
+
+int reserved_pmp_alloc(unsigned int *pmp_id);
+
+int reserved_pmp_free(unsigned int pmp_id);
+
#endif /* !__ASSEMBLER__ */
#endif
diff --git a/lib/sbi/riscv_asm.c b/lib/sbi/riscv_asm.c
index 3e44320f..6c81708f 100644
--- a/lib/sbi/riscv_asm.c
+++ b/lib/sbi/riscv_asm.c
@@ -9,10 +9,14 @@
#include <sbi/riscv_asm.h>
#include <sbi/riscv_encoding.h>
+#include <sbi/sbi_bitmap.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_platform.h>
+#include <sbi/sbi_scratch.h>
#include <sbi/sbi_console.h>
+static unsigned long reserved_pmp_used_offset;
+
/* determine CPU extension, return non-zero support */
int misa_extension_imp(char ext)
{
@@ -432,3 +436,91 @@ int pmp_get(unsigned int n, unsigned long *prot_out, unsigned long *addr_out,
return 0;
}
+
+/**
+ * reserved_pmp_init() - Initialize the reserved PMP allocator
+ *
+ * This function initializes the reserved PMP allocator by allocating
+ * scratch space to track which reserved PMP entries are in use.
+ *
+ * Returns: 0 on success, negative error code on failure
+ */
+int reserved_pmp_init(void)
+{
+ if (reserved_pmp_used_offset)
+ return SBI_EINVAL;
+
+ reserved_pmp_used_offset = sbi_scratch_alloc_offset(
+ sizeof(unsigned long) * BITS_TO_LONGS(PMP_COUNT));
+ if (!reserved_pmp_used_offset)
+ return SBI_ENOMEM;
+
+ return SBI_SUCCESS;
+}
+
+/**
+ * reserved_pmp_alloc() - Allocate an unused reserved PMP entry
+ * @pmp_id: Pointer to store the allocated PMP entry ID
+ *
+ * Returns: 0 on success, negative error code on failure
+ *
+ * The allocated PMP entry should be used with the following
+ * programming sequence:
+ * - reserved_pmp_alloc(&pmp_id)
+ * - pmp_set(pmp_id, ...)
+ * - pmp_disable(pmp_id)
+ * - reserved_pmp_free(pmp_id)
+ */
+int reserved_pmp_alloc(unsigned int *pmp_id)
+{
+ const struct sbi_platform *plat = sbi_platform_thishart_ptr();
+ u32 reserved_pmp_count = sbi_platform_reserved_pmp_count(plat);
+ struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
+ unsigned long *reserved_pmp_used;
+
+ if (!reserved_pmp_used_offset)
+ return SBI_EINVAL;
+
+ reserved_pmp_used = sbi_scratch_offset_ptr(scratch,
+ reserved_pmp_used_offset);
+
+ for (int n = 0; n < reserved_pmp_count; n++) {
+ if (bitmap_test(reserved_pmp_used, n))
+ continue;
+ bitmap_set(reserved_pmp_used, n, 1);
+ *pmp_id = n;
+ return SBI_SUCCESS;
+ }
+
+ /* PMP allocation failed - all reserved entries in use */
+ return SBI_EFAIL;
+}
+
+/**
+ * reserved_pmp_free() - Free a reserved PMP entry
+ * @pmp_id: PMP entry ID to free
+ *
+ * Returns: 0 on success, negative error code on failure
+ */
+int reserved_pmp_free(unsigned int pmp_id)
+{
+ const struct sbi_platform *plat = sbi_platform_thishart_ptr();
+ u32 reserved_pmp_count = sbi_platform_reserved_pmp_count(plat);
+ struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
+ unsigned long *reserved_pmp_used;
+
+ if (!reserved_pmp_used_offset)
+ return SBI_EINVAL;
+
+ reserved_pmp_used = sbi_scratch_offset_ptr(scratch,
+ reserved_pmp_used_offset);
+
+ if (pmp_id >= reserved_pmp_count ||
+ !bitmap_test(reserved_pmp_used, pmp_id)) {
+ return SBI_EINVAL;
+ }
+
+ bitmap_clear(reserved_pmp_used, pmp_id, 1);
+
+ return SBI_SUCCESS;
+}
diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index a91703b4..548fdecd 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -1031,6 +1031,10 @@ int sbi_hart_init(struct sbi_scratch *scratch, bool cold_boot)
sizeof(struct sbi_hart_features));
if (!hart_features_offset)
return SBI_ENOMEM;
+
+ rc = reserved_pmp_init();
+ if (rc)
+ return rc;
}
rc = hart_detect_features(scratch);
--
2.39.3
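The allocator behavior described above is a first-fit scan over a per-hart
bitmap. A simplified single-hart sketch (hypothetical; the real code stores
the bitmap in scratch space and bounds it by sbi_platform_reserved_pmp_count()):

```c
#include <stdint.h>

#define RESERVED_PMP_MAX 16

static uint32_t reserved_pmp_used;	/* bit n set => entry n allocated */

/* First-fit allocation over [0, count); returns 0 or -1, mirroring the
 * patch's SBI_SUCCESS / SBI_EFAIL. */
static int resv_alloc(unsigned int count, unsigned int *pmp_id)
{
	for (unsigned int n = 0; n < count && n < RESERVED_PMP_MAX; n++) {
		if (reserved_pmp_used & (1u << n))
			continue;
		reserved_pmp_used |= 1u << n;
		*pmp_id = n;
		return 0;
	}
	return -1;	/* all reserved entries in use */
}

static int resv_free(unsigned int count, unsigned int pmp_id)
{
	if (pmp_id >= count || !(reserved_pmp_used & (1u << pmp_id)))
		return -1;	/* not allocated: reject double free */
	reserved_pmp_used &= ~(1u << pmp_id);
	return 0;
}
```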
* [RFC PATCH v3 4/6] lib: sbi: sbi_hart: extend PMP handling to support multiple reserved entries
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
` (2 preceding siblings ...)
2025-11-30 11:16 ` [RFC PATCH v3 3/6] lib: sbi: riscv_asm: support reserved PMP allocator Yu-Chien Peter Lin
@ 2025-11-30 11:16 ` Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 5/6] lib: sbi: sbi_init: call sbi_hart_init() earlier Yu-Chien Peter Lin
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
Previously, OpenSBI supported only a single reserved PMP entry. Add
support for multiple reserved PMP entries, with the count determined
by the platform-specific sbi_platform_reserved_pmp_count() function.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
---
include/sbi/sbi_hart.h | 15 ----------
lib/sbi/sbi_domain_context.c | 6 ++--
lib/sbi/sbi_hart.c | 53 +++++++++++++++++++++++++-----------
3 files changed, 41 insertions(+), 33 deletions(-)
diff --git a/include/sbi/sbi_hart.h b/include/sbi/sbi_hart.h
index e66dd52f..6d5d0be7 100644
--- a/include/sbi/sbi_hart.h
+++ b/include/sbi/sbi_hart.h
@@ -105,21 +105,6 @@ enum sbi_hart_csrs {
SBI_HART_CSR_MAX,
};
-/*
- * Smepmp enforces access boundaries between M-mode and
- * S/U-mode. When it is enabled, the PMPs are programmed
- * such that M-mode doesn't have access to S/U-mode memory.
- *
- * To give M-mode R/W access to the shared memory between M and
- * S/U-mode, first entry is reserved. It is disabled at boot.
- * When shared memory access is required, the physical address
- * should be programmed into the first PMP entry with R/W
- * permissions to the M-mode. Once the work is done, it should be
- * unmapped. sbi_hart_map_saddr/sbi_hart_unmap_saddr function
- * pair should be used to map/unmap the shared memory.
- */
-#define SBI_SMEPMP_RESV_ENTRY 0
-
struct sbi_hart_features {
bool detected;
int priv_version;
diff --git a/lib/sbi/sbi_domain_context.c b/lib/sbi/sbi_domain_context.c
index 74ad25e8..d2269529 100644
--- a/lib/sbi/sbi_domain_context.c
+++ b/lib/sbi/sbi_domain_context.c
@@ -11,6 +11,7 @@
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_heap.h>
+#include <sbi/sbi_platform.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
#include <sbi/sbi_domain.h>
@@ -102,6 +103,8 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
struct sbi_trap_context *trap_ctx;
struct sbi_domain *current_dom, *target_dom;
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
+ const struct sbi_platform *plat = sbi_platform_thishart_ptr();
+ u32 reserved_pmp_count = sbi_platform_reserved_pmp_count(plat);
unsigned int pmp_count = sbi_hart_pmp_count(scratch);
if (!ctx || !dom_ctx || ctx == dom_ctx)
@@ -121,11 +124,10 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
spin_unlock(&target_dom->assigned_harts_lock);
/* Reconfigure PMP settings for the new domain */
- for (int i = 0; i < pmp_count; i++) {
+ for (int i = reserved_pmp_count; i < pmp_count; i++) {
/* Don't revoke firmware access permissions */
if (sbi_hart_smepmp_is_fw_region(i))
continue;
-
sbi_platform_pmp_disable(sbi_platform_thishart_ptr(), i);
pmp_disable(i);
}
diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index 548fdecd..a7235758 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -32,6 +32,7 @@ void (*sbi_hart_expected_trap)(void) = &__sbi_expected_trap;
static unsigned long hart_features_offset;
static DECLARE_BITMAP(fw_smepmp_ids, PMP_COUNT);
static bool fw_smepmp_ids_inited;
+static unsigned int saddr_pmp_id;
static void mstatus_init(struct sbi_scratch *scratch)
{
@@ -349,6 +350,8 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
unsigned long pmp_addr_max)
{
struct sbi_domain_memregion *reg;
+ const struct sbi_platform *plat = sbi_platform_thishart_ptr();
+ u32 reserved_pmp_count = sbi_platform_reserved_pmp_count(plat);
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_idx, pmp_flags;
@@ -358,15 +361,13 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
*/
csr_set(CSR_MSECCFG, MSECCFG_RLB);
- /* Disable the reserved entry */
- pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+ /* Disable the reserved entries */
+ for (int i = 0; i < reserved_pmp_count; i++)
+ pmp_disable(i);
/* Program M-only regions when MML is not set. */
- pmp_idx = 0;
+ pmp_idx = reserved_pmp_count;
sbi_domain_for_each_memregion(dom, reg) {
- /* Skip reserved entry */
- if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
- pmp_idx++;
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
@@ -405,11 +406,8 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
csr_set(CSR_MSECCFG, MSECCFG_MML);
/* Program shared and SU-only regions */
- pmp_idx = 0;
+ pmp_idx = reserved_pmp_count;
sbi_domain_for_each_memregion(dom, reg) {
- /* Skip reserved entry */
- if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
- pmp_idx++;
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
@@ -439,11 +437,14 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
unsigned long pmp_addr_max)
{
struct sbi_domain_memregion *reg;
+ const struct sbi_platform *plat = sbi_platform_thishart_ptr();
+ u32 reserved_pmp_count = sbi_platform_reserved_pmp_count(plat);
struct sbi_domain *dom = sbi_domain_thishart_ptr();
- unsigned int pmp_idx = 0;
+ unsigned int pmp_idx;
unsigned int pmp_flags;
unsigned long pmp_addr;
+ pmp_idx = reserved_pmp_count;
sbi_domain_for_each_memregion(dom, reg) {
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
@@ -481,6 +482,19 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
return 0;
}
+/*
+ * Smepmp enforces access boundaries between M-mode and
+ * S/U-mode. When it is enabled, the PMPs are programmed
+ * such that M-mode doesn't have access to S/U-mode memory.
+ *
+ * To give M-mode R/W access to the shared memory between M and
+ * S/U-mode, a high-priority entry is reserved. It is disabled at boot.
+ * When shared memory access is required, the physical address
+ * should be programmed into the reserved PMP entry with R/W
+ * permissions to the M-mode. Once the work is done, it should be
+ * unmapped. sbi_hart_map_saddr/sbi_hart_unmap_saddr function
+ * pair should be used to map/unmap the shared memory.
+ */
int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
{
/* shared R/W access for M and S/U mode */
@@ -492,8 +506,9 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
- if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
+ if (reserved_pmp_alloc(&saddr_pmp_id)) {
return SBI_ENOSPC;
+ }
for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
order <= __riscv_xlen; order++) {
@@ -509,23 +524,29 @@ int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
}
}
- sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
+ sbi_platform_pmp_set(sbi_platform_ptr(scratch), saddr_pmp_id,
SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
pmp_flags, base, order);
- pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
+ pmp_set(saddr_pmp_id, pmp_flags, base, order);
return SBI_OK;
}
int sbi_hart_unmap_saddr(void)
{
+ int rc;
+
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
- sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
- return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+ sbi_platform_pmp_disable(sbi_platform_ptr(scratch), saddr_pmp_id);
+ rc = pmp_disable(saddr_pmp_id);
+ if (rc)
+ return rc;
+
+ return reserved_pmp_free(saddr_pmp_id);
}
int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
--
2.39.3
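The order-search loop that sbi_hart_map_saddr() keeps from the original code
picks the smallest NAPOT block that satisfies the PMP granularity and still
covers the shared buffer. A standalone sketch of that selection (hypothetical
helper, approximating the search in the patch context above):

```c
#include <stdint.h>

/* Smallest 'order' such that the naturally aligned 2^order block
 * containing 'addr' also contains [addr, addr + size); 'log2gran' is
 * the PMP granularity. Writes the block base to *base_out. */
static int saddr_napot_order(uint64_t addr, uint64_t size, int log2gran,
			     uint64_t *base_out)
{
	int order = log2gran;

	/* round size up to a power of two: order >= ceil(log2(size)) */
	while (((uint64_t)1 << order) < size)
		order++;

	for (; order < 64; order++) {
		uint64_t base = addr & ~(((uint64_t)1 << order) - 1);

		if (addr + size <= base + ((uint64_t)1 << order)) {
			*base_out = base;
			return order;
		}
	}
	return -1;	/* range not coverable by a single NAPOT block */
}
```

Note how a buffer that straddles an alignment boundary forces a larger
order: covering [0x80000FF0, 0x80001010) with 4K granularity needs an 8K
block based at 0x80000000.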
* [RFC PATCH v3 5/6] lib: sbi: sbi_init: call sbi_hart_init() earlier
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
` (3 preceding siblings ...)
2025-11-30 11:16 ` [RFC PATCH v3 4/6] lib: sbi: sbi_hart: extend PMP handling to support multiple reserved entries Yu-Chien Peter Lin
@ 2025-11-30 11:16 ` Yu-Chien Peter Lin
2025-11-30 11:16 ` [RFC PATCH v3 6/6] [TEMP] demonstrate hole protection using reserved PMP Yu-Chien Peter Lin
2026-02-11 15:29 ` [RFC PATCH v3 0/6] Extend the reserved PMP entries Anup Patel
6 siblings, 0 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
Move sbi_hart_init() earlier in the initialization
sequence so that it initializes the reserved PMP
allocator before platform-specific early initialization.
This allows platforms to call reserved_pmp_alloc() in
their early_init hooks.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
---
lib/sbi/sbi_init.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index bb449d49..88c2720e 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -262,11 +262,11 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
*/
wake_coldboot_harts(scratch);
- rc = sbi_platform_early_init(plat, true);
+ rc = sbi_hart_init(scratch, true);
if (rc)
sbi_hart_hang();
- rc = sbi_hart_init(scratch, true);
+ rc = sbi_platform_early_init(plat, true);
if (rc)
sbi_hart_hang();
@@ -421,11 +421,11 @@ static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
if (rc)
sbi_hart_hang();
- rc = sbi_platform_early_init(plat, false);
+ rc = sbi_hart_init(scratch, false);
if (rc)
sbi_hart_hang();
- rc = sbi_hart_init(scratch, false);
+ rc = sbi_platform_early_init(plat, false);
if (rc)
sbi_hart_hang();
--
2.39.3
* [RFC PATCH v3 6/6] [TEMP] demonstrate hole protection using reserved PMP
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
` (4 preceding siblings ...)
2025-11-30 11:16 ` [RFC PATCH v3 5/6] lib: sbi: sbi_init: call sbi_hart_init() earlier Yu-Chien Peter Lin
@ 2025-11-30 11:16 ` Yu-Chien Peter Lin
2026-02-11 15:29 ` [RFC PATCH v3 0/6] Extend the reserved PMP entries Anup Patel
6 siblings, 0 replies; 8+ messages in thread
From: Yu-Chien Peter Lin @ 2025-11-30 11:16 UTC (permalink / raw)
To: opensbi; +Cc: zong.li, greentime.hu, samuel.holland, Yu-Chien Peter Lin
This implementation shows how platforms can use the reserved PMP
allocator to protect critical memory regions during early boot.
Benefits of using reserved PMPs:
1) Reserved PMPs are not managed by domains - platforms have full control
over them. Since reserved entries won't be freed, they can safely set
lock bits (pmpcfg.L), unlike domain entries which must allow being
temporarily revoked during context switches.
2) Two consecutive entries can be allocated to create a ToR-mode region,
reducing PMP entry usage.
3) Reserved PMPs have higher priority, so their permissions are less
likely to be overridden by other entries.
Note: This is a demonstration patch and should not be merged.
Signed-off-by: Yu-Chien Peter Lin <peter.lin@sifive.com>
---
platform/generic/sifive/fu540.c | 56 +++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/platform/generic/sifive/fu540.c b/platform/generic/sifive/fu540.c
index 83e57145..3f0fd032 100644
--- a/platform/generic/sifive/fu540.c
+++ b/platform/generic/sifive/fu540.c
@@ -8,6 +8,10 @@
*/
#include <platform_override.h>
+#include <sbi/riscv_asm.h>
+#include <sbi/riscv_encoding.h>
+#include <sbi/sbi_error.h>
+#include <sbi/sbi_types.h>
#include <sbi_utils/fdt/fdt_helper.h>
#include <sbi_utils/fdt/fdt_fixup.h>
@@ -20,9 +24,61 @@ static u64 sifive_fu540_tlbr_flush_limit(void)
return 0;
}
+static u32 sifive_fu540_get_reserved_pmp_count(void)
+{
+ /*
+ * Reserve an entry for demonstrating hole protection
+ * on SiFive FU540.
+ */
+ return 1;
+}
+
+/*
+ * This is a demonstration of PMP-based memory protection rather
+ * than protecting an actual memory hole.
+ */
+static int sifive_fu540_hole_protection(void)
+{
+ int rc;
+ unsigned int pmp_id;
+
+ rc = reserved_pmp_alloc(&pmp_id);
+ if (rc)
+ return rc;
+
+ /*
+ * Protect the memory hole at 0x0 - 0x1000 by setting
+ * it as inaccessible (no R/W/X) with the lock bit set.
+ * This prevents any access to this region in all modes.
+ */
+ rc = pmp_set(pmp_id, PMP_L, 0x0, 12);
+ if (rc) {
+ reserved_pmp_free(pmp_id);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int sifive_fu540_early_init(bool cold_boot)
+{
+ int rc;
+
+ /* Set up memory hole protection */
+ rc = sifive_fu540_hole_protection();
+ if (rc)
+ return rc;
+
+ rc = generic_early_init(cold_boot);
+ if (rc)
+ return rc;
+
+ return 0;
+}
+
static int sifive_fu540_platform_init(const void *fdt, int nodeoff, const struct fdt_match *match)
{
generic_platform_ops.get_tlbr_flush_limit = sifive_fu540_tlbr_flush_limit;
+ generic_platform_ops.get_reserved_pmp_count = sifive_fu540_get_reserved_pmp_count;
+ generic_platform_ops.early_init = sifive_fu540_early_init;
return 0;
}
--
2.39.3
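Patch 6's pmp_set(pmp_id, PMP_L, 0x0, 12) installs a locked entry with
R/W/X all clear; because pmpcfg.L makes the entry apply to M-mode as well,
this denies the hole to every privilege level. A tiny model of that
permission check (simplified pre-Smepmp semantics; ignores MML):

```c
#include <stdbool.h>
#include <stdint.h>

/* pmpcfg permission bits (standard RISC-V encoding) */
#define PMP_R 0x01
#define PMP_W 0x02
#define PMP_X 0x04
#define PMP_L 0x80

/* Simplified check for one matching PMP entry:
 * S/U-mode is always constrained by the entry's R/W/X bits;
 * M-mode is constrained only when the entry is locked (L bit). */
static bool pmp_allows(uint8_t pmpcfg, bool m_mode, uint8_t want)
{
	if (m_mode && !(pmpcfg & PMP_L))
		return true;	/* unlocked entries don't bind M-mode */
	return (pmpcfg & want) == want;
}
```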
* Re: [RFC PATCH v3 0/6] Extend the reserved PMP entries
2025-11-30 11:16 [RFC PATCH v3 0/6] Extend the reserved PMP entries Yu-Chien Peter Lin
` (5 preceding siblings ...)
2025-11-30 11:16 ` [RFC PATCH v3 6/6] [TEMP] demonstrate hole protection using reserved PMP Yu-Chien Peter Lin
@ 2026-02-11 15:29 ` Anup Patel
6 siblings, 0 replies; 8+ messages in thread
From: Anup Patel @ 2026-02-11 15:29 UTC (permalink / raw)
To: Yu-Chien Peter Lin; +Cc: opensbi, zong.li, greentime.hu, samuel.holland
On Sun, Nov 30, 2025 at 4:46 PM Yu-Chien Peter Lin <peter.lin@sifive.com> wrote:
>
> This series extends OpenSBI to support multiple reserved PMP entries
> that platforms can configure for critical memory protection needs.
>
> Key characteristics of reserved PMP entries:
>
> - Have highest priority
> - Available in ToR mode for platform-specific use cases
> - Persistent across domain context switches (cannot be disabled)
> - Support runtime allocation through a dedicated allocator API
>
> Motivation:
>
> Reserved PMP entries address the need to protect memory regions that
> cannot be covered by domain-managed PMP entries. For example, platforms
> can enforce PMA_UNSAFE regions [1] parsed from the device tree. These
> regions often cannot be precisely covered by one or two NAPOT entries,
> so using reserved entries allocated in ToR mode optimizes PMP usage.
ToR has its own downside: the next PMP entry marks the end of the
ToR PMP region, so in many cases we might end up using more PMP
entries with ToR as compared to NAPOT.
For the optimal number of PMP entries, there is no clear winner in the
NAPOT vs ToR debate.
>
> Additionally, reserved entries remain unchanged across domain transitions
> and persist until hart reset, ensure consistent protections.
>
> Use case demonstration:
>
> This series includes a demonstration on the SiFive FU540 platform, which
> uses a reserved PMP entry to protect the memory region at 0x0-0x1000
> during early boot. This serves as a reference implementation showing how
> platforms can leverage the reserved PMP allocator.
Instead of the infrastructure added by this series, a platform can
simply add root memregions in sbi_platform_early_init() using
sbi_domain_root_add_memrange().
If the above is still not sufficient then the platform can have a separate
hook for reserved PMP entries as below (although I don't recommend it).
diff --git a/include/sbi/sbi_platform.h b/include/sbi/sbi_platform.h
index e65d9877..4ecc4582 100644
--- a/include/sbi/sbi_platform.h
+++ b/include/sbi/sbi_platform.h
@@ -149,6 +149,8 @@ struct sbi_platform_operations {
unsigned long log2len);
/** platform specific pmp disable on current HART */
void (*pmp_disable)(unsigned int n);
+ /** platform specific way to update a PMP entry as reserved on current HART */
+ bool (*pmp_update_reserved)(unsigned int n, bool skip_write);
};
/** Platform default per-HART stack size for exception/interrupt handling */
@@ -687,6 +689,23 @@ static inline void sbi_platform_pmp_disable(const struct sbi_platform *plat,
sbi_platform_ops(plat)->pmp_disable(n);
}
+/**
+ * Platform specific way to update a PMP entry as reserved on current HART
+ *
+ * @param plat pointer to struct sbi_platform
+ * @param n index of the pmp entry
+ * @param skip_write flag indicating pmp entry must not be written
+ *
+ * @return true if a pmp entry is reserved and false otherwise
+ */
+static inline bool sbi_platform_pmp_update_reserved(const struct sbi_platform *plat,
+ unsigned int n, bool skip_write)
+{
+ if (plat && sbi_platform_ops(plat)->pmp_update_reserved)
+ return sbi_platform_ops(plat)->pmp_update_reserved(n, skip_write);
+ return false;
+}
+
#endif
#endif
diff --git a/lib/sbi/sbi_hart_pmp.c b/lib/sbi/sbi_hart_pmp.c
index be459129..27242113 100644
--- a/lib/sbi/sbi_hart_pmp.c
+++ b/lib/sbi/sbi_hart_pmp.c
@@ -120,6 +120,7 @@ static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
{
+ const struct sbi_platform *plat = sbi_platform_ptr(scratch);
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned int pmp_log2gran, pmp_bits;
@@ -147,6 +148,9 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
+ while (sbi_platform_pmp_update_reserved(plat, pmp_idx, false))
+ pmp_idx++;
+
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
@@ -190,6 +194,9 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
/* Skip reserved entry */
if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
pmp_idx++;
+ while (sbi_platform_pmp_update_reserved(plat, pmp_idx, true))
+ pmp_idx++;
+
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
@@ -255,6 +262,7 @@ static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
{
+ const struct sbi_platform *plat = sbi_platform_ptr(scratch);
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
unsigned long pmp_addr, pmp_addr_max;
@@ -269,6 +277,9 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
+ while (sbi_platform_pmp_update_reserved(plat, pmp_idx, false))
+ pmp_idx++;
+
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
Regards,
Anup
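The skip logic in the suggested pmp_update_reserved() hook reduces to
advancing the index past reserved slots before programming each memregion.
A standalone model (hypothetical names; demo_reserved stands in for a
platform implementation):

```c
#include <stdbool.h>

/* Model of the while-loop in the suggestion above: advance pmp_idx past
 * every index the platform hook claims as reserved. */
typedef bool (*pmp_update_reserved_fn)(unsigned int n, bool skip_write);

static unsigned int next_free_pmp_idx(unsigned int pmp_idx,
				      pmp_update_reserved_fn hook)
{
	while (hook && hook(pmp_idx, false))
		pmp_idx++;
	return pmp_idx;
}

/* example platform: indices 0 and 1 are reserved */
static bool demo_reserved(unsigned int n, bool skip_write)
{
	(void)skip_write;
	return n < 2;
}
```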