* [PATCH 0/5] OpenSBI hart protection abstraction
@ 2025-11-26 14:18 Anup Patel
2025-11-26 14:18 ` [PATCH 1/5] lib: sbi: Introduce sbi_hart_pmp_unconfigure() function Anup Patel
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Anup Patel @ 2025-11-26 14:18 UTC (permalink / raw)
To: Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi, Anup Patel
Currently, PMP and ePMP are the only hart protection mechanisms
available in OpenSBI, but new protection mechanisms (such as Smmpt)
will be added in the near future. This series adds a hart protection
abstraction and related APIs to allow multiple hart protection
mechanisms to coexist.
These patches can also be found in the hart_protection_v1 branch at:
https://github.com/avpatel/opensbi.git
Anup Patel (5):
lib: sbi: Introduce sbi_hart_pmp_unconfigure() function
lib: sbi: Introduce hart protection abstraction
lib: sbi: Implement hart protection for PMP and ePMP
lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr()
lib: sbi: Factor-out PMP programming into separate sources
include/sbi/sbi_hart.h | 25 +--
include/sbi/sbi_hart_pmp.h | 20 ++
include/sbi/sbi_hart_protection.h | 88 ++++++++
lib/sbi/objects.mk | 2 +
lib/sbi/sbi_dbtr.c | 33 +--
lib/sbi/sbi_domain_context.c | 13 +-
lib/sbi/sbi_ecall_dbcn.c | 6 +-
lib/sbi/sbi_hart.c | 308 +-------------------------
lib/sbi/sbi_hart_pmp.c | 356 ++++++++++++++++++++++++++++++
lib/sbi/sbi_hart_protection.c | 110 +++++++++
lib/sbi/sbi_init.c | 22 +-
lib/sbi/sbi_mpxy.c | 25 ++-
lib/sbi/sbi_pmu.c | 5 +-
lib/sbi/sbi_sse.c | 9 +-
14 files changed, 648 insertions(+), 374 deletions(-)
create mode 100644 include/sbi/sbi_hart_pmp.h
create mode 100644 include/sbi/sbi_hart_protection.h
create mode 100644 lib/sbi/sbi_hart_pmp.c
create mode 100644 lib/sbi/sbi_hart_protection.c
--
2.43.0
--
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi
* [PATCH 1/5] lib: sbi: Introduce sbi_hart_pmp_unconfigure() function
2025-11-26 14:18 [PATCH 0/5] OpenSBI hart protection abstraction Anup Patel
@ 2025-11-26 14:18 ` Anup Patel
2025-11-26 14:18 ` [PATCH 2/5] lib: sbi: Introduce hart protection abstraction Anup Patel
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Anup Patel @ 2025-11-26 14:18 UTC (permalink / raw)
To: Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi, Anup Patel
Currently, unconfiguring the PMP is implemented directly inside
switch_to_next_domain_context(), whereas the rest of the PMP
programming is done via functions implemented in sbi_hart.c.
Introduce a separate sbi_hart_pmp_unconfigure() function so that
all PMP programming is in one place.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
include/sbi/sbi_hart.h | 1 +
lib/sbi/sbi_domain_context.c | 10 +---------
lib/sbi/sbi_hart.c | 14 ++++++++++++++
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/include/sbi/sbi_hart.h b/include/sbi/sbi_hart.h
index e66dd52f..93682880 100644
--- a/include/sbi/sbi_hart.h
+++ b/include/sbi/sbi_hart.h
@@ -148,6 +148,7 @@ unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch);
bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx);
int sbi_hart_pmp_configure(struct sbi_scratch *scratch);
+void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch);
int sbi_hart_map_saddr(unsigned long base, unsigned long size);
int sbi_hart_unmap_saddr(void);
int sbi_hart_priv_version(struct sbi_scratch *scratch);
diff --git a/lib/sbi/sbi_domain_context.c b/lib/sbi/sbi_domain_context.c
index 74ad25e8..8cf47323 100644
--- a/lib/sbi/sbi_domain_context.c
+++ b/lib/sbi/sbi_domain_context.c
@@ -102,7 +102,6 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
struct sbi_trap_context *trap_ctx;
struct sbi_domain *current_dom, *target_dom;
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
- unsigned int pmp_count = sbi_hart_pmp_count(scratch);
if (!ctx || !dom_ctx || ctx == dom_ctx)
return SBI_EINVAL;
@@ -121,14 +120,7 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
spin_unlock(&target_dom->assigned_harts_lock);
/* Reconfigure PMP settings for the new domain */
- for (int i = 0; i < pmp_count; i++) {
- /* Don't revoke firmware access permissions */
- if (sbi_hart_smepmp_is_fw_region(i))
- continue;
-
- sbi_platform_pmp_disable(sbi_platform_thishart_ptr(), i);
- pmp_disable(i);
- }
+ sbi_hart_pmp_unconfigure(scratch);
sbi_hart_pmp_configure(scratch);
/* Save current CSR context and restore target domain's CSR context */
diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index a91703b4..c5a8d248 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -573,6 +573,20 @@ int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
return rc;
}
+void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
+{
+ int i, pmp_count = sbi_hart_pmp_count(scratch);
+
+ for (i = 0; i < pmp_count; i++) {
+ /* Don't revoke firmware access permissions */
+ if (sbi_hart_smepmp_is_fw_region(i))
+ continue;
+
+ sbi_platform_pmp_disable(sbi_platform_ptr(scratch), i);
+ pmp_disable(i);
+ }
+}
+
int sbi_hart_priv_version(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures =
--
2.43.0
* [PATCH 2/5] lib: sbi: Introduce hart protection abstraction
2025-11-26 14:18 [PATCH 0/5] OpenSBI hart protection abstraction Anup Patel
2025-11-26 14:18 ` [PATCH 1/5] lib: sbi: Introduce sbi_hart_pmp_unconfigure() function Anup Patel
@ 2025-11-26 14:18 ` Anup Patel
2025-12-07 11:10 ` Samuel Holland
2025-11-26 14:18 ` [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP Anup Patel
` (2 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: Anup Patel @ 2025-11-26 14:18 UTC (permalink / raw)
To: Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi, Anup Patel
Currently, PMP and ePMP are the only hart protection mechanisms
available in OpenSBI, but new protection mechanisms (such as Smmpt)
will be added in the near future.
To allow multiple hart protection mechanisms, introduce a hart
protection abstraction and related APIs.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
include/sbi/sbi_hart_protection.h | 88 ++++++++++++++++++++++++
lib/sbi/objects.mk | 1 +
lib/sbi/sbi_hart_protection.c | 110 ++++++++++++++++++++++++++++++
lib/sbi/sbi_init.c | 5 ++
4 files changed, 204 insertions(+)
create mode 100644 include/sbi/sbi_hart_protection.h
create mode 100644 lib/sbi/sbi_hart_protection.c
diff --git a/include/sbi/sbi_hart_protection.h b/include/sbi/sbi_hart_protection.h
new file mode 100644
index 00000000..c9e452db
--- /dev/null
+++ b/include/sbi/sbi_hart_protection.h
@@ -0,0 +1,88 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __SBI_HART_PROTECTION_H__
+#define __SBI_HART_PROTECTION_H__
+
+#include <sbi/sbi_types.h>
+#include <sbi/sbi_list.h>
+
+struct sbi_scratch;
+
+/** Representation of hart protection mechanism */
+struct sbi_hart_protection {
+ /** List head */
+ struct sbi_dlist head;
+
+ /** Name of the hart protection mechanism */
+ char name[32];
+
+ /** Rating of the hart protection mechanism (higher is better) */
+ unsigned long rating;
+
+ /** Configure protection for current HART (Mandatory) */
+ int (*configure)(struct sbi_scratch *scratch);
+
+ /** Unconfigure protection for current HART (Mandatory) */
+ void (*unconfigure)(struct sbi_scratch *scratch);
+
+ /** Create temporary mapping to access address range on current HART (Optional) */
+ int (*map_range)(struct sbi_scratch *scratch,
+ unsigned long base, unsigned long size);
+
+ /** Destroy temporary mapping on current HART (Optional) */
+ int (*unmap_range)(struct sbi_scratch *scratch,
+ unsigned long base, unsigned long size);
+};
+
+/**
+ * Get the best hart protection mechanism
+ *
+ * @return pointer to best hart protection mechanism
+ */
+struct sbi_hart_protection *sbi_hart_protection_best(void);
+
+/**
+ * Register a hart protection mechanism
+ *
+ * @return 0 on success and negative error code on failure
+ */
+int sbi_hart_protection_register(struct sbi_hart_protection *hprot);
+
+/**
+ * Unregister a hart protection mechanism
+ *
+ * @return 0 on success and negative error code on failure
+ */
+int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot);
+
+/**
+ * Configure protection for current HART
+ *
+ * @return 0 on success and negative error code on failure
+ */
+int sbi_hart_protection_configure(struct sbi_scratch *scratch);
+
+/**
+ * Unconfigure protection for current HART
+ */
+void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch);
+
+/**
+ * Create temporary mapping to access address range on current HART
+ *
+ * @return 0 on success and negative error code on failure
+ */
+int sbi_hart_protection_map_range(unsigned long base, unsigned long size);
+
+/**
+ * Destroy temporary mapping to access address range on current HART
+ *
+ * @return 0 on success and negative error code on failure
+ */
+int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size);
+
+#endif /* __SBI_HART_PROTECTION_H__ */
diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
index 8abe1e8e..51588cd1 100644
--- a/lib/sbi/objects.mk
+++ b/lib/sbi/objects.mk
@@ -75,6 +75,7 @@ libsbi-objs-y += sbi_emulate_csr.o
libsbi-objs-y += sbi_fifo.o
libsbi-objs-y += sbi_fwft.o
libsbi-objs-y += sbi_hart.o
+libsbi-objs-y += sbi_hart_protection.o
libsbi-objs-y += sbi_heap.o
libsbi-objs-y += sbi_math.o
libsbi-objs-y += sbi_hfence.o
diff --git a/lib/sbi/sbi_hart_protection.c b/lib/sbi/sbi_hart_protection.c
new file mode 100644
index 00000000..831072ad
--- /dev/null
+++ b/lib/sbi/sbi_hart_protection.c
@@ -0,0 +1,110 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#include <sbi/sbi_error.h>
+#include <sbi/sbi_hart_protection.h>
+#include <sbi/sbi_scratch.h>
+
+static SBI_LIST_HEAD(hart_protection_list);
+
+struct sbi_hart_protection *sbi_hart_protection_best(void)
+{
+ if (sbi_list_empty(&hart_protection_list))
+ return NULL;
+
+ return sbi_list_first_entry(&hart_protection_list, struct sbi_hart_protection, head);
+}
+
+int sbi_hart_protection_register(struct sbi_hart_protection *hprot)
+{
+ struct sbi_hart_protection *pos = NULL;
+ bool found_pos = false;
+
+ if (!hprot)
+ return SBI_EINVAL;
+
+ sbi_list_for_each_entry(pos, &hart_protection_list, head) {
+ if (hprot->rating > pos->rating) {
+ found_pos = true;
+ break;
+ }
+ }
+
+ if (found_pos)
+ sbi_list_add_tail(&hprot->head, &pos->head);
+ else
+ sbi_list_add_tail(&hprot->head, &hart_protection_list);
+
+ return 0;
+}
+
+int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot)
+{
+ struct sbi_hart_protection *pos = NULL;
+ bool found_pos = false;
+
+ if (!hprot)
+ return SBI_EINVAL;
+
+ sbi_list_for_each_entry(pos, &hart_protection_list, head) {
+ if (hprot == pos) {
+ found_pos = true;
+ break;
+ }
+ }
+
+ if (!found_pos)
+ return SBI_ENOENT;
+
+ sbi_list_del(&hprot->head);
+ return 0;
+}
+
+int sbi_hart_protection_configure(struct sbi_scratch *scratch)
+{
+ struct sbi_hart_protection *hprot = sbi_hart_protection_best();
+
+ if (!hprot)
+ return SBI_EINVAL;
+ if (!hprot->configure)
+ return SBI_ENOSYS;
+
+ return hprot->configure(scratch);
+}
+
+void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch)
+{
+ struct sbi_hart_protection *hprot = sbi_hart_protection_best();
+
+ if (!hprot || !hprot->unconfigure)
+ return;
+
+ hprot->unconfigure(scratch);
+}
+
+int sbi_hart_protection_map_range(unsigned long base, unsigned long size)
+{
+ struct sbi_hart_protection *hprot = sbi_hart_protection_best();
+
+ if (!hprot)
+ return SBI_EINVAL;
+ if (!hprot->map_range)
+ return 0;
+
+ return hprot->map_range(sbi_scratch_thishart_ptr(), base, size);
+}
+
+int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size)
+{
+ struct sbi_hart_protection *hprot = sbi_hart_protection_best();
+
+ if (!hprot)
+ return SBI_EINVAL;
+ if (!hprot->unmap_range)
+ return 0;
+
+ return hprot->unmap_range(sbi_scratch_thishart_ptr(), base, size);
+}
diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index 663b486b..b161d1c1 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -18,6 +18,7 @@
#include <sbi/sbi_fwft.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_hartmask.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_ipi.h>
@@ -74,6 +75,7 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
const struct sbi_hsm_device *hdev;
const struct sbi_ipi_device *idev;
const struct sbi_timer_device *tdev;
+ const struct sbi_hart_protection *hprot;
const struct sbi_console_device *cdev;
const struct sbi_system_reset_device *srdev;
const struct sbi_system_suspend_device *susp_dev;
@@ -90,6 +92,9 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
sbi_printf("Platform Features : %s\n", str);
sbi_printf("Platform HART Count : %u\n",
sbi_platform_hart_count(plat));
+ hprot = sbi_hart_protection_best();
+ sbi_printf("Platform HART Protection : %s\n",
+ (hprot) ? hprot->name : "---");
idev = sbi_ipi_get_device();
sbi_printf("Platform IPI Device : %s\n",
(idev) ? idev->name : "---");
--
2.43.0
* [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP
2025-11-26 14:18 [PATCH 0/5] OpenSBI hart protection abstraction Anup Patel
2025-11-26 14:18 ` [PATCH 1/5] lib: sbi: Introduce sbi_hart_pmp_unconfigure() function Anup Patel
2025-11-26 14:18 ` [PATCH 2/5] lib: sbi: Introduce hart protection abstraction Anup Patel
@ 2025-11-26 14:18 ` Anup Patel
2025-12-07 11:12 ` Samuel Holland
2025-11-26 14:18 ` [PATCH 4/5] lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr() Anup Patel
2025-11-26 14:18 ` [PATCH 5/5] lib: sbi: Factor-out PMP programming into separate sources Anup Patel
4 siblings, 1 reply; 12+ messages in thread
From: Anup Patel @ 2025-11-26 14:18 UTC (permalink / raw)
To: Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi, Anup Patel
Implement the hart protection abstraction for PMP and ePMP so
that usage of the sbi_hart_pmp_xyz() functions can be replaced
with the sbi_hart_protection_xyz() functions.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
lib/sbi/sbi_hart.c | 209 ++++++++++++++++++++++++++++-----------------
1 file changed, 133 insertions(+), 76 deletions(-)
diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index c5a8d248..8869839d 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -17,6 +17,7 @@
#include <sbi/sbi_csr_detect.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_math.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_pmu.h>
@@ -311,6 +312,30 @@ bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
}
+static void sbi_hart_pmp_fence(void)
+{
+ /*
+ * As per section 3.7.2 of privileged specification v1.12,
+ * virtual address translations can be speculatively performed
+ * (even before actual access). These, along with PMP translations,
+ * can be cached. This can pose a problem with CPU hotplug
+ * and non-retentive suspend scenario because PMP states are
+ * not preserved.
+ * It is advisable to flush the caching structures under such
+ * conditions.
+ */
+ if (misa_extension('S')) {
+ __asm__ __volatile__("sfence.vma");
+
+ /*
+ * If hypervisor mode is supported, flush caching
+ * structures in guest mode too.
+ */
+ if (misa_extension('H'))
+ __sbi_hfence_gvma_all();
+ }
+}
+
static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
struct sbi_domain *dom,
struct sbi_domain_memregion *reg,
@@ -343,14 +368,19 @@ static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
return false;
}
-static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
- unsigned int pmp_count,
- unsigned int pmp_log2gran,
- unsigned long pmp_addr_max)
+static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
- unsigned int pmp_idx, pmp_flags;
+ unsigned int pmp_log2gran, pmp_bits;
+ unsigned int pmp_idx, pmp_count;
+ unsigned long pmp_addr_max;
+ unsigned int pmp_flags;
+
+ pmp_count = sbi_hart_pmp_count(scratch);
+ pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
+ pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
+ pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
/*
* Set the RLB so that, we can write to PMP entries without
@@ -430,20 +460,64 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
* Keep the RLB bit so that dynamic mappings can be done.
*/
+ sbi_hart_pmp_fence();
return 0;
}
-static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
- unsigned int pmp_count,
- unsigned int pmp_log2gran,
- unsigned long pmp_addr_max)
+static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
+ unsigned long addr, unsigned long size)
+{
+ /* shared R/W access for M and S/U mode */
+ unsigned int pmp_flags = (PMP_W | PMP_X);
+ unsigned long order, base = 0;
+
+ if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
+ return SBI_ENOSPC;
+
+ for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
+ order <= __riscv_xlen; order++) {
+ if (order < __riscv_xlen) {
+ base = addr & ~((1UL << order) - 1UL);
+ if ((base <= addr) &&
+ (addr < (base + (1UL << order))) &&
+ (base <= (addr + size - 1UL)) &&
+ ((addr + size - 1UL) < (base + (1UL << order))))
+ break;
+ } else {
+ return SBI_EFAIL;
+ }
+ }
+
+ sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
+ SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
+ pmp_flags, base, order);
+ pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
+
+ return SBI_OK;
+}
+
+static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
+ unsigned long addr, unsigned long size)
+{
+ sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
+ return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+}
+
+static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
{
struct sbi_domain_memregion *reg;
struct sbi_domain *dom = sbi_domain_thishart_ptr();
- unsigned int pmp_idx = 0;
+ unsigned long pmp_addr, pmp_addr_max;
+ unsigned int pmp_log2gran, pmp_bits;
+ unsigned int pmp_idx, pmp_count;
unsigned int pmp_flags;
- unsigned long pmp_addr;
+ pmp_count = sbi_hart_pmp_count(scratch);
+ pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
+ pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
+ pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
+
+ pmp_idx = 0;
sbi_domain_for_each_memregion(dom, reg) {
if (!is_valid_pmp_idx(pmp_count, pmp_idx))
return SBI_EFAIL;
@@ -478,43 +552,19 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
}
}
+ sbi_hart_pmp_fence();
return 0;
}
int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
{
- /* shared R/W access for M and S/U mode */
- unsigned int pmp_flags = (PMP_W | PMP_X);
- unsigned long order, base = 0;
struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
/* If Smepmp is not supported no special mapping is required */
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
- if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
- return SBI_ENOSPC;
-
- for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
- order <= __riscv_xlen; order++) {
- if (order < __riscv_xlen) {
- base = addr & ~((1UL << order) - 1UL);
- if ((base <= addr) &&
- (addr < (base + (1UL << order))) &&
- (base <= (addr + size - 1UL)) &&
- ((addr + size - 1UL) < (base + (1UL << order))))
- break;
- } else {
- return SBI_EFAIL;
- }
- }
-
- sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
- SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
- pmp_flags, base, order);
- pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
-
- return SBI_OK;
+ return sbi_hart_smepmp_map_range(scratch, addr, size);
}
int sbi_hart_unmap_saddr(void)
@@ -524,53 +574,18 @@ int sbi_hart_unmap_saddr(void)
if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
return SBI_OK;
- sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
- return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+ return sbi_hart_smepmp_unmap_range(scratch, 0, 0);
}
int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
{
- int rc;
- unsigned int pmp_bits, pmp_log2gran;
- unsigned int pmp_count = sbi_hart_pmp_count(scratch);
- unsigned long pmp_addr_max;
-
- if (!pmp_count)
+ if (!sbi_hart_pmp_count(scratch))
return 0;
- pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
- pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
- pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
-
if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
- rc = sbi_hart_smepmp_configure(scratch, pmp_count,
- pmp_log2gran, pmp_addr_max);
+ return sbi_hart_smepmp_configure(scratch);
else
- rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
- pmp_log2gran, pmp_addr_max);
-
- /*
- * As per section 3.7.2 of privileged specification v1.12,
- * virtual address translations can be speculatively performed
- * (even before actual access). These, along with PMP traslations,
- * can be cached. This can pose a problem with CPU hotplug
- * and non-retentive suspend scenario because PMP states are
- * not preserved.
- * It is advisable to flush the caching structures under such
- * conditions.
- */
- if (misa_extension('S')) {
- __asm__ __volatile__("sfence.vma");
-
- /*
- * If hypervisor mode is supported, flush caching
- * structures in guest mode too.
- */
- if (misa_extension('H'))
- __sbi_hfence_gvma_all();
- }
-
- return rc;
+ return sbi_hart_oldpmp_configure(scratch);
}
void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
@@ -587,6 +602,42 @@ void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
}
}
+static struct sbi_hart_protection pmp_protection = {
+ .name = "pmp",
+ .rating = 100,
+ .configure = sbi_hart_oldpmp_configure,
+ .unconfigure = sbi_hart_pmp_unconfigure,
+};
+
+static struct sbi_hart_protection epmp_protection = {
+ .name = "epmp",
+ .rating = 200,
+ .configure = sbi_hart_smepmp_configure,
+ .unconfigure = sbi_hart_pmp_unconfigure,
+ .map_range = sbi_hart_smepmp_map_range,
+ .unmap_range = sbi_hart_smepmp_unmap_range,
+};
+
+static int sbi_hart_pmp_init(struct sbi_scratch *scratch)
+{
+ int rc;
+
+ if (!sbi_hart_pmp_count(scratch))
+ return 0;
+
+ rc = sbi_hart_protection_register(&pmp_protection);
+ if (rc)
+ return rc;
+
+ if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP)) {
+ rc = sbi_hart_protection_register(&epmp_protection);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
int sbi_hart_priv_version(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures =
@@ -1051,6 +1102,12 @@ int sbi_hart_init(struct sbi_scratch *scratch, bool cold_boot)
if (rc)
return rc;
+ if (cold_boot) {
+ rc = sbi_hart_pmp_init(scratch);
+ if (rc)
+ return rc;
+ }
+
rc = delegate_traps(scratch);
if (rc)
return rc;
--
2.43.0
* [PATCH 4/5] lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr()
2025-11-26 14:18 [PATCH 0/5] OpenSBI hart protection abstraction Anup Patel
` (2 preceding siblings ...)
2025-11-26 14:18 ` [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP Anup Patel
@ 2025-11-26 14:18 ` Anup Patel
2025-11-26 14:18 ` [PATCH 5/5] lib: sbi: Factor-out PMP programming into separate sources Anup Patel
4 siblings, 0 replies; 12+ messages in thread
From: Anup Patel @ 2025-11-26 14:18 UTC (permalink / raw)
To: Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi, Anup Patel
The sbi_hart_pmp_xyz() and sbi_hart_map/unmap_saddr() functions can
now be replaced by the various sbi_hart_protection_xyz() functions.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
include/sbi/sbi_hart.h | 8 ++------
lib/sbi/sbi_dbtr.c | 33 ++++++++++++++++++++-------------
lib/sbi/sbi_domain_context.c | 5 +++--
lib/sbi/sbi_ecall_dbcn.c | 6 +++---
lib/sbi/sbi_hart.c | 34 +---------------------------------
lib/sbi/sbi_init.c | 16 ++++++++--------
lib/sbi/sbi_mpxy.c | 25 +++++++++++++------------
lib/sbi/sbi_pmu.c | 5 +++--
lib/sbi/sbi_sse.c | 9 +++++----
9 files changed, 58 insertions(+), 83 deletions(-)
diff --git a/include/sbi/sbi_hart.h b/include/sbi/sbi_hart.h
index 93682880..539f95de 100644
--- a/include/sbi/sbi_hart.h
+++ b/include/sbi/sbi_hart.h
@@ -115,8 +115,8 @@ enum sbi_hart_csrs {
* When shared memory access is required, the physical address
* should be programmed into the first PMP entry with R/W
* permissions to the M-mode. Once the work is done, it should be
- * unmapped. sbi_hart_map_saddr/sbi_hart_unmap_saddr function
- * pair should be used to map/unmap the shared memory.
+ * unmapped. sbi_hart_protection_map_range/sbi_hart_protection_unmap_range
+ * function pair should be used to map/unmap the shared memory.
*/
#define SBI_SMEPMP_RESV_ENTRY 0
@@ -147,10 +147,6 @@ unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch);
bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx);
-int sbi_hart_pmp_configure(struct sbi_scratch *scratch);
-void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch);
-int sbi_hart_map_saddr(unsigned long base, unsigned long size);
-int sbi_hart_unmap_saddr(void);
int sbi_hart_priv_version(struct sbi_scratch *scratch);
void sbi_hart_get_priv_version_str(struct sbi_scratch *scratch,
char *version_str, int nvstr);
diff --git a/lib/sbi/sbi_dbtr.c b/lib/sbi/sbi_dbtr.c
index 656a6261..8bcb4312 100644
--- a/lib/sbi/sbi_dbtr.c
+++ b/lib/sbi/sbi_dbtr.c
@@ -16,6 +16,7 @@
#include <sbi/sbi_trap.h>
#include <sbi/sbi_dbtr.h>
#include <sbi/sbi_heap.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/riscv_encoding.h>
#include <sbi/riscv_asm.h>
@@ -558,8 +559,8 @@ int sbi_dbtr_read_trig(unsigned long smode,
shmem_base = hart_shmem_base(hs);
- sbi_hart_map_saddr((unsigned long)shmem_base,
- trig_count * sizeof(*entry));
+ sbi_hart_protection_map_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
for_each_trig_entry(shmem_base, trig_count, typeof(*entry), entry) {
xmit = &entry->data;
trig = INDEX_TO_TRIGGER((_idx + trig_idx_base));
@@ -572,7 +573,8 @@ int sbi_dbtr_read_trig(unsigned long smode,
xmit->tdata2 = cpu_to_lle(trig->tdata2);
xmit->tdata3 = cpu_to_lle(trig->tdata3);
}
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
return SBI_SUCCESS;
}
@@ -596,8 +598,8 @@ int sbi_dbtr_install_trig(unsigned long smode,
return SBI_ERR_NO_SHMEM;
shmem_base = hart_shmem_base(hs);
- sbi_hart_map_saddr((unsigned long)shmem_base,
- trig_count * sizeof(*entry));
+ sbi_hart_protection_map_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
/* Check requested triggers configuration */
for_each_trig_entry(shmem_base, trig_count, typeof(*entry), entry) {
@@ -606,20 +608,23 @@ int sbi_dbtr_install_trig(unsigned long smode,
if (!dbtr_trigger_supported(TDATA1_GET_TYPE(ctrl))) {
*out = _idx;
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
return SBI_ERR_FAILED;
}
if (!dbtr_trigger_valid(TDATA1_GET_TYPE(ctrl), ctrl)) {
*out = _idx;
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
return SBI_ERR_FAILED;
}
}
if (hs->available_trigs < trig_count) {
*out = hs->available_trigs;
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
return SBI_ERR_FAILED;
}
@@ -639,7 +644,9 @@ int sbi_dbtr_install_trig(unsigned long smode,
xmit->idx = cpu_to_lle(trig->index);
}
- sbi_hart_unmap_saddr();
+
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base,
+ trig_count * sizeof(*entry));
return SBI_SUCCESS;
}
@@ -712,23 +719,23 @@ int sbi_dbtr_update_trig(unsigned long smode,
return SBI_ERR_BAD_RANGE;
for_each_trig_entry(shmem_base, trig_count, typeof(*entry), entry) {
- sbi_hart_map_saddr((unsigned long)entry, sizeof(*entry));
+ sbi_hart_protection_map_range((unsigned long)entry, sizeof(*entry));
trig_idx = entry->id.idx;
if (trig_idx >= hs->total_trigs) {
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)entry, sizeof(*entry));
return SBI_ERR_INVALID_PARAM;
}
trig = INDEX_TO_TRIGGER(trig_idx);
if (!(trig->state & RV_DBTR_BIT_MASK(TS, MAPPED))) {
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)entry, sizeof(*entry));
return SBI_ERR_FAILED;
}
dbtr_trigger_setup(trig, &entry->data);
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)entry, sizeof(*entry));
dbtr_trigger_enable(trig);
}
diff --git a/lib/sbi/sbi_domain_context.c b/lib/sbi/sbi_domain_context.c
index 8cf47323..f12be049 100644
--- a/lib/sbi/sbi_domain_context.c
+++ b/lib/sbi/sbi_domain_context.c
@@ -10,6 +10,7 @@
#include <sbi/sbi_console.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_scratch.h>
#include <sbi/sbi_string.h>
@@ -120,8 +121,8 @@ static int switch_to_next_domain_context(struct hart_context *ctx,
spin_unlock(&target_dom->assigned_harts_lock);
/* Reconfigure PMP settings for the new domain */
- sbi_hart_pmp_unconfigure(scratch);
- sbi_hart_pmp_configure(scratch);
+ sbi_hart_protection_unconfigure(scratch);
+ sbi_hart_protection_configure(scratch);
/* Save current CSR context and restore target domain's CSR context */
ctx->sstatus = csr_swap(CSR_SSTATUS, dom_ctx->sstatus);
diff --git a/lib/sbi/sbi_ecall_dbcn.c b/lib/sbi/sbi_ecall_dbcn.c
index 1b0aebdc..75c8455d 100644
--- a/lib/sbi/sbi_ecall_dbcn.c
+++ b/lib/sbi/sbi_ecall_dbcn.c
@@ -14,7 +14,7 @@
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_trap.h>
#include <sbi/riscv_asm.h>
-#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
struct sbi_trap_regs *regs,
@@ -46,12 +46,12 @@ static int sbi_ecall_dbcn_handler(unsigned long extid, unsigned long funcid,
regs->a1, regs->a0, smode,
SBI_DOMAIN_READ|SBI_DOMAIN_WRITE))
return SBI_ERR_INVALID_PARAM;
- sbi_hart_map_saddr(regs->a1, regs->a0);
+ sbi_hart_protection_map_range(regs->a1, regs->a0);
if (funcid == SBI_EXT_DBCN_CONSOLE_WRITE)
out->value = sbi_nputs((const char *)regs->a1, regs->a0);
else
out->value = sbi_ngets((char *)regs->a1, regs->a0);
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range(regs->a1, regs->a0);
return 0;
case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE:
sbi_putc(regs->a0);
diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index 8869839d..3fdf1047 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -556,39 +556,7 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
return 0;
}
-int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
-{
- struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-
- /* If Smepmp is not supported no special mapping is required */
- if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
- return SBI_OK;
-
- return sbi_hart_smepmp_map_range(scratch, addr, size);
-}
-
-int sbi_hart_unmap_saddr(void)
-{
- struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
-
- if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
- return SBI_OK;
-
- return sbi_hart_smepmp_unmap_range(scratch, 0, 0);
-}
-
-int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
-{
- if (!sbi_hart_pmp_count(scratch))
- return 0;
-
- if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
- return sbi_hart_smepmp_configure(scratch);
- else
- return sbi_hart_oldpmp_configure(scratch);
-}
-
-void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
+static void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
{
int i, pmp_count = sbi_hart_pmp_count(scratch);
diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index b161d1c1..e01d26bf 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -389,12 +389,12 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
}
/*
- * Configure PMP at last because if SMEPMP is detected,
- * M-mode access to the S/U space will be rescinded.
+ * Configure hart isolation at last because if SMEPMP is
+ * detected, M-mode access to the S/U space will be rescinded.
*/
- rc = sbi_hart_pmp_configure(scratch);
+ rc = sbi_hart_protection_configure(scratch);
if (rc) {
- sbi_printf("%s: PMP configure failed (error %d)\n",
+ sbi_printf("%s: hart isolation configure failed (error %d)\n",
__func__, rc);
sbi_hart_hang();
}
@@ -468,10 +468,10 @@ static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
sbi_hart_hang();
/*
- * Configure PMP at last because if SMEPMP is detected,
- * M-mode access to the S/U space will be rescinded.
+ * Configure hart isolation at last because if SMEPMP is
+ * detected, M-mode access to the S/U space will be rescinded.
*/
- rc = sbi_hart_pmp_configure(scratch);
+ rc = sbi_hart_protection_configure(scratch);
if (rc)
sbi_hart_hang();
@@ -492,7 +492,7 @@ static void __noreturn init_warm_resume(struct sbi_scratch *scratch,
if (rc)
sbi_hart_hang();
- rc = sbi_hart_pmp_configure(scratch);
+ rc = sbi_hart_protection_configure(scratch);
if (rc)
sbi_hart_hang();
diff --git a/lib/sbi/sbi_mpxy.c b/lib/sbi/sbi_mpxy.c
index 681cfbe9..a83cf16c 100644
--- a/lib/sbi/sbi_mpxy.c
+++ b/lib/sbi/sbi_mpxy.c
@@ -11,6 +11,7 @@
#include <sbi/sbi_domain.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_mpxy.h>
@@ -375,10 +376,10 @@ int sbi_mpxy_set_shmem(unsigned long shmem_phys_lo,
if (flags == SBI_EXT_MPXY_SHMEM_FLAG_OVERWRITE_RETURN) {
ret_buf = (unsigned long *)(ulong)SHMEM_PHYS_ADDR(shmem_phys_hi,
shmem_phys_lo);
- sbi_hart_map_saddr((unsigned long)ret_buf, mpxy_shmem_size);
+ sbi_hart_protection_map_range((unsigned long)ret_buf, mpxy_shmem_size);
ret_buf[0] = cpu_to_lle(ms->shmem.shmem_addr_lo);
ret_buf[1] = cpu_to_lle(ms->shmem.shmem_addr_hi);
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)ret_buf, mpxy_shmem_size);
}
/** Setup the new shared memory */
@@ -407,7 +408,7 @@ int sbi_mpxy_get_channel_ids(u32 start_index)
return SBI_ERR_INVALID_PARAM;
shmem_base = hart_shmem_base(ms);
- sbi_hart_map_saddr((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
+ sbi_hart_protection_map_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
/** number of channel ids which can be stored in shmem adjusting
* for remaining and returned fields */
@@ -434,7 +435,7 @@ int sbi_mpxy_get_channel_ids(u32 start_index)
shmem_base[0] = cpu_to_le32(remaining);
shmem_base[1] = cpu_to_le32(returned);
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
return SBI_SUCCESS;
}
@@ -465,7 +466,7 @@ int sbi_mpxy_read_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
shmem_base = hart_shmem_base(ms);
end_id = base_attr_id + attr_count - 1;
- sbi_hart_map_saddr((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
+ sbi_hart_protection_map_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
/* Standard attributes range check */
if (mpxy_is_std_attr(base_attr_id)) {
@@ -504,7 +505,7 @@ int sbi_mpxy_read_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
base_attr_id, attr_count);
}
out:
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)hart_shmem_base(ms), mpxy_shmem_size);
return ret;
}
@@ -616,7 +617,7 @@ int sbi_mpxy_write_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
shmem_base = hart_shmem_base(ms);
end_id = base_attr_id + attr_count - 1;
- sbi_hart_map_saddr((unsigned long)shmem_base, mpxy_shmem_size);
+ sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);
mem_ptr = (u32 *)shmem_base;
@@ -673,7 +674,7 @@ int sbi_mpxy_write_attrs(u32 channel_id, u32 base_attr_id, u32 attr_count)
base_attr_id, attr_count);
}
out:
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);
return ret;
}
@@ -705,7 +706,7 @@ int sbi_mpxy_send_message(u32 channel_id, u8 msg_id,
return SBI_ERR_INVALID_PARAM;
shmem_base = hart_shmem_base(ms);
- sbi_hart_map_saddr((unsigned long)shmem_base, mpxy_shmem_size);
+ sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);
if (resp_data_len) {
resp_buf = shmem_base;
@@ -722,7 +723,7 @@ int sbi_mpxy_send_message(u32 channel_id, u8 msg_id,
msg_data_len);
}
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);
if (ret == SBI_ERR_TIMEOUT || ret == SBI_ERR_IO)
return ret;
@@ -752,12 +753,12 @@ int sbi_mpxy_get_notification_events(u32 channel_id, unsigned long *events_len)
return SBI_ERR_NOT_SUPPORTED;
shmem_base = hart_shmem_base(ms);
- sbi_hart_map_saddr((unsigned long)shmem_base, mpxy_shmem_size);
+ sbi_hart_protection_map_range((unsigned long)shmem_base, mpxy_shmem_size);
eventsbuf = shmem_base;
ret = channel->get_notification_events(channel, eventsbuf,
mpxy_shmem_size,
events_len);
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range((unsigned long)shmem_base, mpxy_shmem_size);
if (ret)
return ret;
diff --git a/lib/sbi/sbi_pmu.c b/lib/sbi/sbi_pmu.c
index 70c49abc..b943d99c 100644
--- a/lib/sbi/sbi_pmu.c
+++ b/lib/sbi/sbi_pmu.c
@@ -13,6 +13,7 @@
#include <sbi/sbi_domain.h>
#include <sbi/sbi_ecall_interface.h>
#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_pmu.h>
@@ -1034,7 +1035,7 @@ int sbi_pmu_event_get_info(unsigned long shmem_phys_lo, unsigned long shmem_phys
SBI_DOMAIN_READ | SBI_DOMAIN_WRITE))
return SBI_ERR_INVALID_ADDRESS;
- sbi_hart_map_saddr(shmem_phys_lo, shmem_size);
+ sbi_hart_protection_map_range(shmem_phys_lo, shmem_size);
einfo = (struct sbi_pmu_event_info *)(shmem_phys_lo);
for (i = 0; i < num_events; i++) {
@@ -1068,7 +1069,7 @@ int sbi_pmu_event_get_info(unsigned long shmem_phys_lo, unsigned long shmem_phys
}
}
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range(shmem_phys_lo, shmem_size);
return 0;
}
diff --git a/lib/sbi/sbi_sse.c b/lib/sbi/sbi_sse.c
index 986b7701..a14754f8 100644
--- a/lib/sbi/sbi_sse.c
+++ b/lib/sbi/sbi_sse.c
@@ -15,6 +15,7 @@
#include <sbi/sbi_error.h>
#include <sbi/sbi_fifo.h>
#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_hsm.h>
#include <sbi/sbi_ipi.h>
@@ -1036,7 +1037,7 @@ int sbi_sse_read_attrs(uint32_t event_id, uint32_t base_attr_id,
if (ret)
return ret;
- sbi_hart_map_saddr(output_phys_lo, sizeof(unsigned long) * attr_count);
+ sbi_hart_protection_map_range(output_phys_lo, sizeof(unsigned long) * attr_count);
/*
* Copy all attributes at once since struct sse_event_attrs is matching
@@ -1049,7 +1050,7 @@ int sbi_sse_read_attrs(uint32_t event_id, uint32_t base_attr_id,
attrs = (unsigned long *)output_phys_lo;
copy_attrs(attrs, &e_attrs[base_attr_id], attr_count);
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range(output_phys_lo, sizeof(unsigned long) * attr_count);
sse_event_put(e);
@@ -1064,7 +1065,7 @@ static int sse_write_attrs(struct sbi_sse_event *e, uint32_t base_attr_id,
uint32_t id, end_id = base_attr_id + attr_count;
unsigned long *attrs = (unsigned long *)input_phys;
- sbi_hart_map_saddr(input_phys, sizeof(unsigned long) * attr_count);
+ sbi_hart_protection_map_range(input_phys, sizeof(unsigned long) * attr_count);
for (id = base_attr_id; id < end_id; id++) {
val = attrs[attr++];
@@ -1080,7 +1081,7 @@ static int sse_write_attrs(struct sbi_sse_event *e, uint32_t base_attr_id,
}
out:
- sbi_hart_unmap_saddr();
+ sbi_hart_protection_unmap_range(input_phys, sizeof(unsigned long) * attr_count);
return ret;
}
--
2.43.0
* [PATCH 5/5] lib: sbi: Factor-out PMP programming into separate sources
2025-11-26 14:18 [PATCH 0/5] OpenSBI hart protection abstraction Anup Patel
` (3 preceding siblings ...)
2025-11-26 14:18 ` [PATCH 4/5] lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr() Anup Patel
@ 2025-11-26 14:18 ` Anup Patel
4 siblings, 0 replies; 12+ messages in thread
From: Anup Patel @ 2025-11-26 14:18 UTC (permalink / raw)
To: Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi, Anup Patel
PMP programming is a significant part of sbi_hart.c, so factor it out
into separate sources sbi_hart_pmp.c and sbi_hart_pmp.h for better
maintainability.
Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
include/sbi/sbi_hart.h | 22 +--
include/sbi/sbi_hart_pmp.h | 20 +++
lib/sbi/objects.mk | 1 +
lib/sbi/sbi_hart.c | 335 +---------------------------------
lib/sbi/sbi_hart_pmp.c | 356 +++++++++++++++++++++++++++++++++++++
lib/sbi/sbi_init.c | 1 +
6 files changed, 383 insertions(+), 352 deletions(-)
create mode 100644 include/sbi/sbi_hart_pmp.h
create mode 100644 lib/sbi/sbi_hart_pmp.c
diff --git a/include/sbi/sbi_hart.h b/include/sbi/sbi_hart.h
index 539f95de..81019f73 100644
--- a/include/sbi/sbi_hart.h
+++ b/include/sbi/sbi_hart.h
@@ -105,21 +105,6 @@ enum sbi_hart_csrs {
SBI_HART_CSR_MAX,
};
-/*
- * Smepmp enforces access boundaries between M-mode and
- * S/U-mode. When it is enabled, the PMPs are programmed
- * such that M-mode doesn't have access to S/U-mode memory.
- *
- * To give M-mode R/W access to the shared memory between M and
- * S/U-mode, first entry is reserved. It is disabled at boot.
- * When shared memory access is required, the physical address
- * should be programmed into the first PMP entry with R/W
- * permissions to the M-mode. Once the work is done, it should be
- * unmapped. sbi_hart_protection_map_range/sbi_hart_protection_unmap_range
- * function pair should be used to map/unmap the shared memory.
- */
-#define SBI_SMEPMP_RESV_ENTRY 0
-
struct sbi_hart_features {
bool detected;
int priv_version;
@@ -132,6 +117,9 @@ struct sbi_hart_features {
unsigned int mhpm_bits;
};
+extern unsigned long hart_features_offset;
+#define sbi_hart_features_ptr(__s) sbi_scratch_offset_ptr(__s, hart_features_offset)
+
struct sbi_scratch;
int sbi_hart_reinit(struct sbi_scratch *scratch);
@@ -142,11 +130,7 @@ extern void (*sbi_hart_expected_trap)(void);
unsigned int sbi_hart_mhpm_mask(struct sbi_scratch *scratch);
void sbi_hart_delegation_dump(struct sbi_scratch *scratch,
const char *prefix, const char *suffix);
-unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch);
-unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
-unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch);
-bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx);
int sbi_hart_priv_version(struct sbi_scratch *scratch);
void sbi_hart_get_priv_version_str(struct sbi_scratch *scratch,
char *version_str, int nvstr);
diff --git a/include/sbi/sbi_hart_pmp.h b/include/sbi/sbi_hart_pmp.h
new file mode 100644
index 00000000..f54e8b2a
--- /dev/null
+++ b/include/sbi/sbi_hart_pmp.h
@@ -0,0 +1,20 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __SBI_HART_PMP_H__
+#define __SBI_HART_PMP_H__
+
+#include <sbi/sbi_types.h>
+
+struct sbi_scratch;
+
+unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch);
+unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch);
+unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch);
+bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx);
+int sbi_hart_pmp_init(struct sbi_scratch *scratch);
+
+#endif
diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
index 51588cd1..07d13229 100644
--- a/lib/sbi/objects.mk
+++ b/lib/sbi/objects.mk
@@ -75,6 +75,7 @@ libsbi-objs-y += sbi_emulate_csr.o
libsbi-objs-y += sbi_fifo.o
libsbi-objs-y += sbi_fwft.o
libsbi-objs-y += sbi_hart.o
+libsbi-objs-y += sbi_hart_pmp.o
libsbi-objs-y += sbi_hart_protection.o
libsbi-objs-y += sbi_heap.o
libsbi-objs-y += sbi_math.o
diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
index 3fdf1047..d9151569 100644
--- a/lib/sbi/sbi_hart.c
+++ b/lib/sbi/sbi_hart.c
@@ -13,26 +13,21 @@
#include <sbi/riscv_fp.h>
#include <sbi/sbi_bitops.h>
#include <sbi/sbi_console.h>
-#include <sbi/sbi_domain.h>
#include <sbi/sbi_csr_detect.h>
#include <sbi/sbi_error.h>
#include <sbi/sbi_hart.h>
-#include <sbi/sbi_hart_protection.h>
-#include <sbi/sbi_math.h>
+#include <sbi/sbi_hart_pmp.h>
#include <sbi/sbi_platform.h>
#include <sbi/sbi_pmu.h>
#include <sbi/sbi_string.h>
#include <sbi/sbi_trap.h>
-#include <sbi/sbi_hfence.h>
extern void __sbi_expected_trap(void);
extern void __sbi_expected_trap_hext(void);
void (*sbi_hart_expected_trap)(void) = &__sbi_expected_trap;
-static unsigned long hart_features_offset;
-static DECLARE_BITMAP(fw_smepmp_ids, PMP_COUNT);
-static bool fw_smepmp_ids_inited;
+unsigned long hart_features_offset;
static void mstatus_init(struct sbi_scratch *scratch)
{
@@ -272,30 +267,6 @@ unsigned int sbi_hart_mhpm_mask(struct sbi_scratch *scratch)
return hfeatures->mhpm_mask;
}
-unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch)
-{
- struct sbi_hart_features *hfeatures =
- sbi_scratch_offset_ptr(scratch, hart_features_offset);
-
- return hfeatures->pmp_count;
-}
-
-unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch)
-{
- struct sbi_hart_features *hfeatures =
- sbi_scratch_offset_ptr(scratch, hart_features_offset);
-
- return hfeatures->pmp_log2gran;
-}
-
-unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch)
-{
- struct sbi_hart_features *hfeatures =
- sbi_scratch_offset_ptr(scratch, hart_features_offset);
-
- return hfeatures->pmp_addr_bits;
-}
-
unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures =
@@ -304,308 +275,6 @@ unsigned int sbi_hart_mhpm_bits(struct sbi_scratch *scratch)
return hfeatures->mhpm_bits;
}
-bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
-{
- if (!fw_smepmp_ids_inited)
- return false;
-
- return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
-}
-
-static void sbi_hart_pmp_fence(void)
-{
- /*
- * As per section 3.7.2 of privileged specification v1.12,
- * virtual address translations can be speculatively performed
- * (even before actual access). These, along with PMP traslations,
- * can be cached. This can pose a problem with CPU hotplug
- * and non-retentive suspend scenario because PMP states are
- * not preserved.
- * It is advisable to flush the caching structures under such
- * conditions.
- */
- if (misa_extension('S')) {
- __asm__ __volatile__("sfence.vma");
-
- /*
- * If hypervisor mode is supported, flush caching
- * structures in guest mode too.
- */
- if (misa_extension('H'))
- __sbi_hfence_gvma_all();
- }
-}
-
-static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
- struct sbi_domain *dom,
- struct sbi_domain_memregion *reg,
- unsigned int pmp_idx,
- unsigned int pmp_flags,
- unsigned int pmp_log2gran,
- unsigned long pmp_addr_max)
-{
- unsigned long pmp_addr = reg->base >> PMP_SHIFT;
-
- if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
- sbi_platform_pmp_set(sbi_platform_ptr(scratch),
- pmp_idx, reg->flags, pmp_flags,
- reg->base, reg->order);
- pmp_set(pmp_idx, pmp_flags, reg->base, reg->order);
- } else {
- sbi_printf("Can not configure pmp for domain %s because"
- " memory region address 0x%lx or size 0x%lx "
- "is not in range.\n", dom->name, reg->base,
- reg->order);
- }
-}
-
-static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
-{
- if (pmp_count > pmp_idx)
- return true;
-
- sbi_printf("error: insufficient PMP entries\n");
- return false;
-}
-
-static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
-{
- struct sbi_domain_memregion *reg;
- struct sbi_domain *dom = sbi_domain_thishart_ptr();
- unsigned int pmp_log2gran, pmp_bits;
- unsigned int pmp_idx, pmp_count;
- unsigned long pmp_addr_max;
- unsigned int pmp_flags;
-
- pmp_count = sbi_hart_pmp_count(scratch);
- pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
- pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
- pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
-
- /*
- * Set the RLB so that, we can write to PMP entries without
- * enforcement even if some entries are locked.
- */
- csr_set(CSR_MSECCFG, MSECCFG_RLB);
-
- /* Disable the reserved entry */
- pmp_disable(SBI_SMEPMP_RESV_ENTRY);
-
- /* Program M-only regions when MML is not set. */
- pmp_idx = 0;
- sbi_domain_for_each_memregion(dom, reg) {
- /* Skip reserved entry */
- if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
- pmp_idx++;
- if (!is_valid_pmp_idx(pmp_count, pmp_idx))
- return SBI_EFAIL;
-
- /* Skip shared and SU-only regions */
- if (!SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
- pmp_idx++;
- continue;
- }
-
- /*
- * Track firmware PMP entries to preserve them during
- * domain switches. Under SmePMP, M-mode requires
- * explicit PMP entries to access firmware code/data.
- * These entries must remain enabled across domain
- * context switches to prevent M-mode access faults.
- */
- if (SBI_DOMAIN_MEMREGION_IS_FIRMWARE(reg->flags)) {
- if (fw_smepmp_ids_inited) {
- /* Check inconsistent firmware region */
- if (!sbi_hart_smepmp_is_fw_region(pmp_idx))
- return SBI_EINVAL;
- } else {
- bitmap_set(fw_smepmp_ids, pmp_idx, 1);
- }
- }
-
- pmp_flags = sbi_domain_get_smepmp_flags(reg);
-
- sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
- pmp_log2gran, pmp_addr_max);
- }
-
- fw_smepmp_ids_inited = true;
-
- /* Set the MML to enforce new encoding */
- csr_set(CSR_MSECCFG, MSECCFG_MML);
-
- /* Program shared and SU-only regions */
- pmp_idx = 0;
- sbi_domain_for_each_memregion(dom, reg) {
- /* Skip reserved entry */
- if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
- pmp_idx++;
- if (!is_valid_pmp_idx(pmp_count, pmp_idx))
- return SBI_EFAIL;
-
- /* Skip M-only regions */
- if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
- pmp_idx++;
- continue;
- }
-
- pmp_flags = sbi_domain_get_smepmp_flags(reg);
-
- sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
- pmp_log2gran, pmp_addr_max);
- }
-
- /*
- * All entries are programmed.
- * Keep the RLB bit so that dynamic mappings can be done.
- */
-
- sbi_hart_pmp_fence();
- return 0;
-}
-
-static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
- unsigned long addr, unsigned long size)
-{
- /* shared R/W access for M and S/U mode */
- unsigned int pmp_flags = (PMP_W | PMP_X);
- unsigned long order, base = 0;
-
- if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
- return SBI_ENOSPC;
-
- for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
- order <= __riscv_xlen; order++) {
- if (order < __riscv_xlen) {
- base = addr & ~((1UL << order) - 1UL);
- if ((base <= addr) &&
- (addr < (base + (1UL << order))) &&
- (base <= (addr + size - 1UL)) &&
- ((addr + size - 1UL) < (base + (1UL << order))))
- break;
- } else {
- return SBI_EFAIL;
- }
- }
-
- sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
- SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
- pmp_flags, base, order);
- pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
-
- return SBI_OK;
-}
-
-static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
- unsigned long addr, unsigned long size)
-{
- sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
- return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
-}
-
-static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
-{
- struct sbi_domain_memregion *reg;
- struct sbi_domain *dom = sbi_domain_thishart_ptr();
- unsigned long pmp_addr, pmp_addr_max;
- unsigned int pmp_log2gran, pmp_bits;
- unsigned int pmp_idx, pmp_count;
- unsigned int pmp_flags;
-
- pmp_count = sbi_hart_pmp_count(scratch);
- pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
- pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
- pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
-
- pmp_idx = 0;
- sbi_domain_for_each_memregion(dom, reg) {
- if (!is_valid_pmp_idx(pmp_count, pmp_idx))
- return SBI_EFAIL;
-
- pmp_flags = 0;
-
- /*
- * If permissions are to be enforced for all modes on
- * this region, the lock bit should be set.
- */
- if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS)
- pmp_flags |= PMP_L;
-
- if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
- pmp_flags |= PMP_R;
- if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
- pmp_flags |= PMP_W;
- if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
- pmp_flags |= PMP_X;
-
- pmp_addr = reg->base >> PMP_SHIFT;
- if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
- sbi_platform_pmp_set(sbi_platform_ptr(scratch),
- pmp_idx, reg->flags, pmp_flags,
- reg->base, reg->order);
- pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order);
- } else {
- sbi_printf("Can not configure pmp for domain %s because"
- " memory region address 0x%lx or size 0x%lx "
- "is not in range.\n", dom->name, reg->base,
- reg->order);
- }
- }
-
- sbi_hart_pmp_fence();
- return 0;
-}
-
-static void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
-{
- int i, pmp_count = sbi_hart_pmp_count(scratch);
-
- for (i = 0; i < pmp_count; i++) {
- /* Don't revoke firmware access permissions */
- if (sbi_hart_smepmp_is_fw_region(i))
- continue;
-
- sbi_platform_pmp_disable(sbi_platform_ptr(scratch), i);
- pmp_disable(i);
- }
-}
-
-static struct sbi_hart_protection pmp_protection = {
- .name = "pmp",
- .rating = 100,
- .configure = sbi_hart_oldpmp_configure,
- .unconfigure = sbi_hart_pmp_unconfigure,
-};
-
-static struct sbi_hart_protection epmp_protection = {
- .name = "epmp",
- .rating = 200,
- .configure = sbi_hart_smepmp_configure,
- .unconfigure = sbi_hart_pmp_unconfigure,
- .map_range = sbi_hart_smepmp_map_range,
- .unmap_range = sbi_hart_smepmp_unmap_range,
-};
-
-static int sbi_hart_pmp_init(struct sbi_scratch *scratch)
-{
- int rc;
-
- if (!sbi_hart_pmp_count(scratch))
- return 0;
-
- rc = sbi_hart_protection_register(&pmp_protection);
- if (rc)
- return rc;
-
- if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP)) {
- rc = sbi_hart_protection_register(&epmp_protection);
- if (rc)
- return rc;
- }
-
- return 0;
-}
-
int sbi_hart_priv_version(struct sbi_scratch *scratch)
{
struct sbi_hart_features *hfeatures =
diff --git a/lib/sbi/sbi_hart_pmp.c b/lib/sbi/sbi_hart_pmp.c
new file mode 100644
index 00000000..ab96e2fa
--- /dev/null
+++ b/lib/sbi/sbi_hart_pmp.c
@@ -0,0 +1,356 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#include <sbi/sbi_bitmap.h>
+#include <sbi/sbi_console.h>
+#include <sbi/sbi_domain.h>
+#include <sbi/sbi_hart.h>
+#include <sbi/sbi_hart_protection.h>
+#include <sbi/sbi_hfence.h>
+#include <sbi/sbi_math.h>
+#include <sbi/sbi_platform.h>
+#include <sbi/riscv_asm.h>
+
+/*
+ * Smepmp enforces access boundaries between M-mode and
+ * S/U-mode. When it is enabled, the PMPs are programmed
+ * such that M-mode doesn't have access to S/U-mode memory.
+ *
+ * To give M-mode R/W access to the shared memory between M and
+ * S/U-mode, first entry is reserved. It is disabled at boot.
+ * When shared memory access is required, the physical address
+ * should be programmed into the first PMP entry with R/W
+ * permissions to the M-mode. Once the work is done, it should be
+ * unmapped. sbi_hart_protection_map_range/sbi_hart_protection_unmap_range
+ * function pair should be used to map/unmap the shared memory.
+ */
+#define SBI_SMEPMP_RESV_ENTRY 0
+
+static DECLARE_BITMAP(fw_smepmp_ids, PMP_COUNT);
+static bool fw_smepmp_ids_inited;
+
+unsigned int sbi_hart_pmp_count(struct sbi_scratch *scratch)
+{
+ struct sbi_hart_features *hfeatures = sbi_hart_features_ptr(scratch);
+
+ return hfeatures->pmp_count;
+}
+
+unsigned int sbi_hart_pmp_log2gran(struct sbi_scratch *scratch)
+{
+ struct sbi_hart_features *hfeatures = sbi_hart_features_ptr(scratch);
+
+ return hfeatures->pmp_log2gran;
+}
+
+unsigned int sbi_hart_pmp_addrbits(struct sbi_scratch *scratch)
+{
+ struct sbi_hart_features *hfeatures = sbi_hart_features_ptr(scratch);
+
+ return hfeatures->pmp_addr_bits;
+}
+
+bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
+{
+ if (!fw_smepmp_ids_inited)
+ return false;
+
+ return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
+}
+
+static void sbi_hart_pmp_fence(void)
+{
+ /*
+ * As per section 3.7.2 of privileged specification v1.12,
+ * virtual address translations can be speculatively performed
+ * (even before actual access). These, along with PMP translations,
+ * can be cached. This can pose a problem with CPU hotplug
+ * and non-retentive suspend scenarios because PMP states are
+ * not preserved.
+ * It is advisable to flush the caching structures under such
+ * conditions.
+ */
+ if (misa_extension('S')) {
+ __asm__ __volatile__("sfence.vma");
+
+ /*
+ * If hypervisor mode is supported, flush caching
+ * structures in guest mode too.
+ */
+ if (misa_extension('H'))
+ __sbi_hfence_gvma_all();
+ }
+}
+
+static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
+ struct sbi_domain *dom,
+ struct sbi_domain_memregion *reg,
+ unsigned int pmp_idx,
+ unsigned int pmp_flags,
+ unsigned int pmp_log2gran,
+ unsigned long pmp_addr_max)
+{
+ unsigned long pmp_addr = reg->base >> PMP_SHIFT;
+
+ if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
+ sbi_platform_pmp_set(sbi_platform_ptr(scratch),
+ pmp_idx, reg->flags, pmp_flags,
+ reg->base, reg->order);
+ pmp_set(pmp_idx, pmp_flags, reg->base, reg->order);
+ } else {
+ sbi_printf("Can not configure pmp for domain %s because"
+ " memory region address 0x%lx or size 0x%lx "
+ "is not in range.\n", dom->name, reg->base,
+ reg->order);
+ }
+}
+
+static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
+{
+ if (pmp_count > pmp_idx)
+ return true;
+
+ sbi_printf("error: insufficient PMP entries\n");
+ return false;
+}
+
+static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
+{
+ struct sbi_domain_memregion *reg;
+ struct sbi_domain *dom = sbi_domain_thishart_ptr();
+ unsigned int pmp_log2gran, pmp_bits;
+ unsigned int pmp_idx, pmp_count;
+ unsigned long pmp_addr_max;
+ unsigned int pmp_flags;
+
+ pmp_count = sbi_hart_pmp_count(scratch);
+ pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
+ pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
+ pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
+
+ /*
+ * Set the RLB so that, we can write to PMP entries without
+ * enforcement even if some entries are locked.
+ */
+ csr_set(CSR_MSECCFG, MSECCFG_RLB);
+
+ /* Disable the reserved entry */
+ pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+
+ /* Program M-only regions when MML is not set. */
+ pmp_idx = 0;
+ sbi_domain_for_each_memregion(dom, reg) {
+ /* Skip reserved entry */
+ if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
+ pmp_idx++;
+ if (!is_valid_pmp_idx(pmp_count, pmp_idx))
+ return SBI_EFAIL;
+
+ /* Skip shared and SU-only regions */
+ if (!SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
+ pmp_idx++;
+ continue;
+ }
+
+ /*
+ * Track firmware PMP entries to preserve them during
+ * domain switches. Under Smepmp, M-mode requires
+ * explicit PMP entries to access firmware code/data.
+ * These entries must remain enabled across domain
+ * context switches to prevent M-mode access faults.
+ */
+ if (SBI_DOMAIN_MEMREGION_IS_FIRMWARE(reg->flags)) {
+ if (fw_smepmp_ids_inited) {
+ /* Check inconsistent firmware region */
+ if (!sbi_hart_smepmp_is_fw_region(pmp_idx))
+ return SBI_EINVAL;
+ } else {
+ bitmap_set(fw_smepmp_ids, pmp_idx, 1);
+ }
+ }
+
+ pmp_flags = sbi_domain_get_smepmp_flags(reg);
+
+ sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
+ pmp_log2gran, pmp_addr_max);
+ }
+
+ fw_smepmp_ids_inited = true;
+
+ /* Set the MML to enforce new encoding */
+ csr_set(CSR_MSECCFG, MSECCFG_MML);
+
+ /* Program shared and SU-only regions */
+ pmp_idx = 0;
+ sbi_domain_for_each_memregion(dom, reg) {
+ /* Skip reserved entry */
+ if (pmp_idx == SBI_SMEPMP_RESV_ENTRY)
+ pmp_idx++;
+ if (!is_valid_pmp_idx(pmp_count, pmp_idx))
+ return SBI_EFAIL;
+
+ /* Skip M-only regions */
+ if (SBI_DOMAIN_MEMREGION_M_ONLY_ACCESS(reg->flags)) {
+ pmp_idx++;
+ continue;
+ }
+
+ pmp_flags = sbi_domain_get_smepmp_flags(reg);
+
+ sbi_hart_smepmp_set(scratch, dom, reg, pmp_idx++, pmp_flags,
+ pmp_log2gran, pmp_addr_max);
+ }
+
+ /*
+ * All entries are programmed.
+ * Keep the RLB bit so that dynamic mappings can be done.
+ */
+
+ sbi_hart_pmp_fence();
+ return 0;
+}
+
+static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
+ unsigned long addr, unsigned long size)
+{
+ /* shared R/W access for M and S/U mode */
+ unsigned int pmp_flags = (PMP_W | PMP_X);
+ unsigned long order, base = 0;
+
+ if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
+ return SBI_ENOSPC;
+
+ for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
+ order <= __riscv_xlen; order++) {
+ if (order < __riscv_xlen) {
+ base = addr & ~((1UL << order) - 1UL);
+ if ((base <= addr) &&
+ (addr < (base + (1UL << order))) &&
+ (base <= (addr + size - 1UL)) &&
+ ((addr + size - 1UL) < (base + (1UL << order))))
+ break;
+ } else {
+ return SBI_EFAIL;
+ }
+ }
+
+ sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
+ SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
+ pmp_flags, base, order);
+ pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
+
+ return SBI_OK;
+}
+
+static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
+ unsigned long addr, unsigned long size)
+{
+ sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
+ return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
+}
+
+static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
+{
+ struct sbi_domain_memregion *reg;
+ struct sbi_domain *dom = sbi_domain_thishart_ptr();
+ unsigned long pmp_addr, pmp_addr_max;
+ unsigned int pmp_log2gran, pmp_bits;
+ unsigned int pmp_idx, pmp_count;
+ unsigned int pmp_flags;
+
+ pmp_count = sbi_hart_pmp_count(scratch);
+ pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
+ pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
+ pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
+
+ pmp_idx = 0;
+ sbi_domain_for_each_memregion(dom, reg) {
+ if (!is_valid_pmp_idx(pmp_count, pmp_idx))
+ return SBI_EFAIL;
+
+ pmp_flags = 0;
+
+ /*
+ * If permissions are to be enforced for all modes on
+ * this region, the lock bit should be set.
+ */
+ if (reg->flags & SBI_DOMAIN_MEMREGION_ENF_PERMISSIONS)
+ pmp_flags |= PMP_L;
+
+ if (reg->flags & SBI_DOMAIN_MEMREGION_SU_READABLE)
+ pmp_flags |= PMP_R;
+ if (reg->flags & SBI_DOMAIN_MEMREGION_SU_WRITABLE)
+ pmp_flags |= PMP_W;
+ if (reg->flags & SBI_DOMAIN_MEMREGION_SU_EXECUTABLE)
+ pmp_flags |= PMP_X;
+
+ pmp_addr = reg->base >> PMP_SHIFT;
+ if (pmp_log2gran <= reg->order && pmp_addr < pmp_addr_max) {
+ sbi_platform_pmp_set(sbi_platform_ptr(scratch),
+ pmp_idx, reg->flags, pmp_flags,
+ reg->base, reg->order);
+ pmp_set(pmp_idx++, pmp_flags, reg->base, reg->order);
+ } else {
+ sbi_printf("Can not configure pmp for domain %s because"
+ " memory region address 0x%lx or size 0x%lx "
+ "is not in range.\n", dom->name, reg->base,
+ reg->order);
+ }
+ }
+
+ sbi_hart_pmp_fence();
+ return 0;
+}
+
+static void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
+{
+ int i, pmp_count = sbi_hart_pmp_count(scratch);
+
+ for (i = 0; i < pmp_count; i++) {
+ /* Don't revoke firmware access permissions */
+ if (sbi_hart_smepmp_is_fw_region(i))
+ continue;
+
+ sbi_platform_pmp_disable(sbi_platform_ptr(scratch), i);
+ pmp_disable(i);
+ }
+}
+
+static struct sbi_hart_protection pmp_protection = {
+ .name = "pmp",
+ .rating = 100,
+ .configure = sbi_hart_oldpmp_configure,
+ .unconfigure = sbi_hart_pmp_unconfigure,
+};
+
+static struct sbi_hart_protection epmp_protection = {
+ .name = "epmp",
+ .rating = 200,
+ .configure = sbi_hart_smepmp_configure,
+ .unconfigure = sbi_hart_pmp_unconfigure,
+ .map_range = sbi_hart_smepmp_map_range,
+ .unmap_range = sbi_hart_smepmp_unmap_range,
+};
+
+int sbi_hart_pmp_init(struct sbi_scratch *scratch)
+{
+ int rc;
+
+ if (!sbi_hart_pmp_count(scratch))
+ return 0;
+
+ rc = sbi_hart_protection_register(&pmp_protection);
+ if (rc)
+ return rc;
+
+ if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP)) {
+ rc = sbi_hart_protection_register(&epmp_protection);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index e01d26bf..5259064b 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -18,6 +18,7 @@
#include <sbi/sbi_fwft.h>
#include <sbi/sbi_hart.h>
#include <sbi/sbi_hartmask.h>
+#include <sbi/sbi_hart_pmp.h>
#include <sbi/sbi_hart_protection.h>
#include <sbi/sbi_heap.h>
#include <sbi/sbi_hsm.h>
--
2.43.0
--
opensbi mailing list
opensbi@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/opensbi
* Re: [PATCH 2/5] lib: sbi: Introduce hart protection abstraction
2025-11-26 14:18 ` [PATCH 2/5] lib: sbi: Introduce hart protection abstraction Anup Patel
@ 2025-12-07 11:10 ` Samuel Holland
2025-12-07 15:58 ` Anup Patel
0 siblings, 1 reply; 12+ messages in thread
From: Samuel Holland @ 2025-12-07 11:10 UTC (permalink / raw)
To: Anup Patel, Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi
Hi Anup,
On 2025-11-26 8:18 AM, Anup Patel wrote:
> Currently, PMP and ePMP are the only hart protection mechanisms
> available in OpenSBI but new protection mechanisms (such as Smmpt)
> will be added in the near future.
>
> To allow multiple hart protection mechanisms, introduce hart
> protection abstraction and related APIs.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
> include/sbi/sbi_hart_protection.h | 88 ++++++++++++++++++++++++
> lib/sbi/objects.mk | 1 +
> lib/sbi/sbi_hart_protection.c | 110 ++++++++++++++++++++++++++++++
> lib/sbi/sbi_init.c | 5 ++
> 4 files changed, 204 insertions(+)
> create mode 100644 include/sbi/sbi_hart_protection.h
> create mode 100644 lib/sbi/sbi_hart_protection.c
>
> diff --git a/include/sbi/sbi_hart_protection.h b/include/sbi/sbi_hart_protection.h
> new file mode 100644
> index 00000000..c9e452db
> --- /dev/null
> +++ b/include/sbi/sbi_hart_protection.h
> @@ -0,0 +1,88 @@
> +/*
> + * SPDX-License-Identifier: BSD-2-Clause
> + *
> + * Copyright (c) 2025 Ventana Micro Systems Inc.
> + */
> +
> +#ifndef __SBI_HART_PROTECTION_H__
> +#define __SBI_HART_PROTECTION_H__
> +
> +#include <sbi/sbi_types.h>
> +#include <sbi/sbi_list.h>
> +
> +struct sbi_scratch;
> +
> +/** Representation of hart protection mechanism */
> +struct sbi_hart_protection {
> + /** List head */
> + struct sbi_dlist head;
> +
> + /** Name of the hart protection mechanism */
> + char name[32];
> +
> + /** Ratings of the hart protection mechanism (higher is better) */
> + unsigned long rating;
> +
> + /** Configure protection for current HART (Mandatory) */
> + int (*configure)(struct sbi_scratch *scratch);
> +
> + /** Unconfigure protection for current HART (Mandatory) */
> + void (*unconfigure)(struct sbi_scratch *scratch);
> +
> + /** Create temporary mapping to access address range on current HART (Optional) */
> + int (*map_range)(struct sbi_scratch *scratch,
> + unsigned long base, unsigned long size);
> +
> + /** Destroy temporary mapping on current HART (Optional) */
> + int (*unmap_range)(struct sbi_scratch *scratch,
> + unsigned long base, unsigned long size);
> +};
> +
> +/**
> + * Get the best hart protection mechanism
> + *
> + * @return pointer to best hart protection mechanism
> + */
> +struct sbi_hart_protection *sbi_hart_protection_best(void);
> +
> +/**
> + * Register a hart protection mechanism
> + *
> + * @return 0 on success and negative error code on failure
> + */
> +int sbi_hart_protection_register(struct sbi_hart_protection *hprot);
> +
> +/**
> + * Unregister a hart protection mechanism
> + *
> + * @return 0 on success and negative error code on failure
> + */
> +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot);
> +
> +/**
> + * Configure protection for current HART
> + *
> + * @return 0 on success and negative error code on failure
> + */
> +int sbi_hart_protection_configure(struct sbi_scratch *scratch);
> +
> +/**
> + * Unconfigure protection for current HART
> + */
> +void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch);
> +
> +/**
> + * Create temporary mapping to access address range on current HART
> + *
> + * @return 0 on success and negative error code on failure
> + */
> +int sbi_hart_protection_map_range(unsigned long base, unsigned long size);
> +
> +/**
> + * Destroy temporary mapping to access address range on current HART
> + *
> + * @return 0 on success and negative error code on failure
> + */
> +int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size);
These functions should document (or take a parameter for) the permissions of the
temporary mapping. "saddr" sort of documents that for the existing functions.
> +
> +#endif /* __SBI_HART_PROTECTION_H__ */
> diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
> index 8abe1e8e..51588cd1 100644
> --- a/lib/sbi/objects.mk
> +++ b/lib/sbi/objects.mk
> @@ -75,6 +75,7 @@ libsbi-objs-y += sbi_emulate_csr.o
> libsbi-objs-y += sbi_fifo.o
> libsbi-objs-y += sbi_fwft.o
> libsbi-objs-y += sbi_hart.o
> +libsbi-objs-y += sbi_hart_protection.o
> libsbi-objs-y += sbi_heap.o
> libsbi-objs-y += sbi_math.o
> libsbi-objs-y += sbi_hfence.o
> diff --git a/lib/sbi/sbi_hart_protection.c b/lib/sbi/sbi_hart_protection.c
> new file mode 100644
> index 00000000..831072ad
> --- /dev/null
> +++ b/lib/sbi/sbi_hart_protection.c
> @@ -0,0 +1,110 @@
> +/*
> + * SPDX-License-Identifier: BSD-2-Clause
> + *
> + * Copyright (c) 2025 Ventana Micro Systems Inc.
> + */
> +
> +#include <sbi/sbi_error.h>
> +#include <sbi/sbi_hart_protection.h>
> +#include <sbi/sbi_scratch.h>
> +
> +static SBI_LIST_HEAD(hart_protection_list);
> +
> +struct sbi_hart_protection *sbi_hart_protection_best(void)
> +{
> + if (sbi_list_empty(&hart_protection_list))
> + return NULL;
> +
> + return sbi_list_first_entry(&hart_protection_list, struct sbi_hart_protection, head);
> +}
> +
> +int sbi_hart_protection_register(struct sbi_hart_protection *hprot)
> +{
> + struct sbi_hart_protection *pos = NULL;
> + bool found_pos = false;
> +
> + if (!hprot)
> + return SBI_EINVAL;
> +
> + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
> + if (hprot->rating > pos->rating) {
> + found_pos = true;
> + break;
> + }
> + }
> +
> + if (found_pos)
> + sbi_list_add_tail(&hprot->head, &pos->head);
> + else
> + sbi_list_add_tail(&hprot->head, &hart_protection_list);
> +
> + return 0;
> +}
> +
> +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot)
> +{
> + struct sbi_hart_protection *pos = NULL;
> + bool found_pos = false;
> +
> + if (!hprot)
> + return SBI_EINVAL;
> +
> + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
> + if (hprot == pos) {
> + found_pos = true;
> + break;
> + }
> + }
> +
> + if (!found_pos)
> + return SBI_ENOENT;
> +
> + sbi_list_del(&hprot->head);
> + return 0;
I don't think this function is needed. ISA extensions aren't going to disappear
at runtime. Also, what would you expect to happen if the active hart protection
mechanism is unregistered?
> +}
> +
> +int sbi_hart_protection_configure(struct sbi_scratch *scratch)
> +{
> + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
> +
> + if (!hprot)
> + return SBI_EINVAL;
> + if (!hprot->configure)
> + return SBI_ENOSYS;
> +
> + return hprot->configure(scratch);
> +}
If you only ever use the "best" backend, there is no need to maintain a list,
just a single pointer.
But then that seems too simplistic. We'll probably want to use (e)PMP and Smmpt
together, to avoid splitting huge MPT pages to make M-mode carveouts. So we
would want to loop through all registered protection schemes.
I'm guessing you plan to use an entirely separate set of APIs for system-level
protection schemes like WorldGuard and IOPMP?
Regards,
Samuel
> +
> +void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch)
> +{
> + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
> +
> + if (!hprot || !hprot->unconfigure)
> + return;
> +
> + hprot->unconfigure(scratch);
> +}
> +
> +int sbi_hart_protection_map_range(unsigned long base, unsigned long size)
> +{
> + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
> +
> + if (!hprot)
> + return SBI_EINVAL;
> + if (!hprot->map_range)
> + return 0;
> +
> + return hprot->map_range(sbi_scratch_thishart_ptr(), base, size);
> +}
> +
> +int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size)
> +{
> + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
> +
> + if (!hprot)
> + return SBI_EINVAL;
> + if (!hprot->unmap_range)
> + return 0;
> +
> + return hprot->unmap_range(sbi_scratch_thishart_ptr(), base, size);
> +}
> diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
> index 663b486b..b161d1c1 100644
> --- a/lib/sbi/sbi_init.c
> +++ b/lib/sbi/sbi_init.c
> @@ -18,6 +18,7 @@
> #include <sbi/sbi_fwft.h>
> #include <sbi/sbi_hart.h>
> #include <sbi/sbi_hartmask.h>
> +#include <sbi/sbi_hart_protection.h>
> #include <sbi/sbi_heap.h>
> #include <sbi/sbi_hsm.h>
> #include <sbi/sbi_ipi.h>
> @@ -74,6 +75,7 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
> const struct sbi_hsm_device *hdev;
> const struct sbi_ipi_device *idev;
> const struct sbi_timer_device *tdev;
> + const struct sbi_hart_protection *hprot;
> const struct sbi_console_device *cdev;
> const struct sbi_system_reset_device *srdev;
> const struct sbi_system_suspend_device *susp_dev;
> @@ -90,6 +92,9 @@ static void sbi_boot_print_general(struct sbi_scratch *scratch)
> sbi_printf("Platform Features : %s\n", str);
> sbi_printf("Platform HART Count : %u\n",
> sbi_platform_hart_count(plat));
> + hprot = sbi_hart_protection_best();
> + sbi_printf("Platform HART Protection : %s\n",
> + (hprot) ? hprot->name : "---");
> idev = sbi_ipi_get_device();
> sbi_printf("Platform IPI Device : %s\n",
> (idev) ? idev->name : "---");
* Re: [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP
2025-11-26 14:18 ` [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP Anup Patel
@ 2025-12-07 11:12 ` Samuel Holland
2025-12-09 12:55 ` Anup Patel
0 siblings, 1 reply; 12+ messages in thread
From: Samuel Holland @ 2025-12-07 11:12 UTC (permalink / raw)
To: Anup Patel, Atish Patra; +Cc: Andrew Jones, Anup Patel, opensbi
Hi Anup,
On 2025-11-26 8:18 AM, Anup Patel wrote:
> Implement PMP and ePMP based hart protection abstraction so
> that usage of sbi_hart_pmp_xyz() functions can be replaced
> with sbi_hart_protection_xyz() functions.
>
> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> ---
> lib/sbi/sbi_hart.c | 209 ++++++++++++++++++++++++++++-----------------
> 1 file changed, 133 insertions(+), 76 deletions(-)
>
> diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> index c5a8d248..8869839d 100644
> --- a/lib/sbi/sbi_hart.c
> +++ b/lib/sbi/sbi_hart.c
> @@ -17,6 +17,7 @@
> #include <sbi/sbi_csr_detect.h>
> #include <sbi/sbi_error.h>
> #include <sbi/sbi_hart.h>
> +#include <sbi/sbi_hart_protection.h>
> #include <sbi/sbi_math.h>
> #include <sbi/sbi_platform.h>
> #include <sbi/sbi_pmu.h>
> @@ -311,6 +312,30 @@ bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
> return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
> }
>
> +static void sbi_hart_pmp_fence(void)
> +{
> + /*
> + * As per section 3.7.2 of privileged specification v1.12,
> + * virtual address translations can be speculatively performed
> + * (even before actual access). These, along with PMP translations,
> + * can be cached. This can pose a problem with CPU hotplug
> + * and non-retentive suspend scenario because PMP states are
> + * not preserved.
> + * It is advisable to flush the caching structures under such
> + * conditions.
> + */
> + if (misa_extension('S')) {
> + __asm__ __volatile__("sfence.vma");
> +
> + /*
> + * If hypervisor mode is supported, flush caching
> + * structures in guest mode too.
> + */
> + if (misa_extension('H'))
> + __sbi_hfence_gvma_all();
> + }
> +}
> +
> static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
> struct sbi_domain *dom,
> struct sbi_domain_memregion *reg,
> @@ -343,14 +368,19 @@ static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
> return false;
> }
>
> -static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
> - unsigned int pmp_count,
> - unsigned int pmp_log2gran,
> - unsigned long pmp_addr_max)
> +static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
> {
> struct sbi_domain_memregion *reg;
> struct sbi_domain *dom = sbi_domain_thishart_ptr();
> - unsigned int pmp_idx, pmp_flags;
> + unsigned int pmp_log2gran, pmp_bits;
> + unsigned int pmp_idx, pmp_count;
> + unsigned long pmp_addr_max;
> + unsigned int pmp_flags;
> +
> + pmp_count = sbi_hart_pmp_count(scratch);
> + pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
> + pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
> + pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
>
> /*
> * Set the RLB so that, we can write to PMP entries without
> @@ -430,20 +460,64 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
> * Keep the RLB bit so that dynamic mappings can be done.
> */
>
> + sbi_hart_pmp_fence();
> return 0;
> }
>
> -static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
> - unsigned int pmp_count,
> - unsigned int pmp_log2gran,
> - unsigned long pmp_addr_max)
> +static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
> + unsigned long addr, unsigned long size)
> +{
> + /* shared R/W access for M and S/U mode */
> + unsigned int pmp_flags = (PMP_W | PMP_X);
> + unsigned long order, base = 0;
> +
> + if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
> + return SBI_ENOSPC;
> +
> + for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
> + order <= __riscv_xlen; order++) {
> + if (order < __riscv_xlen) {
> + base = addr & ~((1UL << order) - 1UL);
> + if ((base <= addr) &&
> + (addr < (base + (1UL << order))) &&
> + (base <= (addr + size - 1UL)) &&
> + ((addr + size - 1UL) < (base + (1UL << order))))
> + break;
> + } else {
> + return SBI_EFAIL;
> + }
> + }
> +
> + sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
> + SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
> + pmp_flags, base, order);
> + pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> +
> + return SBI_OK;
> +}
> +
> +static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
> + unsigned long addr, unsigned long size)
> +{
> + sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> + return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> +}
> +
> +static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
> {
> struct sbi_domain_memregion *reg;
> struct sbi_domain *dom = sbi_domain_thishart_ptr();
> - unsigned int pmp_idx = 0;
> + unsigned long pmp_addr, pmp_addr_max;
> + unsigned int pmp_log2gran, pmp_bits;
> + unsigned int pmp_idx, pmp_count;
> unsigned int pmp_flags;
> - unsigned long pmp_addr;
>
> + pmp_count = sbi_hart_pmp_count(scratch);
> + pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
> + pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
> + pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
> +
> + pmp_idx = 0;
> sbi_domain_for_each_memregion(dom, reg) {
> if (!is_valid_pmp_idx(pmp_count, pmp_idx))
> return SBI_EFAIL;
> @@ -478,43 +552,19 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
> }
> }
>
> + sbi_hart_pmp_fence();
> return 0;
> }
>
> int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
> {
> - /* shared R/W access for M and S/U mode */
> - unsigned int pmp_flags = (PMP_W | PMP_X);
> - unsigned long order, base = 0;
> struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
>
> /* If Smepmp is not supported no special mapping is required */
> if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> return SBI_OK;
>
> - if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
> - return SBI_ENOSPC;
> -
> - for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
> - order <= __riscv_xlen; order++) {
> - if (order < __riscv_xlen) {
> - base = addr & ~((1UL << order) - 1UL);
> - if ((base <= addr) &&
> - (addr < (base + (1UL << order))) &&
> - (base <= (addr + size - 1UL)) &&
> - ((addr + size - 1UL) < (base + (1UL << order))))
> - break;
> - } else {
> - return SBI_EFAIL;
> - }
> - }
> -
> - sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
> - SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
> - pmp_flags, base, order);
> - pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> -
> - return SBI_OK;
> + return sbi_hart_smepmp_map_range(scratch, addr, size);
> }
>
> int sbi_hart_unmap_saddr(void)
> @@ -524,53 +574,18 @@ int sbi_hart_unmap_saddr(void)
> if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> return SBI_OK;
>
> - sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> - return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> + return sbi_hart_smepmp_unmap_range(scratch, 0, 0);
> }
>
> int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> {
> - int rc;
> - unsigned int pmp_bits, pmp_log2gran;
> - unsigned int pmp_count = sbi_hart_pmp_count(scratch);
> - unsigned long pmp_addr_max;
> -
> - if (!pmp_count)
> + if (!sbi_hart_pmp_count(scratch))
> return 0;
>
> - pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
> - pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
> - pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
> -
> if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> - rc = sbi_hart_smepmp_configure(scratch, pmp_count,
> - pmp_log2gran, pmp_addr_max);
> + return sbi_hart_smepmp_configure(scratch);
> else
> - rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
> - pmp_log2gran, pmp_addr_max);
> -
> - /*
> - * As per section 3.7.2 of privileged specification v1.12,
> - * virtual address translations can be speculatively performed
> - * (even before actual access). These, along with PMP translations,
> - * can be cached. This can pose a problem with CPU hotplug
> - * and non-retentive suspend scenario because PMP states are
> - * not preserved.
> - * It is advisable to flush the caching structures under such
> - * conditions.
> - */
> - if (misa_extension('S')) {
> - __asm__ __volatile__("sfence.vma");
> -
> - /*
> - * If hypervisor mode is supported, flush caching
> - * structures in guest mode too.
> - */
> - if (misa_extension('H'))
> - __sbi_hfence_gvma_all();
> - }
> -
> - return rc;
> + return sbi_hart_oldpmp_configure(scratch);
> }
>
> void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
> @@ -587,6 +602,42 @@ void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
> }
> }
>
> +static struct sbi_hart_protection pmp_protection = {
> + .name = "pmp",
> + .rating = 100,
> + .configure = sbi_hart_oldpmp_configure,
> + .unconfigure = sbi_hart_pmp_unconfigure,
> +};
> +
> +static struct sbi_hart_protection epmp_protection = {
> + .name = "epmp",
> + .rating = 200,
> + .configure = sbi_hart_smepmp_configure,
> + .unconfigure = sbi_hart_pmp_unconfigure,
> + .map_range = sbi_hart_smepmp_map_range,
> + .unmap_range = sbi_hart_smepmp_unmap_range,
> +};
> +
> +static int sbi_hart_pmp_init(struct sbi_scratch *scratch)
> +{
> + int rc;
> +
> + if (!sbi_hart_pmp_count(scratch))
> + return 0;
> +
> + rc = sbi_hart_protection_register(&pmp_protection);
> + if (rc)
> + return rc;
> +
> + if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP)) {
> + rc = sbi_hart_protection_register(&epmp_protection);
> + if (rc)
> + return rc;
> + }
The only reason I can see for registering both PMP and ePMP is to fall back to
PMP if ePMP configuration fails (as ePMP requires more slots). But you don't do
that. We shouldn't register PMP if we know it will not be used.
Regards,
Samuel
> +
> + return 0;
> +}
> +
> int sbi_hart_priv_version(struct sbi_scratch *scratch)
> {
> struct sbi_hart_features *hfeatures =
> @@ -1051,6 +1102,12 @@ int sbi_hart_init(struct sbi_scratch *scratch, bool cold_boot)
> if (rc)
> return rc;
>
> + if (cold_boot) {
> + rc = sbi_hart_pmp_init(scratch);
> + if (rc)
> + return rc;
> + }
> +
> rc = delegate_traps(scratch);
> if (rc)
> return rc;
* Re: [PATCH 2/5] lib: sbi: Introduce hart protection abstraction
2025-12-07 11:10 ` Samuel Holland
@ 2025-12-07 15:58 ` Anup Patel
2025-12-09 7:04 ` Samuel Holland
0 siblings, 1 reply; 12+ messages in thread
From: Anup Patel @ 2025-12-07 15:58 UTC (permalink / raw)
To: Samuel Holland; +Cc: Anup Patel, Atish Patra, Andrew Jones, opensbi
On Sun, Dec 7, 2025 at 4:40 PM Samuel Holland <samuel.holland@sifive.com> wrote:
>
> Hi Anup,
>
> On 2025-11-26 8:18 AM, Anup Patel wrote:
> > Currently, PMP and ePMP are the only hart protection mechanisms
> > available in OpenSBI but new protection mechanisms (such as Smmpt)
> > will be added in the near future.
> >
> > To allow multiple hart protection mechanisms, introduce hart
> > protection abstraction and related APIs.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> > include/sbi/sbi_hart_protection.h | 88 ++++++++++++++++++++++++
> > lib/sbi/objects.mk | 1 +
> > lib/sbi/sbi_hart_protection.c | 110 ++++++++++++++++++++++++++++++
> > lib/sbi/sbi_init.c | 5 ++
> > 4 files changed, 204 insertions(+)
> > create mode 100644 include/sbi/sbi_hart_protection.h
> > create mode 100644 lib/sbi/sbi_hart_protection.c
> >
> > diff --git a/include/sbi/sbi_hart_protection.h b/include/sbi/sbi_hart_protection.h
> > new file mode 100644
> > index 00000000..c9e452db
> > --- /dev/null
> > +++ b/include/sbi/sbi_hart_protection.h
> > @@ -0,0 +1,88 @@
> > +/*
> > + * SPDX-License-Identifier: BSD-2-Clause
> > + *
> > + * Copyright (c) 2025 Ventana Micro Systems Inc.
> > + */
> > +
> > +#ifndef __SBI_HART_PROTECTION_H__
> > +#define __SBI_HART_PROTECTION_H__
> > +
> > +#include <sbi/sbi_types.h>
> > +#include <sbi/sbi_list.h>
> > +
> > +struct sbi_scratch;
> > +
> > +/** Representation of hart protection mechanism */
> > +struct sbi_hart_protection {
> > + /** List head */
> > + struct sbi_dlist head;
> > +
> > + /** Name of the hart protection mechanism */
> > + char name[32];
> > +
> > + /** Ratings of the hart protection mechanism (higher is better) */
> > + unsigned long rating;
> > +
> > + /** Configure protection for current HART (Mandatory) */
> > + int (*configure)(struct sbi_scratch *scratch);
> > +
> > + /** Unconfigure protection for current HART (Mandatory) */
> > + void (*unconfigure)(struct sbi_scratch *scratch);
> > +
> > + /** Create temporary mapping to access address range on current HART (Optional) */
> > + int (*map_range)(struct sbi_scratch *scratch,
> > + unsigned long base, unsigned long size);
> > +
> > + /** Destroy temporary mapping on current HART (Optional) */
> > + int (*unmap_range)(struct sbi_scratch *scratch,
> > + unsigned long base, unsigned long size);
> > +};
> > +
> > +/**
> > + * Get the best hart protection mechanism
> > + *
> > + * @return pointer to best hart protection mechanism
> > + */
> > +struct sbi_hart_protection *sbi_hart_protection_best(void);
> > +
> > +/**
> > + * Register a hart protection mechanism
> > + *
> > + * @return 0 on success and negative error code on failure
> > + */
> > +int sbi_hart_protection_register(struct sbi_hart_protection *hprot);
> > +
> > +/**
> > + * Unregister a hart protection mechanism
> > + *
> > + * @return 0 on success and negative error code on failure
> > + */
> > +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot);
> > +
> > +/**
> > + * Configure protection for current HART
> > + *
> > + * @return 0 on success and negative error code on failure
> > + */
> > +int sbi_hart_protection_configure(struct sbi_scratch *scratch);
> > +
> > +/**
> > + * Unconfigure protection for current HART
> > + */
> > +void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch);
> > +
> > +/**
> > + * Create temporary mapping to access address range on current HART
> > + *
> > + * @return 0 on success and negative error code on failure
> > + */
> > +int sbi_hart_protection_map_range(unsigned long base, unsigned long size);
> > +
> > +/**
> > + * Destroy temporary mapping to access address range on current HART
> > + *
> > + * @return 0 on success and negative error code on failure
> > + */
> > +int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size);
>
> These functions should document (or take a parameter for) the permissions of the
> temporary mapping. "saddr" sort of documents that for the existing functions.
Sure, I will add parameter documentation.
>
> > +
> > +#endif /* __SBI_HART_PROTECTION_H__ */
> > diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
> > index 8abe1e8e..51588cd1 100644
> > --- a/lib/sbi/objects.mk
> > +++ b/lib/sbi/objects.mk
> > @@ -75,6 +75,7 @@ libsbi-objs-y += sbi_emulate_csr.o
> > libsbi-objs-y += sbi_fifo.o
> > libsbi-objs-y += sbi_fwft.o
> > libsbi-objs-y += sbi_hart.o
> > +libsbi-objs-y += sbi_hart_protection.o
> > libsbi-objs-y += sbi_heap.o
> > libsbi-objs-y += sbi_math.o
> > libsbi-objs-y += sbi_hfence.o
> > diff --git a/lib/sbi/sbi_hart_protection.c b/lib/sbi/sbi_hart_protection.c
> > new file mode 100644
> > index 00000000..831072ad
> > --- /dev/null
> > +++ b/lib/sbi/sbi_hart_protection.c
> > @@ -0,0 +1,110 @@
> > +/*
> > + * SPDX-License-Identifier: BSD-2-Clause
> > + *
> > + * Copyright (c) 2025 Ventana Micro Systems Inc.
> > + */
> > +
> > +#include <sbi/sbi_error.h>
> > +#include <sbi/sbi_hart_protection.h>
> > +#include <sbi/sbi_scratch.h>
> > +
> > +static SBI_LIST_HEAD(hart_protection_list);
> > +
> > +struct sbi_hart_protection *sbi_hart_protection_best(void)
> > +{
> > + if (sbi_list_empty(&hart_protection_list))
> > + return NULL;
> > +
> > + return sbi_list_first_entry(&hart_protection_list, struct sbi_hart_protection, head);
> > +}
> > +
> > +int sbi_hart_protection_register(struct sbi_hart_protection *hprot)
> > +{
> > + struct sbi_hart_protection *pos = NULL;
> > + bool found_pos = false;
> > +
> > + if (!hprot)
> > + return SBI_EINVAL;
> > +
> > + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
> > + if (hprot->rating > pos->rating) {
> > + found_pos = true;
> > + break;
> > + }
> > + }
> > +
> > + if (found_pos)
> > + sbi_list_add_tail(&hprot->head, &pos->head);
> > + else
> > + sbi_list_add_tail(&hprot->head, &hart_protection_list);
> > +
> > + return 0;
> > +}
> > +
> > +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot)
> > +{
> > + struct sbi_hart_protection *pos = NULL;
> > + bool found_pos = false;
> > +
> > + if (!hprot)
> > + return SBI_EINVAL;
> > +
> > + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
> > + if (hprot == pos) {
> > + found_pos = true;
> > + break;
> > + }
> > + }
> > +
> > + if (!found_pos)
> > + return SBI_ENOENT;
> > +
> > + sbi_list_del(&hprot->head);
> > + return 0;
>
> I don't think this function is needed. ISA extensions aren't going to disappear
> at runtime. Also, what would you expect to happen if the active hart protection
> mechanism is unregistered?
I added this so that if a failure occurs later in a function that has
registered a hart protection mechanism, that function can unregister the
mechanism as part of its error cleanup.
>
> > +}
> > +
> > +int sbi_hart_protection_configure(struct sbi_scratch *scratch)
> > +{
> > + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
> > +
> > + if (!hprot)
> > + return SBI_EINVAL;
> > + if (!hprot->configure)
> > + return SBI_ENOSYS;
> > +
> > + return hprot->configure(scratch);
> > +}
>
> If you only ever use the "best" backend, there is no need to maintain a list,
> just a single pointer.
I agree that since we only use the "best" backend, a list seems
redundant. I have kept the list so that all available hart protection
mechanisms are known, and the overhead is very small since the list is
kept sorted by rating.
>
> But then that seems too simplistic. We'll probably want to use (e)PMP and Smmpt
> together, to avoid splitting huge MPT pages to make M-mode carveouts. So we
> would want to loop through all registered protection schemes.
Thinking about this more, it is better to register a composite hart
protection mechanism for (e)PMP+Smmpt rather than have the hart protection
abstraction call multiple configure/unconfigure and map/unmap_range
callbacks, because a composite mechanism lets the Smmpt code decide, per
domain memregion, whether to use (e)PMP or Smmpt. The (e)PMP+Smmpt and
platform-specific hart protection mechanisms can always share the PMP
programming functions with the (e)PMP hart protection.
>
> I'm guessing you plan to use an entirely separate set of APIs for system-level
> protection schemes like WorldGuard and IOPMP?
>
Yes, that's correct. We will have a separate system protection abstraction
for WorldGuard and IOPMP. The OpenSBI domain framework will use the system
protection abstraction differently from how it uses the hart protection
abstraction.
Regards,
Anup
* Re: [PATCH 2/5] lib: sbi: Introduce hart protection abstraction
2025-12-07 15:58 ` Anup Patel
@ 2025-12-09 7:04 ` Samuel Holland
2025-12-09 8:30 ` Anup Patel
0 siblings, 1 reply; 12+ messages in thread
From: Samuel Holland @ 2025-12-09 7:04 UTC (permalink / raw)
To: Anup Patel; +Cc: Anup Patel, Atish Patra, Andrew Jones, opensbi
Hi Anup,
On 2025-12-08 12:58 AM, Anup Patel wrote:
> On Sun, Dec 7, 2025 at 4:40 PM Samuel Holland <samuel.holland@sifive.com> wrote:
>> On 2025-11-26 8:18 AM, Anup Patel wrote:
>>> Currently, PMP and ePMP are the only hart protection mechanisms
>>> available in OpenSBI but new protection mechanisms (such as Smmpt)
>>> will be added in the near future.
>>>
>>> To allow multiple hart protection mechanisms, introduce hart
>>> protection abstraction and related APIs.
>>>
>>> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
>>> ---
>>> include/sbi/sbi_hart_protection.h | 88 ++++++++++++++++++++++++
>>> lib/sbi/objects.mk | 1 +
>>> lib/sbi/sbi_hart_protection.c | 110 ++++++++++++++++++++++++++++++
>>> lib/sbi/sbi_init.c | 5 ++
>>> 4 files changed, 204 insertions(+)
>>> create mode 100644 include/sbi/sbi_hart_protection.h
>>> create mode 100644 lib/sbi/sbi_hart_protection.c
>>>
>>> diff --git a/include/sbi/sbi_hart_protection.h b/include/sbi/sbi_hart_protection.h
>>> new file mode 100644
>>> index 00000000..c9e452db
>>> --- /dev/null
>>> +++ b/include/sbi/sbi_hart_protection.h
>>> @@ -0,0 +1,88 @@
>>> +/*
>>> + * SPDX-License-Identifier: BSD-2-Clause
>>> + *
>>> + * Copyright (c) 2025 Ventana Micro Systems Inc.
>>> + */
>>> +
>>> +#ifndef __SBI_HART_PROTECTION_H__
>>> +#define __SBI_HART_PROTECTION_H__
>>> +
>>> +#include <sbi/sbi_types.h>
>>> +#include <sbi/sbi_list.h>
>>> +
>>> +struct sbi_scratch;
>>> +
>>> +/** Representation of hart protection mechanism */
>>> +struct sbi_hart_protection {
>>> + /** List head */
>>> + struct sbi_dlist head;
>>> +
>>> + /** Name of the hart protection mechanism */
>>> + char name[32];
>>> +
>>> + /** Ratings of the hart protection mechanism (higher is better) */
>>> + unsigned long rating;
>>> +
>>> + /** Configure protection for current HART (Mandatory) */
>>> + int (*configure)(struct sbi_scratch *scratch);
>>> +
>>> + /** Unconfigure protection for current HART (Mandatory) */
>>> + void (*unconfigure)(struct sbi_scratch *scratch);
>>> +
>>> + /** Create temporary mapping to access address range on current HART (Optional) */
>>> + int (*map_range)(struct sbi_scratch *scratch,
>>> + unsigned long base, unsigned long size);
>>> +
>>> + /** Destroy temporary mapping on current HART (Optional) */
>>> + int (*unmap_range)(struct sbi_scratch *scratch,
>>> + unsigned long base, unsigned long size);
>>> +};
>>> +
>>> +/**
>>> + * Get the best hart protection mechanism
>>> + *
>>> + * @return pointer to best hart protection mechanism
>>> + */
>>> +struct sbi_hart_protection *sbi_hart_protection_best(void);
>>> +
>>> +/**
>>> + * Register a hart protection mechanism
>>> + *
>>> + * @return 0 on success and negative error code on failure
>>> + */
>>> +int sbi_hart_protection_register(struct sbi_hart_protection *hprot);
>>> +
>>> +/**
>>> + * Unregister a hart protection mechanism
>>> + *
>>> + * @return 0 on success and negative error code on failure
>>> + */
>>> +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot);
>>> +
>>> +/**
>>> + * Configure protection for current HART
>>> + *
>>> + * @return 0 on success and negative error code on failure
>>> + */
>>> +int sbi_hart_protection_configure(struct sbi_scratch *scratch);
>>> +
>>> +/**
>>> + * Unconfigure protection for current HART
>>> + */
>>> +void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch);
>>> +
>>> +/**
>>> + * Create temporary mapping to access address range on current HART
>>> + *
>>> + * @return 0 on success and negative error code on failure
>>> + */
>>> +int sbi_hart_protection_map_range(unsigned long base, unsigned long size);
>>> +
>>> +/**
>>> + * Destroy temporary mapping to access address range on current HART
>>> + *
>>> + * @return 0 on success and negative error code on failure
>>> + */
>>> +int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size);
>>
>> These functions should document (or take a parameter for) the permissions of the
>> temporary mapping. "saddr" sort of documents that for the existing functions.
>
> Sure, I will add parameter documentation.
>
>>
>>> +
>>> +#endif /* __SBI_HART_PROTECTION_H__ */
>>> diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
>>> index 8abe1e8e..51588cd1 100644
>>> --- a/lib/sbi/objects.mk
>>> +++ b/lib/sbi/objects.mk
>>> @@ -75,6 +75,7 @@ libsbi-objs-y += sbi_emulate_csr.o
>>> libsbi-objs-y += sbi_fifo.o
>>> libsbi-objs-y += sbi_fwft.o
>>> libsbi-objs-y += sbi_hart.o
>>> +libsbi-objs-y += sbi_hart_protection.o
>>> libsbi-objs-y += sbi_heap.o
>>> libsbi-objs-y += sbi_math.o
>>> libsbi-objs-y += sbi_hfence.o
>>> diff --git a/lib/sbi/sbi_hart_protection.c b/lib/sbi/sbi_hart_protection.c
>>> new file mode 100644
>>> index 00000000..831072ad
>>> --- /dev/null
>>> +++ b/lib/sbi/sbi_hart_protection.c
>>> @@ -0,0 +1,110 @@
>>> +/*
>>> + * SPDX-License-Identifier: BSD-2-Clause
>>> + *
>>> + * Copyright (c) 2025 Ventana Micro Systems Inc.
>>> + */
>>> +
>>> +#include <sbi/sbi_error.h>
>>> +#include <sbi/sbi_hart_protection.h>
>>> +#include <sbi/sbi_scratch.h>
>>> +
>>> +static SBI_LIST_HEAD(hart_protection_list);
>>> +
>>> +struct sbi_hart_protection *sbi_hart_protection_best(void)
>>> +{
>>> + if (sbi_list_empty(&hart_protection_list))
>>> + return NULL;
>>> +
>>> + return sbi_list_first_entry(&hart_protection_list, struct sbi_hart_protection, head);
>>> +}
>>> +
>>> +int sbi_hart_protection_register(struct sbi_hart_protection *hprot)
>>> +{
>>> + struct sbi_hart_protection *pos = NULL;
>>> + bool found_pos = false;
>>> +
>>> + if (!hprot)
>>> + return SBI_EINVAL;
>>> +
>>> + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
>>> + if (hprot->rating > pos->rating) {
>>> + found_pos = true;
>>> + break;
>>> + }
>>> + }
>>> +
>>> + if (found_pos)
>>> + sbi_list_add_tail(&hprot->head, &pos->head);
>>> + else
>>> + sbi_list_add_tail(&hprot->head, &hart_protection_list);
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot)
>>> +{
>>> + struct sbi_hart_protection *pos = NULL;
>>> + bool found_pos = false;
>>> +
>>> + if (!hprot)
>>> + return SBI_EINVAL;
>>> +
>>> + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
>>> + if (hprot == pos) {
>>> + found_pos = true;
>>> + break;
>>> + }
>>> + }
>>> +
>>> + if (!found_pos)
>>> + return SBI_ENOENT;
>>> +
>>> + sbi_list_del(&hprot->head);
>>> + return 0;
>>
>> I don't think this function is needed. ISA extensions aren't going to disappear
>> at runtime. Also, what would you expect to happen if the active hart protection
>> mechanism is unregistered?
>
> I have added this so that if there is any failure in the function registering
> hart protection mechanism then the function can unregister the hart
> protection.
OK, that makes sense. Then if we are keeping this function, I would suggest
following the pattern of making the cleanup function idempotent/infallible since
the return value would be ignored anyway in favor of the original error.
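A minimal sketch of that pattern in isolation: a registration list kept sorted by descending rating, with a void unregister() that is idempotent and infallible, so error paths can call it unconditionally. The `hprot_*` names and the hand-rolled singly linked list are illustrative stand-ins for the actual `sbi_hart_protection` API and `sbi_dlist`, not OpenSBI code:

```c
#include <stddef.h>

struct hprot {
	struct hprot *next;
	const char *name;
	unsigned long rating;
};

struct hprot *hprot_list;

int hprot_register(struct hprot *h)
{
	struct hprot **pp;

	if (!h)
		return -1;
	/* Insert sorted by descending rating; on a tie the earlier
	 * registration stays first, matching the strict '>' comparison
	 * in sbi_hart_protection_register(). */
	for (pp = &hprot_list; *pp && (*pp)->rating >= h->rating; )
		pp = &(*pp)->next;
	h->next = *pp;
	*pp = h;
	return 0;
}

/* Idempotent and infallible cleanup: entries that were never registered
 * (or were already removed) are silently ignored, and NULL is tolerated,
 * so a failing init path can call this blindly. */
void hprot_unregister(struct hprot *h)
{
	struct hprot **pp;

	if (!h)
		return;
	for (pp = &hprot_list; *pp; pp = &(*pp)->next) {
		if (*pp == h) {
			*pp = h->next;
			h->next = NULL;
			return;
		}
	}
}
```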
>>> +}
>>> +
>>> +int sbi_hart_protection_configure(struct sbi_scratch *scratch)
>>> +{
>>> + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
>>> +
>>> + if (!hprot)
>>> + return SBI_EINVAL;
>>> + if (!hprot->configure)
>>> + return SBI_ENOSYS;
>>> +
>>> + return hprot->configure(scratch);
>>> +}
>>
>> If you only ever use the "best" backend, there is no need to maintain a list,
>> just a single pointer.
>
> I agree that since we are only using the "best" backend, using list seems
> redundant. I have kept list to know all the available hart protection
> mechanism plus there is very little overhead since the list is sorted
> based on rating.
>
>>
>> But then that seems too simplistic. We'll probably want to use (e)PMP and Smmpt
>> together, to avoid splitting huge MPT pages to make M-mode carveouts. So we
>> would want to loop through all registered protection schemes.
>
> Thinking about this more. It is better to register a composite hart protection
> for (e)PMP+Smmpt instead of hart protection abstraction calling multiple
> configure/unconfigure and map/unmap_range because this allows Smmpt
> code to decide when to use (e)PMP for a domain memregion and when
> to use Smmpt for a domain memregion. The (e)PMP+Smmpt and platform
> specific hart protection can always share the PMP programming functions
> with (e)PMP hart protection.
In that case, the composite backend would have to know which set of oldpmp vs
smepmp functions to call, which would look suspiciously similar to the code in
sbi_hart.c removed by this patch series. Maybe the composite backend could just
assume ePMP, since hardware with Smmpt is likely to support that?
>> I'm guessing you plan to use an entirely separate set of APIs for system-level
>> protection schemes like WorldGuard and IOPMP?
>>
>
> Yes, that's correct, we will have a separate system protection abstraction for
> WorldGuard and IOPMP. The OpenSBI domain framework will use system
> protection abstraction differently compared to how it uses hart protection
> abstraction.
OK, makes sense.
Regards,
Samuel
* Re: [PATCH 2/5] lib: sbi: Introduce hart protection abstraction
2025-12-09 7:04 ` Samuel Holland
@ 2025-12-09 8:30 ` Anup Patel
0 siblings, 0 replies; 12+ messages in thread
From: Anup Patel @ 2025-12-09 8:30 UTC (permalink / raw)
To: Samuel Holland; +Cc: Anup Patel, Atish Patra, Andrew Jones, opensbi
On Tue, Dec 9, 2025 at 12:34 PM Samuel Holland
<samuel.holland@sifive.com> wrote:
>
> Hi Anup,
>
> On 2025-12-08 12:58 AM, Anup Patel wrote:
> > On Sun, Dec 7, 2025 at 4:40 PM Samuel Holland <samuel.holland@sifive.com> wrote:
> >> On 2025-11-26 8:18 AM, Anup Patel wrote:
> >>> Currently, PMP and ePMP are the only hart protection mechanisms
> >>> available in OpenSBI but new protection mechanisms (such as Smmpt)
> >>> will be added in the near future.
> >>>
> >>> To allow multiple hart protection mechanisms, introduce hart
> >>> protection abstraction and related APIs.
> >>>
> >>> Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> >>> ---
> >>> include/sbi/sbi_hart_protection.h | 88 ++++++++++++++++++++++++
> >>> lib/sbi/objects.mk | 1 +
> >>> lib/sbi/sbi_hart_protection.c | 110 ++++++++++++++++++++++++++++++
> >>> lib/sbi/sbi_init.c | 5 ++
> >>> 4 files changed, 204 insertions(+)
> >>> create mode 100644 include/sbi/sbi_hart_protection.h
> >>> create mode 100644 lib/sbi/sbi_hart_protection.c
> >>>
> >>> diff --git a/include/sbi/sbi_hart_protection.h b/include/sbi/sbi_hart_protection.h
> >>> new file mode 100644
> >>> index 00000000..c9e452db
> >>> --- /dev/null
> >>> +++ b/include/sbi/sbi_hart_protection.h
> >>> @@ -0,0 +1,88 @@
> >>> +/*
> >>> + * SPDX-License-Identifier: BSD-2-Clause
> >>> + *
> >>> + * Copyright (c) 2025 Ventana Micro Systems Inc.
> >>> + */
> >>> +
> >>> +#ifndef __SBI_HART_PROTECTION_H__
> >>> +#define __SBI_HART_PROTECTION_H__
> >>> +
> >>> +#include <sbi/sbi_types.h>
> >>> +#include <sbi/sbi_list.h>
> >>> +
> >>> +struct sbi_scratch;
> >>> +
> >>> +/** Representation of hart protection mechanism */
> >>> +struct sbi_hart_protection {
> >>> + /** List head */
> >>> + struct sbi_dlist head;
> >>> +
> >>> + /** Name of the hart protection mechanism */
> >>> + char name[32];
> >>> +
> >>> + /** Ratings of the hart protection mechanism (higher is better) */
> >>> + unsigned long rating;
> >>> +
> >>> + /** Configure protection for current HART (Mandatory) */
> >>> + int (*configure)(struct sbi_scratch *scratch);
> >>> +
> >>> + /** Unconfigure protection for current HART (Mandatory) */
> >>> + void (*unconfigure)(struct sbi_scratch *scratch);
> >>> +
> >>> + /** Create temporary mapping to access address range on current HART (Optional) */
> >>> + int (*map_range)(struct sbi_scratch *scratch,
> >>> + unsigned long base, unsigned long size);
> >>> +
> >>> + /** Destroy temporary mapping on current HART (Optional) */
> >>> + int (*unmap_range)(struct sbi_scratch *scratch,
> >>> + unsigned long base, unsigned long size);
> >>> +};
> >>> +
> >>> +/**
> >>> + * Get the best hart protection mechanism
> >>> + *
> >>> + * @return pointer to best hart protection mechanism
> >>> + */
> >>> +struct sbi_hart_protection *sbi_hart_protection_best(void);
> >>> +
> >>> +/**
> >>> + * Register a hart protection mechanism
> >>> + *
> >>> + * @return 0 on success and negative error code on failure
> >>> + */
> >>> +int sbi_hart_protection_register(struct sbi_hart_protection *hprot);
> >>> +
> >>> +/**
> >>> + * Unregister a hart protection mechanism
> >>> + *
> >>> + * @return 0 on success and negative error code on failure
> >>> + */
> >>> +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot);
> >>> +
> >>> +/**
> >>> + * Configure protection for current HART
> >>> + *
> >>> + * @return 0 on success and negative error code on failure
> >>> + */
> >>> +int sbi_hart_protection_configure(struct sbi_scratch *scratch);
> >>> +
> >>> +/**
> >>> + * Unconfigure protection for current HART
> >>> + */
> >>> +void sbi_hart_protection_unconfigure(struct sbi_scratch *scratch);
> >>> +
> >>> +/**
> >>> + * Create temporary mapping to access address range on current HART
> >>> + *
> >>> + * @return 0 on success and negative error code on failure
> >>> + */
> >>> +int sbi_hart_protection_map_range(unsigned long base, unsigned long size);
> >>> +
> >>> +/**
> >>> + * Destroy temporary mapping to access address range on current HART
> >>> + *
> >>> + * @return 0 on success and negative error code on failure
> >>> + */
> >>> +int sbi_hart_protection_unmap_range(unsigned long base, unsigned long size);
> >>
> >> These functions should document (or take a parameter for) the permissions of the
> >> temporary mapping. "saddr" sort of documents that for the existing functions.
> >
> > Sure, I will add parameter documentation.
> >
> >>
> >>> +
> >>> +#endif /* __SBI_HART_PROTECTION_H__ */
> >>> diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
> >>> index 8abe1e8e..51588cd1 100644
> >>> --- a/lib/sbi/objects.mk
> >>> +++ b/lib/sbi/objects.mk
> >>> @@ -75,6 +75,7 @@ libsbi-objs-y += sbi_emulate_csr.o
> >>> libsbi-objs-y += sbi_fifo.o
> >>> libsbi-objs-y += sbi_fwft.o
> >>> libsbi-objs-y += sbi_hart.o
> >>> +libsbi-objs-y += sbi_hart_protection.o
> >>> libsbi-objs-y += sbi_heap.o
> >>> libsbi-objs-y += sbi_math.o
> >>> libsbi-objs-y += sbi_hfence.o
> >>> diff --git a/lib/sbi/sbi_hart_protection.c b/lib/sbi/sbi_hart_protection.c
> >>> new file mode 100644
> >>> index 00000000..831072ad
> >>> --- /dev/null
> >>> +++ b/lib/sbi/sbi_hart_protection.c
> >>> @@ -0,0 +1,110 @@
> >>> +/*
> >>> + * SPDX-License-Identifier: BSD-2-Clause
> >>> + *
> >>> + * Copyright (c) 2025 Ventana Micro Systems Inc.
> >>> + */
> >>> +
> >>> +#include <sbi/sbi_error.h>
> >>> +#include <sbi/sbi_hart_protection.h>
> >>> +#include <sbi/sbi_scratch.h>
> >>> +
> >>> +static SBI_LIST_HEAD(hart_protection_list);
> >>> +
> >>> +struct sbi_hart_protection *sbi_hart_protection_best(void)
> >>> +{
> >>> + if (sbi_list_empty(&hart_protection_list))
> >>> + return NULL;
> >>> +
> >>> + return sbi_list_first_entry(&hart_protection_list, struct sbi_hart_protection, head);
> >>> +}
> >>> +
> >>> +int sbi_hart_protection_register(struct sbi_hart_protection *hprot)
> >>> +{
> >>> + struct sbi_hart_protection *pos = NULL;
> >>> + bool found_pos = false;
> >>> +
> >>> + if (!hprot)
> >>> + return SBI_EINVAL;
> >>> +
> >>> + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
> >>> + if (hprot->rating > pos->rating) {
> >>> + found_pos = true;
> >>> + break;
> >>> + }
> >>> + }
> >>> +
> >>> + if (found_pos)
> >>> + sbi_list_add_tail(&hprot->head, &pos->head);
> >>> + else
> >>> + sbi_list_add_tail(&hprot->head, &hart_protection_list);
> >>> +
> >>> + return 0;
> >>> +}
> >>> +
> >>> +int sbi_hart_protection_unregister(struct sbi_hart_protection *hprot)
> >>> +{
> >>> + struct sbi_hart_protection *pos = NULL;
> >>> + bool found_pos = false;
> >>> +
> >>> + if (!hprot)
> >>> + return SBI_EINVAL;
> >>> +
> >>> + sbi_list_for_each_entry(pos, &hart_protection_list, head) {
> >>> + if (hprot == pos) {
> >>> + found_pos = true;
> >>> + break;
> >>> + }
> >>> + }
> >>> +
> >>> + if (!found_pos)
> >>> + return SBI_ENOENT;
> >>> +
> >>> + sbi_list_del(&hprot->head);
> >>> + return 0;
> >>
> >> I don't think this function is needed. ISA extensions aren't going to disappear
> >> at runtime. Also, what would you expect to happen if the active hart protection
> >> mechanism is unregistered?
> >
> > I have added this so that if there is any failure in the function registering
> > hart protection mechanism then the function can unregister the hart
> > protection.
>
> OK, that makes sense. Then if we are keeping this function, I would suggest
> following the pattern of making the cleanup function idempotent/infallible since
> the return value would be ignored anyway in favor of the original error.
Okay, I will drop the return value from the unregister() function.
>
> >>> +}
> >>> +
> >>> +int sbi_hart_protection_configure(struct sbi_scratch *scratch)
> >>> +{
> >>> + struct sbi_hart_protection *hprot = sbi_hart_protection_best();
> >>> +
> >>> + if (!hprot)
> >>> + return SBI_EINVAL;
> >>> + if (!hprot->configure)
> >>> + return SBI_ENOSYS;
> >>> +
> >>> + return hprot->configure(scratch);
> >>> +}
> >>
> >> If you only ever use the "best" backend, there is no need to maintain a list,
> >> just a single pointer.
> >
> > I agree that since we are only using the "best" backend, using list seems
> > redundant. I have kept list to know all the available hart protection
> > mechanism plus there is very little overhead since the list is sorted
> > based on rating.
> >
> >>
> >> But then that seems too simplistic. We'll probably want to use (e)PMP and Smmpt
> >> together, to avoid splitting huge MPT pages to make M-mode carveouts. So we
> >> would want to loop through all registered protection schemes.
> >
> > Thinking about this more. It is better to register a composite hart protection
> > for (e)PMP+Smmpt instead of hart protection abstraction calling multiple
> > configure/unconfigure and map/unmap_range because this allows Smmpt
> > code to decide when to use (e)PMP for a domain memregion and when
> > to use Smmpt for a domain memregion. The (e)PMP+Smmpt and platform
> > specific hart protection can always share the PMP programming functions
> > with (e)PMP hart protection.
>
> In that case, the composite backend would have to know which set of oldpmp vs
> smepmp functions to call, which would look suspiciously similar to the code in
> sbi_hart.c removed by this patch series. Maybe the composite backend could just
> assume ePMP, since hardware with Smmpt is likely to support that?
The main use of Smepmp in the composite Smepmp+Smmpt hart protection
will be to enforce access checks for M-mode-only regions and M-mode+S-mode
shared regions. The oldpmp is not effective in enforcing M-mode access checks,
so I don't see a composite oldpmp+Smmpt hart protection being useful.
It certainly makes sense to register Smepmp+Smmpt hart protection only
when both Smepmp and Smmpt are present.
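The registration policy described above can be sketched as a pure selection function. `pick_protection()` and the `BACKEND_*` names are illustrative only, not part of the patch series; the real code would instead register the corresponding `struct sbi_hart_protection` instance (with the composite backend carrying the highest rating):

```c
#include <stdbool.h>

enum backend {
	BACKEND_NONE,         /* no PMP hardware at all */
	BACKEND_OLDPMP,       /* plain PMP fallback */
	BACKEND_SMEPMP,       /* ePMP only */
	BACKEND_SMEPMP_SMMPT, /* composite, registered only when both exist */
};

enum backend pick_protection(bool has_pmp, bool has_smepmp, bool has_smmpt)
{
	if (!has_pmp)
		return BACKEND_NONE;
	if (has_smepmp && has_smmpt)
		return BACKEND_SMEPMP_SMMPT;
	if (has_smepmp)
		return BACKEND_SMEPMP;
	/* oldpmp cannot enforce M-mode access checks, so it is never
	 * combined with Smmpt; it remains the plain fallback. */
	return BACKEND_OLDPMP;
}
```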
>
> >> I'm guessing you plan to use an entirely separate set of APIs for system-level
> >> protection schemes like WorldGuard and IOPMP?
> >>
> >
> > Yes, that's correct, we will have a separate system protection abstraction for
> > WorldGuard and IOPMP. The OpenSBI domain framework will use system
> > protection abstraction differently compared to how it uses hart protection
> > abstraction.
>
> OK, makes sense.
>
> Regards,
> Samuel
>
Regards,
Anup
* Re: [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP
2025-12-07 11:12 ` Samuel Holland
@ 2025-12-09 12:55 ` Anup Patel
0 siblings, 0 replies; 12+ messages in thread
From: Anup Patel @ 2025-12-09 12:55 UTC (permalink / raw)
To: Samuel Holland; +Cc: Atish Patra, Andrew Jones, Anup Patel, opensbi
On Sun, Dec 7, 2025 at 4:42 PM Samuel Holland <samuel.holland@sifive.com> wrote:
>
> Hi Anup,
>
> On 2025-11-26 8:18 AM, Anup Patel wrote:
> > Implement PMP and ePMP based hart protection abstraction so
> > that usage of sbi_hart_pmp_xyz() functions can be replaced
> > with sbi_hart_protection_xyz() functions.
> >
> > Signed-off-by: Anup Patel <apatel@ventanamicro.com>
> > ---
> > lib/sbi/sbi_hart.c | 209 ++++++++++++++++++++++++++++-----------------
> > 1 file changed, 133 insertions(+), 76 deletions(-)
> >
> > diff --git a/lib/sbi/sbi_hart.c b/lib/sbi/sbi_hart.c
> > index c5a8d248..8869839d 100644
> > --- a/lib/sbi/sbi_hart.c
> > +++ b/lib/sbi/sbi_hart.c
> > @@ -17,6 +17,7 @@
> > #include <sbi/sbi_csr_detect.h>
> > #include <sbi/sbi_error.h>
> > #include <sbi/sbi_hart.h>
> > +#include <sbi/sbi_hart_protection.h>
> > #include <sbi/sbi_math.h>
> > #include <sbi/sbi_platform.h>
> > #include <sbi/sbi_pmu.h>
> > @@ -311,6 +312,30 @@ bool sbi_hart_smepmp_is_fw_region(unsigned int pmp_idx)
> > return bitmap_test(fw_smepmp_ids, pmp_idx) ? true : false;
> > }
> >
> > +static void sbi_hart_pmp_fence(void)
> > +{
> > + /*
> > + * As per section 3.7.2 of privileged specification v1.12,
> > + * virtual address translations can be speculatively performed
> > + * (even before actual access). These, along with PMP traslations,
> > + * can be cached. This can pose a problem with CPU hotplug
> > + * and non-retentive suspend scenario because PMP states are
> > + * not preserved.
> > + * It is advisable to flush the caching structures under such
> > + * conditions.
> > + */
> > + if (misa_extension('S')) {
> > + __asm__ __volatile__("sfence.vma");
> > +
> > + /*
> > + * If hypervisor mode is supported, flush caching
> > + * structures in guest mode too.
> > + */
> > + if (misa_extension('H'))
> > + __sbi_hfence_gvma_all();
> > + }
> > +}
> > +
> > static void sbi_hart_smepmp_set(struct sbi_scratch *scratch,
> > struct sbi_domain *dom,
> > struct sbi_domain_memregion *reg,
> > @@ -343,14 +368,19 @@ static bool is_valid_pmp_idx(unsigned int pmp_count, unsigned int pmp_idx)
> > return false;
> > }
> >
> > -static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
> > - unsigned int pmp_count,
> > - unsigned int pmp_log2gran,
> > - unsigned long pmp_addr_max)
> > +static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch)
> > {
> > struct sbi_domain_memregion *reg;
> > struct sbi_domain *dom = sbi_domain_thishart_ptr();
> > - unsigned int pmp_idx, pmp_flags;
> > + unsigned int pmp_log2gran, pmp_bits;
> > + unsigned int pmp_idx, pmp_count;
> > + unsigned long pmp_addr_max;
> > + unsigned int pmp_flags;
> > +
> > + pmp_count = sbi_hart_pmp_count(scratch);
> > + pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
> > + pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
> > + pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
> >
> > /*
> > * Set the RLB so that, we can write to PMP entries without
> > @@ -430,20 +460,64 @@ static int sbi_hart_smepmp_configure(struct sbi_scratch *scratch,
> > * Keep the RLB bit so that dynamic mappings can be done.
> > */
> >
> > + sbi_hart_pmp_fence();
> > return 0;
> > }
> >
> > -static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
> > - unsigned int pmp_count,
> > - unsigned int pmp_log2gran,
> > - unsigned long pmp_addr_max)
> > +static int sbi_hart_smepmp_map_range(struct sbi_scratch *scratch,
> > + unsigned long addr, unsigned long size)
> > +{
> > + /* shared R/W access for M and S/U mode */
> > + unsigned int pmp_flags = (PMP_W | PMP_X);
> > + unsigned long order, base = 0;
> > +
> > + if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
> > + return SBI_ENOSPC;
> > +
> > + for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
> > + order <= __riscv_xlen; order++) {
> > + if (order < __riscv_xlen) {
> > + base = addr & ~((1UL << order) - 1UL);
> > + if ((base <= addr) &&
> > + (addr < (base + (1UL << order))) &&
> > + (base <= (addr + size - 1UL)) &&
> > + ((addr + size - 1UL) < (base + (1UL << order))))
> > + break;
> > + } else {
> > + return SBI_EFAIL;
> > + }
> > + }
> > +
> > + sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
> > + SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
> > + pmp_flags, base, order);
> > + pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> > +
> > + return SBI_OK;
> > +}
> > +
> > +static int sbi_hart_smepmp_unmap_range(struct sbi_scratch *scratch,
> > + unsigned long addr, unsigned long size)
> > +{
> > + sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> > + return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> > +}
> > +
> > +static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch)
> > {
> > struct sbi_domain_memregion *reg;
> > struct sbi_domain *dom = sbi_domain_thishart_ptr();
> > - unsigned int pmp_idx = 0;
> > + unsigned long pmp_addr, pmp_addr_max;
> > + unsigned int pmp_log2gran, pmp_bits;
> > + unsigned int pmp_idx, pmp_count;
> > unsigned int pmp_flags;
> > - unsigned long pmp_addr;
> >
> > + pmp_count = sbi_hart_pmp_count(scratch);
> > + pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
> > + pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
> > + pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
> > +
> > + pmp_idx = 0;
> > sbi_domain_for_each_memregion(dom, reg) {
> > if (!is_valid_pmp_idx(pmp_count, pmp_idx))
> > return SBI_EFAIL;
> > @@ -478,43 +552,19 @@ static int sbi_hart_oldpmp_configure(struct sbi_scratch *scratch,
> > }
> > }
> >
> > + sbi_hart_pmp_fence();
> > return 0;
> > }
> >
> > int sbi_hart_map_saddr(unsigned long addr, unsigned long size)
> > {
> > - /* shared R/W access for M and S/U mode */
> > - unsigned int pmp_flags = (PMP_W | PMP_X);
> > - unsigned long order, base = 0;
> > struct sbi_scratch *scratch = sbi_scratch_thishart_ptr();
> >
> > /* If Smepmp is not supported no special mapping is required */
> > if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> > return SBI_OK;
> >
> > - if (is_pmp_entry_mapped(SBI_SMEPMP_RESV_ENTRY))
> > - return SBI_ENOSPC;
> > -
> > - for (order = MAX(sbi_hart_pmp_log2gran(scratch), log2roundup(size));
> > - order <= __riscv_xlen; order++) {
> > - if (order < __riscv_xlen) {
> > - base = addr & ~((1UL << order) - 1UL);
> > - if ((base <= addr) &&
> > - (addr < (base + (1UL << order))) &&
> > - (base <= (addr + size - 1UL)) &&
> > - ((addr + size - 1UL) < (base + (1UL << order))))
> > - break;
> > - } else {
> > - return SBI_EFAIL;
> > - }
> > - }
> > -
> > - sbi_platform_pmp_set(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY,
> > - SBI_DOMAIN_MEMREGION_SHARED_SURW_MRW,
> > - pmp_flags, base, order);
> > - pmp_set(SBI_SMEPMP_RESV_ENTRY, pmp_flags, base, order);
> > -
> > - return SBI_OK;
> > + return sbi_hart_smepmp_map_range(scratch, addr, size);
> > }
> >
> > int sbi_hart_unmap_saddr(void)
> > @@ -524,53 +574,18 @@ int sbi_hart_unmap_saddr(void)
> > if (!sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> > return SBI_OK;
> >
> > - sbi_platform_pmp_disable(sbi_platform_ptr(scratch), SBI_SMEPMP_RESV_ENTRY);
> > - return pmp_disable(SBI_SMEPMP_RESV_ENTRY);
> > + return sbi_hart_smepmp_unmap_range(scratch, 0, 0);
> > }
> >
> > int sbi_hart_pmp_configure(struct sbi_scratch *scratch)
> > {
> > - int rc;
> > - unsigned int pmp_bits, pmp_log2gran;
> > - unsigned int pmp_count = sbi_hart_pmp_count(scratch);
> > - unsigned long pmp_addr_max;
> > -
> > - if (!pmp_count)
> > + if (!sbi_hart_pmp_count(scratch))
> > return 0;
> >
> > - pmp_log2gran = sbi_hart_pmp_log2gran(scratch);
> > - pmp_bits = sbi_hart_pmp_addrbits(scratch) - 1;
> > - pmp_addr_max = (1UL << pmp_bits) | ((1UL << pmp_bits) - 1);
> > -
> > if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP))
> > - rc = sbi_hart_smepmp_configure(scratch, pmp_count,
> > - pmp_log2gran, pmp_addr_max);
> > + return sbi_hart_smepmp_configure(scratch);
> > else
> > - rc = sbi_hart_oldpmp_configure(scratch, pmp_count,
> > - pmp_log2gran, pmp_addr_max);
> > -
> > - /*
> > - * As per section 3.7.2 of privileged specification v1.12,
> > - * virtual address translations can be speculatively performed
> > - * (even before actual access). These, along with PMP traslations,
> > - * can be cached. This can pose a problem with CPU hotplug
> > - * and non-retentive suspend scenario because PMP states are
> > - * not preserved.
> > - * It is advisable to flush the caching structures under such
> > - * conditions.
> > - */
> > - if (misa_extension('S')) {
> > - __asm__ __volatile__("sfence.vma");
> > -
> > - /*
> > - * If hypervisor mode is supported, flush caching
> > - * structures in guest mode too.
> > - */
> > - if (misa_extension('H'))
> > - __sbi_hfence_gvma_all();
> > - }
> > -
> > - return rc;
> > + return sbi_hart_oldpmp_configure(scratch);
> > }
> >
> > void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
> > @@ -587,6 +602,42 @@ void sbi_hart_pmp_unconfigure(struct sbi_scratch *scratch)
> > }
> > }
> >
> > +static struct sbi_hart_protection pmp_protection = {
> > + .name = "pmp",
> > + .rating = 100,
> > + .configure = sbi_hart_oldpmp_configure,
> > + .unconfigure = sbi_hart_pmp_unconfigure,
> > +};
> > +
> > +static struct sbi_hart_protection epmp_protection = {
> > + .name = "epmp",
> > + .rating = 200,
> > + .configure = sbi_hart_smepmp_configure,
> > + .unconfigure = sbi_hart_pmp_unconfigure,
> > + .map_range = sbi_hart_smepmp_map_range,
> > + .unmap_range = sbi_hart_smepmp_unmap_range,
> > +};
> > +
> > +static int sbi_hart_pmp_init(struct sbi_scratch *scratch)
> > +{
> > + int rc;
> > +
> > + if (!sbi_hart_pmp_count(scratch))
> > + return 0;
> > +
> > + rc = sbi_hart_protection_register(&pmp_protection);
> > + if (rc)
> > + return rc;
> > +
> > + if (sbi_hart_has_extension(scratch, SBI_HART_EXT_SMEPMP)) {
> > + rc = sbi_hart_protection_register(&epmp_protection);
> > + if (rc)
> > + return rc;
> > + }
>
> The only reason I can see for registering both PMP and ePMP is to fall back to
> PMP if ePMP configuration fails (as ePMP requires more slots). But you don't do
> that. We shouldn't register PMP if we know it will not be used.
>
Your suggestion makes sense. I will update.
Regards,
Anup
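For reference, the window search performed by the sbi_hart_smepmp_map_range() loop quoted above can be written as a standalone helper: it finds the smallest naturally aligned power-of-two (NAPOT) window containing [addr, addr + size), honouring the PMP granularity as a minimum order. `napot_order()` and `log2_roundup()` are illustrative names for this sketch, not OpenSBI functions:

```c
#include <limits.h>

/* Smallest order with (1UL << order) >= x; assumes x >= 1. */
int log2_roundup(unsigned long x)
{
	int order = 0;

	while ((1UL << order) < x)
		order++;
	return order;
}

/* Find the smallest power-of-two-aligned, power-of-two-sized window
 * containing [addr, addr + size). Returns the order (log2 of the
 * window size) and writes the window base, or -1 if nothing below
 * 2^xlen fits. */
int napot_order(unsigned long addr, unsigned long size,
		int min_order, unsigned long *out_base)
{
	int xlen = sizeof(unsigned long) * CHAR_BIT;
	int order = log2_roundup(size);

	if (order < min_order)
		order = min_order;
	for (; order < xlen; order++) {
		unsigned long base = addr & ~((1UL << order) - 1UL);

		/* base <= addr holds by construction; the window is
		 * usable once the last byte also falls inside it. */
		if (addr + size - 1UL < base + (1UL << order)) {
			*out_base = base;
			return order;
		}
	}
	return -1;
}
```

Note that, like the original loop, a poorly aligned range can force a much larger window (e.g. a 4 KiB range straddling a 4 KiB boundary needs a 16 KiB window).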
end of thread, other threads:[~2025-12-09 12:55 UTC | newest]
Thread overview: 12+ messages
2025-11-26 14:18 [PATCH 0/5] OpenSBI hart protection abstraction Anup Patel
2025-11-26 14:18 ` [PATCH 1/5] lib: sbi: Introduce sbi_hart_pmp_unconfigure() function Anup Patel
2025-11-26 14:18 ` [PATCH 2/5] lib: sbi: Introduce hart protection abstraction Anup Patel
2025-12-07 11:10 ` Samuel Holland
2025-12-07 15:58 ` Anup Patel
2025-12-09 7:04 ` Samuel Holland
2025-12-09 8:30 ` Anup Patel
2025-11-26 14:18 ` [PATCH 3/5] lib: sbi: Implement hart protection for PMP and ePMP Anup Patel
2025-12-07 11:12 ` Samuel Holland
2025-12-09 12:55 ` Anup Patel
2025-11-26 14:18 ` [PATCH 4/5] lib: sbi: Replace sbi_hart_pmp_xyz() and sbi_hart_map/unmap_addr() Anup Patel
2025-11-26 14:18 ` [PATCH 5/5] lib: sbi: Factor-out PMP programming into separate sources Anup Patel