* [PATCH v1 0/2] Make PMP granularity configurable
@ 2025-10-14 8:23 Jay Chang
2025-10-14 8:23 ` [PATCH v1 1/2] target/riscv: " Jay Chang
2025-10-14 8:23 ` [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints Jay Chang
0 siblings, 2 replies; 7+ messages in thread
From: Jay Chang @ 2025-10-14 8:23 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, Jay Chang
This patch series enhances QEMU's RISC-V PMP support to conform with
the RISC-V Privileged Specification regarding PMP granularity and WARL
constraints.
Previously, QEMU always used a fixed minimum PMP granularity of 4 bytes.
This series introduces a configurable "pmp-granularity" parameter, allowing
platforms to specify larger granularity values. In addition, the handling of
pmpcfg and pmpaddr CSRs has been updated to follow WARL constraints. For
example, when NA4 is not a valid mode because the granularity is larger than
4 bytes, the write selecting it is silently ignored. TOR and NAPOT address
ranges are also aligned according to the configured granularity.
A new CPU parameter `pmp-granularity` is now available on the QEMU command
line. For example:
-cpu rv64,g=true,c=true,pmp=true,pmp-granularity=1024
If not provided, the default remains 4 bytes.
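As an aside, this masked read-back is what lets software probe the effective
granularity at run time, following the procedure described in the Privileged
Spec. A minimal bare-metal sketch (assuming M-mode with pmpaddr0 free to
clobber; this helper is illustrative and not part of the series):

    static inline unsigned long detect_pmp_granularity(void)
    {
        unsigned long val;

        /* Clear pmpcfg0 (so pmp0cfg.A = OFF), write all ones to pmpaddr0,
         * then read it back. */
        __asm__ volatile ("csrw pmpcfg0, zero");
        __asm__ volatile ("csrw pmpaddr0, %0" : : "r" (-1UL));
        __asm__ volatile ("csrr %0, pmpaddr0" : "=r" (val));

        /* If G is the index of the least-significant set bit of the value
         * read back, the granularity is 2^(G+2) bytes; e.g. with
         * pmp-granularity=1024 the lowest set bit is bit 8 and this
         * returns 1024. */
        return 1UL << (__builtin_ctzl(val) + 2);
    }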
---
Patch summary:
1. target/riscv: Make PMP granularity configurable
- Introduce CPU property `pmp-granularity` for platforms to configure
PMP granularity.
- Default remains 4 bytes if unspecified.
2. target/riscv: Make PMP CSRs conform to WARL constraints
- Update pmpcfg and pmpaddr handling to follow WARL semantics.
- Align start and end addresses of TOR regions to PMP granularity.
- Ensure software can read back correct values per the spec.
PATCH v1 update:
Add the UL suffix to shift constants to prevent bit-width errors.
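For context, a small sketch of the bit-width hazard those suffixes avoid
(assuming a 64-bit pmpaddr value and a shift count above 31; not taken from
the series):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int g = 34;                     /* shift count wider than a 32-bit int */

        /* A plain (1 << g) shifts a 32-bit int and is undefined behaviour for
         * g >= 31; the UL/ULL suffix performs the shift in a 64-bit type. */
        uint64_t mask = ~((1ULL << g) - 1ULL);

        printf("mask = 0x%llx\n", (unsigned long long)mask); /* 0xfffffffc00000000 */
        return 0;
    }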
Jay Chang (2):
target/riscv: Make PMP granularity configurable
target/riscv: Make PMP CSRs conform to WARL constraints
target/riscv/cpu.c | 39 ++++++++++++++++++++++++++
target/riscv/cpu.h | 1 +
target/riscv/cpu_cfg_fields.h.inc | 1 +
target/riscv/pmp.c | 46 +++++++++++++++++++++++++++++++
4 files changed, 87 insertions(+)
--
2.48.1
* [PATCH v1 1/2] target/riscv: Make PMP granularity configurable
2025-10-14 8:23 [PATCH v1 0/2] Make PMP granularity configurable Jay Chang
@ 2025-10-14 8:23 ` Jay Chang
2025-10-17 0:56 ` Alistair Francis
2025-10-14 8:23 ` [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints Jay Chang
1 sibling, 1 reply; 7+ messages in thread
From: Jay Chang @ 2025-10-14 8:23 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, Jay Chang, Frank Chang,
Jim Shu
Previously, QEMU always used a fixed minimum PMP granularity of 4 bytes.
This patch adds a pmp-granularity property to allow platforms to configure
the value.
A new CPU parameter pmp-granularity has been introduced to the QEMU
command line. For example:
-cpu rv64,g=true,c=true,pmp=true,pmp-granularity=1024
If no specific value is provided, the default value is 4 bytes.
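For illustration, the constraint this property documents (a power of two, no
smaller than 4 bytes) can be written as a small standalone check; this is a
sketch of the documented rule rather than a copy of the QEMU property setter:

    #include <stdbool.h>
    #include <stdint.h>

    /* A pmp-granularity value is acceptable when it is a power of two and
     * at least the 4-byte architectural minimum. */
    static bool pmp_granularity_is_valid(uint32_t value)
    {
        return value >= 4 && (value & (value - 1)) == 0;
    }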
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
---
target/riscv/cpu.c | 39 +++++++++++++++++++++++++++++++
target/riscv/cpu.h | 1 +
target/riscv/cpu_cfg_fields.h.inc | 1 +
3 files changed, 41 insertions(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index a877018ab0..73d4280d7c 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -1121,6 +1121,7 @@ static void riscv_cpu_init(Object *obj)
cpu->cfg.cbop_blocksize = 64;
cpu->cfg.cboz_blocksize = 64;
cpu->cfg.pmp_regions = 16;
+ cpu->cfg.pmp_granularity = MIN_RISCV_PMP_GRANULARITY;
cpu->env.vext_ver = VEXT_VERSION_1_00_0;
cpu->cfg.max_satp_mode = -1;
@@ -1606,6 +1607,43 @@ static const PropertyInfo prop_num_pmp_regions = {
.set = prop_num_pmp_regions_set,
};
+static void prop_pmp_granularity_set(Object *obj, Visitor *v, const char *name,
+ void *opaque, Error **errp)
+{
+ RISCVCPU *cpu = RISCV_CPU(obj);
+ uint32_t value;
+
+ visit_type_uint32(v, name, &value, errp);
+
+ if ((value < MIN_RISCV_PMP_GRANULARITY) && (value & (value - 1))) {
+ error_setg(errp, "PMP granularity must be a power of 2 and at least %d",
+ MIN_RISCV_PMP_GRANULARITY);
+ return;
+ }
+
+ if (cpu->cfg.pmp_granularity != value && riscv_cpu_is_vendor(obj)) {
+ cpu_set_prop_err(cpu, name, errp);
+ return;
+ }
+
+ cpu_option_add_user_setting(name, value);
+ cpu->cfg.pmp_granularity = value;
+}
+
+static void prop_pmp_granularity_get(Object *obj, Visitor *v, const char *name,
+ void *opaque, Error **errp)
+{
+ uint32_t value = RISCV_CPU(obj)->cfg.pmp_granularity;
+
+ visit_type_uint32(v, name, &value, errp);
+}
+
+static const PropertyInfo prop_pmp_granularity = {
+ .description = "pmp-granularity",
+ .get = prop_pmp_granularity_get,
+ .set = prop_pmp_granularity_set,
+};
+
static int priv_spec_from_str(const char *priv_spec_str)
{
int priv_version = -1;
@@ -2606,6 +2644,7 @@ static const Property riscv_cpu_properties[] = {
{.name = "mmu", .info = &prop_mmu},
{.name = "pmp", .info = &prop_pmp},
{.name = "num-pmp-regions", .info = &prop_num_pmp_regions},
+ {.name = "pmp-granularity", .info = &prop_pmp_granularity},
{.name = "priv_spec", .info = &prop_priv_spec},
{.name = "vext_spec", .info = &prop_vext_spec},
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 2c2266415e..04711f93a2 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -176,6 +176,7 @@ extern RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[];
#define MAX_RISCV_PMPS (64)
#define OLD_MAX_RISCV_PMPS (16)
+#define MIN_RISCV_PMP_GRANULARITY 4
#if !defined(CONFIG_USER_ONLY)
#include "pmp.h"
diff --git a/target/riscv/cpu_cfg_fields.h.inc b/target/riscv/cpu_cfg_fields.h.inc
index e2d116f0df..a154ecdc79 100644
--- a/target/riscv/cpu_cfg_fields.h.inc
+++ b/target/riscv/cpu_cfg_fields.h.inc
@@ -166,6 +166,7 @@ TYPED_FIELD(uint16_t, cbom_blocksize, 0)
TYPED_FIELD(uint16_t, cbop_blocksize, 0)
TYPED_FIELD(uint16_t, cboz_blocksize, 0)
TYPED_FIELD(uint8_t, pmp_regions, 0)
+TYPED_FIELD(uint32_t, pmp_granularity, 0)
TYPED_FIELD(int8_t, max_satp_mode, -1)
--
2.48.1
* [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints
2025-10-14 8:23 [PATCH v1 0/2] Make PMP granularity configurable Jay Chang
2025-10-14 8:23 ` [PATCH v1 1/2] target/riscv: " Jay Chang
@ 2025-10-14 8:23 ` Jay Chang
2025-10-17 1:02 ` Alistair Francis
1 sibling, 1 reply; 7+ messages in thread
From: Jay Chang @ 2025-10-14 8:23 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, Jay Chang, Frank Chang,
Jim Shu
This patch ensures that pmpcfg and pmpaddr comply with WARL constraints.
When the PMP granularity is greater than 4 bytes, NA4 mode is not valid
per the spec and will be silently ignored.
According to the spec, changing pmpcfg.A only affects the "read" value
of pmpaddr. When G >= 2 and pmpcfg.A is NAPOT, bits pmpaddr[G-2:0] read
as all ones. When G >= 1 and pmpcfg.A is OFF or TOR, bits pmpaddr[G-1:0]
read as all zeros. This allows software to read back the correct
granularity value.
In addition, when updating the PMP address rule in TOR mode,
the start and end addresses of the PMP region should be aligned
to the PMP granularity. (The current spec only states that, in TOR mode,
bits pmpaddr[G-1:0] do not affect the TOR address-matching logic.)
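As a worked example (assuming pmp-granularity=1024, hence G = log2(1024) - 2
= 8; this snippet is illustrative and not part of the patch), the read-back
masking behaves as follows:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int g = 8;                            /* granularity of 1024 bytes */
        uint64_t stored = 0x80123455;         /* value software wrote to pmpaddr */

        /* A = OFF or TOR: bits [G-1:0] read as zeros. */
        uint64_t read_off_tor = stored & ~((1ULL << g) - 1);
        /* A = NAPOT: bits [G-2:0] read as ones. */
        uint64_t read_napot = stored | ((1ULL << (g - 1)) - 1);

        printf("OFF/TOR read: 0x%llx\n", (unsigned long long)read_off_tor); /* 0x80123400 */
        printf("NAPOT read:   0x%llx\n", (unsigned long long)read_napot);   /* 0x8012347f */
        return 0;
    }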
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
---
target/riscv/pmp.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 72f1372a49..a15265c8d2 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -108,6 +108,17 @@ static int pmp_is_invalid_smepmp_cfg(CPURISCVState *env, uint8_t val)
g_assert_not_reached();
}
}
+/*
+ * Calculate PMP granularity value 'g'
+ *
+ * The granularity value 'g' is defined as log2(granularity) - 2, where
+ * granularity is the minimum alignment requirement for PMP regions in bytes.
+ */
+static inline int pmp_get_granularity_g(CPURISCVState *env)
+{
+ return __builtin_ctz(riscv_cpu_cfg(env)->pmp_granularity >> 2);
+}
+
/*
* Count the number of active rules.
@@ -153,6 +164,15 @@ static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
qemu_log_mask(LOG_GUEST_ERROR,
"ignoring pmpcfg write - invalid\n");
} else {
+ uint8_t a_field = pmp_get_a_field(val);
+ /*
+ * When granularity g >= 1 (i.e., granularity > 4 bytes),
+ * the NA4 (Naturally Aligned 4-byte) mode is not selectable
+ */
+ if ((riscv_cpu_cfg(env)->pmp_granularity >
+ MIN_RISCV_PMP_GRANULARITY) && (a_field == PMP_AMATCH_NA4)) {
+ return false;
+ }
env->pmp_state.pmp[pmp_index].cfg_reg = val;
pmp_update_rule_addr(env, pmp_index);
return true;
@@ -199,6 +219,7 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
target_ulong prev_addr = 0u;
hwaddr sa = 0u;
hwaddr ea = 0u;
+ int g = pmp_get_granularity_g(env);
if (pmp_index >= 1u) {
prev_addr = env->pmp_state.pmp[pmp_index - 1].addr_reg;
@@ -211,6 +232,11 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
break;
case PMP_AMATCH_TOR:
+ /* Bits pmpaddr[G-1:0] do not affect the TOR address-matching logic. */
+ if (g >= 1) {
+ prev_addr &= ~((1UL << g) - 1UL);
+ this_addr &= ~((1UL << g) - 1UL);
+ }
if (prev_addr >= this_addr) {
sa = ea = 0u;
break;
@@ -577,6 +603,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
/*
* Handle a read from a pmpaddr CSR
+ * Change A field of pmpcfg affects the read value of pmpaddr
*/
target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
{
@@ -585,6 +612,25 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
if (addr_index < pmp_regions) {
val = env->pmp_state.pmp[addr_index].addr_reg;
+ int g = pmp_get_granularity_g(env);
+ switch (pmp_get_a_field(env->pmp_state.pmp[addr_index].cfg_reg)) {
+ case PMP_AMATCH_OFF:
+ /* fallthrough */
+ case PMP_AMATCH_TOR:
+ /* Bit [g-1:0] read all zero */
+ if (g >= 1 && g < TARGET_LONG_BITS) {
+ val &= ~((1UL << g) - 1UL);
+ }
+ break;
+ case PMP_AMATCH_NAPOT:
+ /* Bit [g-2:0] read all one */
+ if (g >= 2 && g < TARGET_LONG_BITS) {
+ val |= ((1UL << (g - 1)) - 1UL);
+ }
+ break;
+ default:
+ break;
+ }
trace_pmpaddr_csr_read(env->mhartid, addr_index, val);
} else {
qemu_log_mask(LOG_GUEST_ERROR,
--
2.48.1
* Re: [PATCH v1 1/2] target/riscv: Make PMP granularity configurable
2025-10-14 8:23 ` [PATCH v1 1/2] target/riscv: " Jay Chang
@ 2025-10-17 0:56 ` Alistair Francis
0 siblings, 0 replies; 7+ messages in thread
From: Alistair Francis @ 2025-10-17 0:56 UTC (permalink / raw)
To: Jay Chang
Cc: qemu-devel, qemu-riscv, Palmer Dabbelt, Alistair Francis,
Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Frank Chang,
Jim Shu
On Tue, Oct 14, 2025 at 6:24 PM Jay Chang <jay.chang@sifive.com> wrote:
>
> Previously, the PMP granularity in qemu always used a minimum
> granularity of 4 bytes, this patch add pmp-granularity to allow
> platforms to configure the value.
>
> A new CPU parameter pmp-granularity has been introduced to the QEMU
> command line. For example:
>
> -cpu rv64, g=true, c=true, pmp=true, pmp-granularity=1024
>
> If no specific value is provided, the default value is 4 bytes.
>
> Signed-off-by: Jay Chang <jay.chang@sifive.com>
> Reviewed-by: Frank Chang <frank.chang@sifive.com>
> Reviewed-by: Jim Shu <jim.shu@sifive.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.c | 39 +++++++++++++++++++++++++++++++
> target/riscv/cpu.h | 1 +
> target/riscv/cpu_cfg_fields.h.inc | 1 +
> 3 files changed, 41 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index a877018ab0..73d4280d7c 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -1121,6 +1121,7 @@ static void riscv_cpu_init(Object *obj)
> cpu->cfg.cbop_blocksize = 64;
> cpu->cfg.cboz_blocksize = 64;
> cpu->cfg.pmp_regions = 16;
> + cpu->cfg.pmp_granularity = MIN_RISCV_PMP_GRANULARITY;
> cpu->env.vext_ver = VEXT_VERSION_1_00_0;
> cpu->cfg.max_satp_mode = -1;
>
> @@ -1606,6 +1607,43 @@ static const PropertyInfo prop_num_pmp_regions = {
> .set = prop_num_pmp_regions_set,
> };
>
> +static void prop_pmp_granularity_set(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> + RISCVCPU *cpu = RISCV_CPU(obj);
> + uint32_t value;
> +
> + visit_type_uint32(v, name, &value, errp);
> +
> + if ((value < MIN_RISCV_PMP_GRANULARITY) && (value & (value - 1))) {
> + error_setg(errp, "PMP granularity must be a power of 2 and at least %d",
> + MIN_RISCV_PMP_GRANULARITY);
> + return;
> + }
> +
> + if (cpu->cfg.pmp_granularity != value && riscv_cpu_is_vendor(obj)) {
> + cpu_set_prop_err(cpu, name, errp);
> + return;
> + }
> +
> + cpu_option_add_user_setting(name, value);
> + cpu->cfg.pmp_granularity = value;
> +}
> +
> +static void prop_pmp_granularity_get(Object *obj, Visitor *v, const char *name,
> + void *opaque, Error **errp)
> +{
> + uint32_t value = RISCV_CPU(obj)->cfg.pmp_granularity;
> +
> + visit_type_uint32(v, name, &value, errp);
> +}
> +
> +static const PropertyInfo prop_pmp_granularity = {
> + .description = "pmp-granularity",
> + .get = prop_pmp_granularity_get,
> + .set = prop_pmp_granularity_set,
> +};
> +
> static int priv_spec_from_str(const char *priv_spec_str)
> {
> int priv_version = -1;
> @@ -2606,6 +2644,7 @@ static const Property riscv_cpu_properties[] = {
> {.name = "mmu", .info = &prop_mmu},
> {.name = "pmp", .info = &prop_pmp},
> {.name = "num-pmp-regions", .info = &prop_num_pmp_regions},
> + {.name = "pmp-granularity", .info = &prop_pmp_granularity},
>
> {.name = "priv_spec", .info = &prop_priv_spec},
> {.name = "vext_spec", .info = &prop_vext_spec},
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 2c2266415e..04711f93a2 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -176,6 +176,7 @@ extern RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[];
>
> #define MAX_RISCV_PMPS (64)
> #define OLD_MAX_RISCV_PMPS (16)
> +#define MIN_RISCV_PMP_GRANULARITY 4
>
> #if !defined(CONFIG_USER_ONLY)
> #include "pmp.h"
> diff --git a/target/riscv/cpu_cfg_fields.h.inc b/target/riscv/cpu_cfg_fields.h.inc
> index e2d116f0df..a154ecdc79 100644
> --- a/target/riscv/cpu_cfg_fields.h.inc
> +++ b/target/riscv/cpu_cfg_fields.h.inc
> @@ -166,6 +166,7 @@ TYPED_FIELD(uint16_t, cbom_blocksize, 0)
> TYPED_FIELD(uint16_t, cbop_blocksize, 0)
> TYPED_FIELD(uint16_t, cboz_blocksize, 0)
> TYPED_FIELD(uint8_t, pmp_regions, 0)
> +TYPED_FIELD(uint32_t, pmp_granularity, 0)
>
> TYPED_FIELD(int8_t, max_satp_mode, -1)
>
> --
> 2.48.1
>
>
* Re: [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints
2025-10-14 8:23 ` [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints Jay Chang
@ 2025-10-17 1:02 ` Alistair Francis
2025-10-17 12:29 ` Jay Chang
0 siblings, 1 reply; 7+ messages in thread
From: Alistair Francis @ 2025-10-17 1:02 UTC (permalink / raw)
To: Jay Chang
Cc: qemu-devel, qemu-riscv, Palmer Dabbelt, Alistair Francis,
Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Frank Chang,
Jim Shu
On Tue, Oct 14, 2025 at 6:25 PM Jay Chang <jay.chang@sifive.com> wrote:
>
> This patch ensure pmpcfg and pmpaddr comply with WARL constraints.
>
> When the PMP granularity is greater than 4 bytes, NA4 mode is not valid
> per the spec and will be silently ignored.
>
> According to the spec, changing pmpcfg.A only affects the "read" value
> of pmpaddr. When G > 2 and pmpcfg.A is NAPOT, bits pmpaddr[G-2:0] read
> as all ones. When G > 1 and pmpcfg.A is OFF or TOR, bits pmpaddr[G-1:0]
> read as all zeros. This allows software to read back the correct
> granularity value.
>
> In addition, when updating the PMP address rule in TOR mode,
> the start and end addresses of the PMP region should be aligned
> to the PMP granularity. (The current SPEC only state in TOR mode
> that bits pmpaddr[G-1:0] do not affect the TOR address-matching logic.)
>
> Signed-off-by: Jay Chang <jay.chang@sifive.com>
> Reviewed-by: Frank Chang <frank.chang@sifive.com>
> Reviewed-by: Jim Shu <jim.shu@sifive.com>
> ---
> target/riscv/pmp.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 46 insertions(+)
>
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index 72f1372a49..a15265c8d2 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -108,6 +108,17 @@ static int pmp_is_invalid_smepmp_cfg(CPURISCVState *env, uint8_t val)
> g_assert_not_reached();
> }
> }
> +/*
> + * Calculate PMP granularity value 'g'
> + *
> + * The granularity value 'g' is defined as log2(granularity) - 2, where
> + * granularity is the minimum alignment requirement for PMP regions in bytes.
> + */
> +static inline int pmp_get_granularity_g(CPURISCVState *env)
> +{
> + return __builtin_ctz(riscv_cpu_cfg(env)->pmp_granularity >> 2);
> +}
> +
>
> /*
> * Count the number of active rules.
> @@ -153,6 +164,15 @@ static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
> qemu_log_mask(LOG_GUEST_ERROR,
> "ignoring pmpcfg write - invalid\n");
> } else {
> + uint8_t a_field = pmp_get_a_field(val);
> + /*
> + * When granularity g >= 1 (i.e., granularity > 4 bytes),
> + * the NA4 (Naturally Aligned 4-byte) mode is not selectable
> + */
> + if ((riscv_cpu_cfg(env)->pmp_granularity >
> + MIN_RISCV_PMP_GRANULARITY) && (a_field == PMP_AMATCH_NA4)) {
> + return false;
> + }
> env->pmp_state.pmp[pmp_index].cfg_reg = val;
> pmp_update_rule_addr(env, pmp_index);
> return true;
> @@ -199,6 +219,7 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
> target_ulong prev_addr = 0u;
> hwaddr sa = 0u;
> hwaddr ea = 0u;
> + int g = pmp_get_granularity_g(env);
>
> if (pmp_index >= 1u) {
> prev_addr = env->pmp_state.pmp[pmp_index - 1].addr_reg;
> @@ -211,6 +232,11 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
> break;
>
> case PMP_AMATCH_TOR:
> + /* Bits pmpaddr[G-1:0] do not affect the TOR address-matching logic. */
> + if (g >= 1) {
> + prev_addr &= ~((1UL << g) - 1UL);
> + this_addr &= ~((1UL << g) - 1UL);
> + }
> if (prev_addr >= this_addr) {
> sa = ea = 0u;
> break;
> @@ -577,6 +603,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
>
> /*
> * Handle a read from a pmpaddr CSR
> + * Change A field of pmpcfg affects the read value of pmpaddr
> */
> target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
> {
> @@ -585,6 +612,25 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
>
> if (addr_index < pmp_regions) {
> val = env->pmp_state.pmp[addr_index].addr_reg;
> + int g = pmp_get_granularity_g(env);
> + switch (pmp_get_a_field(env->pmp_state.pmp[addr_index].cfg_reg)) {
> + case PMP_AMATCH_OFF:
> + /* fallthrough */
> + case PMP_AMATCH_TOR:
> + /* Bit [g-1:0] read all zero */
> + if (g >= 1 && g < TARGET_LONG_BITS) {
> + val &= ~((1UL << g) - 1UL);
> + }
> + break;
> + case PMP_AMATCH_NAPOT:
> + /* Bit [g-2:0] read all one */
> + if (g >= 2 && g < TARGET_LONG_BITS) {
> + val |= ((1UL << (g - 1)) - 1UL);
> + }
ULL instead of UL?
Also should we just ensure a valid value is written? Instead of
writing something invalid and then masking the read
Alistair
> + break;
> + default:
> + break;
> + }
> trace_pmpaddr_csr_read(env->mhartid, addr_index, val);
> } else {
> qemu_log_mask(LOG_GUEST_ERROR,
> --
> 2.48.1
>
>
* Re: [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints
2025-10-17 1:02 ` Alistair Francis
@ 2025-10-17 12:29 ` Jay Chang
0 siblings, 0 replies; 7+ messages in thread
From: Jay Chang @ 2025-10-17 12:29 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-devel, qemu-riscv, Palmer Dabbelt, Alistair Francis,
Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Frank Chang,
Jim Shu
I’ll change UL to ULL.
As for the second point, according to the spec:
“Although changing pmpcfgA[1] affects the value read from pmpaddr, it does
not affect the underlying value stored in that register.”
If we modify the value at write time instead of masking on read, it could
cause issues during software context switches.
For example, if pmpaddr is programmed in NAPOT mode and software switches
to TOR mode for context save/restore (i.e., read pmpaddr → write pmpaddr in
TOR mode → switch back to NAPOT), the NAPOT granularity bits pmpaddr[G-2:0]
would be lost because pmpaddr[G-1:0] reads as all zeros in TOR/OFF mode.
This behavior would be incorrect.
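To make the scenario concrete, here is a minimal standalone model of that
save/restore sequence (assuming a 1 KiB grain, a single pmpaddr entry and a
simplified NAPOT decode; this is not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    #define G 8                                  /* log2(1024) - 2 */
    #define TOR_CLEAR  (~((1ULL << G) - 1))      /* what OFF/TOR write-masking would drop */
    #define NAPOT_ONES ((1ULL << (G - 1)) - 1)   /* bits a NAPOT read forces to ones */

    /* Decode a NAPOT pmpaddr value into region base and size. */
    static void decode_napot(uint64_t a, uint64_t *base, uint64_t *size)
    {
        int t1 = __builtin_ctzll(~a);            /* lowest clear bit */
        *size = 1ULL << (t1 + 3);
        *base = (a & ~((1ULL << t1) - 1)) << 2;
    }

    int main(void)
    {
        uint64_t stored = 0x2007f;               /* 1 KiB NAPOT region at 0x80000 */
        uint64_t base, size;

        uint64_t saved = stored | NAPOT_ONES;    /* context save: read in NAPOT mode */

        /* Mask-on-read (this patch): the TOR-mode restore stores the full value,
         * so switching A back to NAPOT recovers the original region. */
        decode_napot(saved, &base, &size);
        printf("mask-on-read restore:  base=0x%llx size=%llu\n",
               (unsigned long long)base, (unsigned long long)size);

        /* Mask-on-write (the alternative): the TOR-mode restore drops [G-1:0],
         * so the region shrinks after switching A back to NAPOT. */
        decode_napot(saved & TOR_CLEAR, &base, &size);
        printf("mask-on-write restore: base=0x%llx size=%llu\n",
               (unsigned long long)base, (unsigned long long)size);
        return 0;
    }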
On Fri, Oct 17, 2025 at 9:02 AM Alistair Francis <alistair23@gmail.com>
wrote:
> On Tue, Oct 14, 2025 at 6:25 PM Jay Chang <jay.chang@sifive.com> wrote:
> >
> > This patch ensure pmpcfg and pmpaddr comply with WARL constraints.
> >
> > When the PMP granularity is greater than 4 bytes, NA4 mode is not valid
> > per the spec and will be silently ignored.
> >
> > According to the spec, changing pmpcfg.A only affects the "read" value
> > of pmpaddr. When G > 2 and pmpcfg.A is NAPOT, bits pmpaddr[G-2:0] read
> > as all ones. When G > 1 and pmpcfg.A is OFF or TOR, bits pmpaddr[G-1:0]
> > read as all zeros. This allows software to read back the correct
> > granularity value.
> >
> > In addition, when updating the PMP address rule in TOR mode,
> > the start and end addresses of the PMP region should be aligned
> > to the PMP granularity. (The current SPEC only state in TOR mode
> > that bits pmpaddr[G-1:0] do not affect the TOR address-matching logic.)
> >
> > Signed-off-by: Jay Chang <jay.chang@sifive.com>
> > Reviewed-by: Frank Chang <frank.chang@sifive.com>
> > Reviewed-by: Jim Shu <jim.shu@sifive.com>
> > ---
> > target/riscv/pmp.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 46 insertions(+)
> >
> > diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> > index 72f1372a49..a15265c8d2 100644
> > --- a/target/riscv/pmp.c
> > +++ b/target/riscv/pmp.c
> > @@ -108,6 +108,17 @@ static int pmp_is_invalid_smepmp_cfg(CPURISCVState
> *env, uint8_t val)
> > g_assert_not_reached();
> > }
> > }
> > +/*
> > + * Calculate PMP granularity value 'g'
> > + *
> > + * The granularity value 'g' is defined as log2(granularity) - 2, where
> > + * granularity is the minimum alignment requirement for PMP regions in
> bytes.
> > + */
> > +static inline int pmp_get_granularity_g(CPURISCVState *env)
> > +{
> > + return __builtin_ctz(riscv_cpu_cfg(env)->pmp_granularity >> 2);
> > +}
> > +
> >
> > /*
> > * Count the number of active rules.
> > @@ -153,6 +164,15 @@ static bool pmp_write_cfg(CPURISCVState *env,
> uint32_t pmp_index, uint8_t val)
> > qemu_log_mask(LOG_GUEST_ERROR,
> > "ignoring pmpcfg write - invalid\n");
> > } else {
> > + uint8_t a_field = pmp_get_a_field(val);
> > + /*
> > + * When granularity g >= 1 (i.e., granularity > 4 bytes),
> > + * the NA4 (Naturally Aligned 4-byte) mode is not selectable
> > + */
> > + if ((riscv_cpu_cfg(env)->pmp_granularity >
> > + MIN_RISCV_PMP_GRANULARITY) && (a_field ==
> PMP_AMATCH_NA4)) {
> > + return false;
> > + }
> > env->pmp_state.pmp[pmp_index].cfg_reg = val;
> > pmp_update_rule_addr(env, pmp_index);
> > return true;
> > @@ -199,6 +219,7 @@ void pmp_update_rule_addr(CPURISCVState *env,
> uint32_t pmp_index)
> > target_ulong prev_addr = 0u;
> > hwaddr sa = 0u;
> > hwaddr ea = 0u;
> > + int g = pmp_get_granularity_g(env);
> >
> > if (pmp_index >= 1u) {
> > prev_addr = env->pmp_state.pmp[pmp_index - 1].addr_reg;
> > @@ -211,6 +232,11 @@ void pmp_update_rule_addr(CPURISCVState *env,
> uint32_t pmp_index)
> > break;
> >
> > case PMP_AMATCH_TOR:
> > + /* Bits pmpaddr[G-1:0] do not affect the TOR address-matching
> logic. */
> > + if (g >= 1) {
> > + prev_addr &= ~((1UL << g) - 1UL);
> > + this_addr &= ~((1UL << g) - 1UL);
> > + }
> > if (prev_addr >= this_addr) {
> > sa = ea = 0u;
> > break;
> > @@ -577,6 +603,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t
> addr_index,
> >
> > /*
> > * Handle a read from a pmpaddr CSR
> > + * Change A field of pmpcfg affects the read value of pmpaddr
> > */
> > target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
> > {
> > @@ -585,6 +612,25 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env,
> uint32_t addr_index)
> >
> > if (addr_index < pmp_regions) {
> > val = env->pmp_state.pmp[addr_index].addr_reg;
> > + int g = pmp_get_granularity_g(env);
> > + switch
> (pmp_get_a_field(env->pmp_state.pmp[addr_index].cfg_reg)) {
> > + case PMP_AMATCH_OFF:
> > + /* fallthrough */
> > + case PMP_AMATCH_TOR:
> > + /* Bit [g-1:0] read all zero */
> > + if (g >= 1 && g < TARGET_LONG_BITS) {
> > + val &= ~((1UL << g) - 1UL);
> > + }
> > + break;
> > + case PMP_AMATCH_NAPOT:
> > + /* Bit [g-2:0] read all one */
> > + if (g >= 2 && g < TARGET_LONG_BITS) {
> > + val |= ((1UL << (g - 1)) - 1UL);
> > + }
>
> ULL instead of UL?
>
> Also should we just ensure a valid value is written? Instead of
> writing something invalid and then masking the read
>
> Alistair
>
> > + break;
> > + default:
> > + break;
> > + }
> > trace_pmpaddr_csr_read(env->mhartid, addr_index, val);
> > } else {
> > qemu_log_mask(LOG_GUEST_ERROR,
> > --
> > 2.48.1
> >
> >
>
* [PATCH v1 2/2] target/riscv: Make PMP CSRs conform to WARL constraints
2025-10-22 2:26 [PATCH v1 0/2] Make PMP granularity configurable Jay Chang
@ 2025-10-22 2:26 ` Jay Chang
0 siblings, 0 replies; 7+ messages in thread
From: Jay Chang @ 2025-10-22 2:26 UTC (permalink / raw)
To: qemu-devel, qemu-riscv
Cc: Palmer Dabbelt, Alistair Francis, Weiwei Li,
Daniel Henrique Barboza, Liu Zhiwei, Jay Chang, Frank Chang,
Jim Shu
This patch ensures that pmpcfg and pmpaddr comply with WARL constraints.
When the PMP granularity is greater than 4 bytes, NA4 mode is not valid
per the spec and will be silently ignored.
According to the spec, changing pmpcfg.A only affects the "read" value
of pmpaddr. When G >= 2 and pmpcfg.A is NAPOT, bits pmpaddr[G-2:0] read
as all ones. When G >= 1 and pmpcfg.A is OFF or TOR, bits pmpaddr[G-1:0]
read as all zeros. This allows software to read back the correct
granularity value.
In addition, when updating the PMP address rule in TOR mode,
the start and end addresses of the PMP region should be aligned
to the PMP granularity. (The current spec only states that, in TOR mode,
bits pmpaddr[G-1:0] do not affect the TOR address-matching logic.)
Signed-off-by: Jay Chang <jay.chang@sifive.com>
Reviewed-by: Frank Chang <frank.chang@sifive.com>
Reviewed-by: Jim Shu <jim.shu@sifive.com>
---
target/riscv/pmp.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 72f1372a49..a15265c8d2 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -108,6 +108,17 @@ static int pmp_is_invalid_smepmp_cfg(CPURISCVState *env, uint8_t val)
g_assert_not_reached();
}
}
+/*
+ * Calculate PMP granularity value 'g'
+ *
+ * The granularity value 'g' is defined as log2(granularity) - 2, where
+ * granularity is the minimum alignment requirement for PMP regions in bytes.
+ */
+static inline int pmp_get_granularity_g(CPURISCVState *env)
+{
+ return __builtin_ctz(riscv_cpu_cfg(env)->pmp_granularity >> 2);
+}
+
/*
* Count the number of active rules.
@@ -153,6 +164,15 @@ static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
qemu_log_mask(LOG_GUEST_ERROR,
"ignoring pmpcfg write - invalid\n");
} else {
+ uint8_t a_field = pmp_get_a_field(val);
+ /*
+ * When granularity g >= 1 (i.e., granularity > 4 bytes),
+ * the NA4 (Naturally Aligned 4-byte) mode is not selectable
+ */
+ if ((riscv_cpu_cfg(env)->pmp_granularity >
+ MIN_RISCV_PMP_GRANULARITY) && (a_field == PMP_AMATCH_NA4)) {
+ return false;
+ }
env->pmp_state.pmp[pmp_index].cfg_reg = val;
pmp_update_rule_addr(env, pmp_index);
return true;
@@ -199,6 +219,7 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
target_ulong prev_addr = 0u;
hwaddr sa = 0u;
hwaddr ea = 0u;
+ int g = pmp_get_granularity_g(env);
if (pmp_index >= 1u) {
prev_addr = env->pmp_state.pmp[pmp_index - 1].addr_reg;
@@ -211,6 +232,11 @@ void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index)
break;
case PMP_AMATCH_TOR:
+ /* Bits pmpaddr[G-1:0] do not affect the TOR address-matching logic. */
+ if (g >= 1) {
+ prev_addr &= ~((1UL << g) - 1UL);
+ this_addr &= ~((1UL << g) - 1UL);
+ }
if (prev_addr >= this_addr) {
sa = ea = 0u;
break;
@@ -577,6 +603,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
/*
* Handle a read from a pmpaddr CSR
+ * Change A field of pmpcfg affects the read value of pmpaddr
*/
target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
{
@@ -585,6 +612,25 @@ target_ulong pmpaddr_csr_read(CPURISCVState *env, uint32_t addr_index)
if (addr_index < pmp_regions) {
val = env->pmp_state.pmp[addr_index].addr_reg;
+ int g = pmp_get_granularity_g(env);
+ switch (pmp_get_a_field(env->pmp_state.pmp[addr_index].cfg_reg)) {
+ case PMP_AMATCH_OFF:
+ /* fallthrough */
+ case PMP_AMATCH_TOR:
+ /* Bit [g-1:0] read all zero */
+ if (g >= 1 && g < TARGET_LONG_BITS) {
+ val &= ~((1UL << g) - 1UL);
+ }
+ break;
+ case PMP_AMATCH_NAPOT:
+ /* Bit [g-2:0] read all one */
+ if (g >= 2 && g < TARGET_LONG_BITS) {
+ val |= ((1UL << (g - 1)) - 1UL);
+ }
+ break;
+ default:
+ break;
+ }
trace_pmpaddr_csr_read(env->mhartid, addr_index, val);
} else {
qemu_log_mask(LOG_GUEST_ERROR,
--
2.48.1