* [PATCH v4 0/7] target/riscv: Fix PMP related problem
@ 2023-04-22 13:03 Weiwei Li
2023-04-22 13:03 ` [PATCH v4 1/7] target/riscv: Update pmp_get_tlb_size() Weiwei Li
` (6 more replies)
0 siblings, 7 replies; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
This patchset fixes the PMP bypass problem reported in https://gitlab.com/qemu-project/qemu/-/issues/1542:
The translation result is cached in the TLB when the matched PMP entry covers the whole page. However, a higher-priority PMP entry may cover only part of the page (without matching the access address), which means different regions of the page may have different permissions, so the translation cannot be cached in that case either (patch 1).
Writing to pmpaddr didn't trigger a TLB flush (patch 3).
We set the tlb_size to 1 so that TLB_INVALID_MASK is set and the next access again goes through tlb_fill. However, this does not work in tb_gen_code() => get_page_addr_code_hostp(): the TLB host address will be cached there, and the following instructions can use this host address directly, which may bypass the PMP-related checks (patch 5).
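For reference, the per-page decision that patch 1 introduces boils down to roughly the following (a simplified sketch, not the literal code; the real pmp_get_tlb_size() in target/riscv/pmp.c below also handles the PMP-disabled and no-rule cases):

    /* Sketch: the highest-priority PMP entry overlapping the page decides the size. */
    target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr)
    {
        target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
        target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
        int i;

        for (i = 0; i < MAX_RISCV_PMPS; i++) {
            target_ulong pmp_sa, pmp_ea;

            if (pmp_get_a_field(env->pmp_state.pmp[i].cfg_reg) == PMP_AMATCH_OFF) {
                continue;                      /* disabled entry, ignore */
            }
            pmp_sa = env->pmp_state.addr[i].sa;
            pmp_ea = env->pmp_state.addr[i].ea;

            if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
                return TARGET_PAGE_SIZE;       /* covers the whole page: safe to cache */
            }
            if ((pmp_sa >= tlb_sa && pmp_sa <= tlb_ea) ||
                (pmp_ea >= tlb_sa && pmp_ea <= tlb_ea)) {
                return 1;                      /* partial overlap: don't cache this page */
            }
        }
        return TARGET_PAGE_SIZE;               /* no entry touches the page */
    }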
The port is available here:
https://github.com/plctlab/plct-qemu/tree/plct-pmp-fix-v4
v4:
Update comments for Patch 1, and move the partial-coverage check code from Patch 2 to Patch 1
Restore the log message change in Patch 2
Update the commit message and the way the problem is fixed in Patch 6
v3:
Ignore disabled PMP entries in pmp_get_tlb_size() in Patch 1
Drop Patch 5, since the TB jump cache is already flushed in tlb_flush, so flushing the TBs seems unnecessary
Fix commit message problems in Patch 8 (Patch 7 in the new patchset)
v2:
Update commit message for patch 1
Add a default tlb_size when PMP is disabled or there are no rules, and only get the TLB size when translation succeeds, in patch 2
Update get_page_addr_code_hostp instead of probe_access_internal to fix the cached host address for instruction fetch in patch 6
Add patch 7 to make the short cut really work in pmp_hart_has_privs
Add patch 8 to use pmp_update_rule_addr() and pmp_update_rule_nums() separately
Weiwei Li (7):
target/riscv: Update pmp_get_tlb_size()
target/riscv: Move pmp_get_tlb_size apart from
get_physical_address_pmp
target/riscv: Flush TLB when pmpaddr is updated
target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes
accel/tcg: Uncache the host address for instruction fetch when tlb
size < 1
target/riscv: Make the short cut really work in pmp_hart_has_privs
target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write
accel/tcg/cputlb.c | 5 +++
target/riscv/cpu_helper.c | 19 +++-----
target/riscv/pmp.c | 91 +++++++++++++++++++++++++++++----------
target/riscv/pmp.h | 3 +-
4 files changed, 80 insertions(+), 38 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v4 1/7] target/riscv: Update pmp_get_tlb_size()
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
2023-04-28 2:23 ` LIU Zhiwei
2023-04-22 13:03 ` [PATCH v4 2/7] target/riscv: Move pmp_get_tlb_size apart from get_physical_address_pmp Weiwei Li
` (5 subsequent siblings)
6 siblings, 1 reply; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
The PMP entries before the matched PMP entry (including the matched entry itself)
may cover only part of the TLB page, which may make different regions in
that page allow different RWX privs. For example, with PMP0 (0x80000008~0x8000000F,
R) and PMP1 (0x80001000~0x80001FFF, RWX), a write access to 0x80000000 will
match PMP1. However, we cannot cache the translation result in the TLB, since
that would let a write access to 0x80000008 bypass the check of PMP0. So we
should check all PMP entries, not just the matched one, in pmp_get_tlb_size()
and set the tlb_size to 1 in this case.
Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP rules.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
---
target/riscv/cpu_helper.c | 7 ++---
target/riscv/pmp.c | 64 ++++++++++++++++++++++++++++++---------
target/riscv/pmp.h | 3 +-
3 files changed, 52 insertions(+), 22 deletions(-)
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 433ea529b0..075fc0538a 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -703,11 +703,8 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot,
}
*prot = pmp_priv_to_page_prot(pmp_priv);
- if ((tlb_size != NULL) && pmp_index != MAX_RISCV_PMPS) {
- target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
- target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
-
- *tlb_size = pmp_get_tlb_size(env, pmp_index, tlb_sa, tlb_ea);
+ if (tlb_size != NULL) {
+ *tlb_size = pmp_get_tlb_size(env, addr);
}
return TRANSLATE_SUCCESS;
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 1f5aca42e8..ad20a319c1 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -601,28 +601,62 @@ target_ulong mseccfg_csr_read(CPURISCVState *env)
}
/*
- * Calculate the TLB size if the start address or the end address of
- * PMP entry is presented in the TLB page.
+ * Calculate the TLB size. If the PMP rules may make different regions in
+ * the TLB page of 'addr' allow different RWX privs, set the size to 1
+ * (to make the translation result uncached in the TLB and only be used for
+ * a single translation). Set the size to TARGET_PAGE_SIZE otherwise.
*/
-target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
- target_ulong tlb_sa, target_ulong tlb_ea)
+target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr)
{
- target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
- target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
+ target_ulong pmp_sa;
+ target_ulong pmp_ea;
+ target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
+ target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
+ int i;
- if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
+ /*
+ * If PMP is not supported or there is no PMP rule, then either the allowed
+ * RWX privs of the page are not affected by PMP at all, or PMP provides the
+ * same option (disallow accesses or allow default RWX privs) for all
+ * addresses; in both cases, set the size to TARGET_PAGE_SIZE.
+ */
+ if (!riscv_cpu_cfg(env)->pmp || !pmp_get_num_rules(env)) {
return TARGET_PAGE_SIZE;
- } else {
+ }
+
+ for (i = 0; i < MAX_RISCV_PMPS; i++) {
+ if (pmp_get_a_field(env->pmp_state.pmp[i].cfg_reg) == PMP_AMATCH_OFF) {
+ continue;
+ }
+
+ pmp_sa = env->pmp_state.addr[i].sa;
+ pmp_ea = env->pmp_state.addr[i].ea;
+
/*
- * At this point we have a tlb_size that is the smallest possible size
- * That fits within a TARGET_PAGE_SIZE and the PMP region.
- *
- * If the size is less then TARGET_PAGE_SIZE we drop the size to 1.
- * This means the result isn't cached in the TLB and is only used for
- * a single translation.
+ * Only the first PMP entry that covers (all or part of) the TLB
+ * page really matters:
+ * If it covers the whole page, set the size to TARGET_PAGE_SIZE.
+ * The following PMP entries have lower priority and will not affect
+ * the allowed RWX privs of the page.
+ * If it covers only part of the TLB page, set the size to 1, since
+ * the allowed RWX privs for the covered region may differ from those
+ * of the other regions of the page.
*/
- return 1;
+ if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
+ return TARGET_PAGE_SIZE;
+ } else if ((pmp_sa >= tlb_sa && pmp_sa <= tlb_ea) ||
+ (pmp_ea >= tlb_sa && pmp_ea <= tlb_ea)) {
+ return 1;
+ }
}
+
+ /*
+ * If no PMP entry covers any region of the TLB page, similar to the above
+ * case that there is no PMP rule, PMP will provide the same option
+ * (disallow accesses or allow default RWX privs) for the whole page,
+ * set the size to TARGET_PAGE_SIZE.
+ */
+ return TARGET_PAGE_SIZE;
}
/*
diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
index b296ea1fc6..0a7e24750b 100644
--- a/target/riscv/pmp.h
+++ b/target/riscv/pmp.h
@@ -76,8 +76,7 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
target_ulong size, pmp_priv_t privs,
pmp_priv_t *allowed_privs,
target_ulong mode);
-target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
- target_ulong tlb_sa, target_ulong tlb_ea);
+target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr);
void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index);
void pmp_update_rule_nums(CPURISCVState *env);
uint32_t pmp_get_num_rules(CPURISCVState *env);
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 2/7] target/riscv: Move pmp_get_tlb_size apart from get_physical_address_pmp
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
2023-04-22 13:03 ` [PATCH v4 1/7] target/riscv: Update pmp_get_tlb_size() Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
2023-04-28 2:30 ` LIU Zhiwei
2023-04-22 13:03 ` [PATCH v4 3/7] target/riscv: Flush TLB when pmpaddr is updated Weiwei Li
` (4 subsequent siblings)
6 siblings, 1 reply; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
pmp_get_tlb_size can be separated from get_physical_address_pmp and is only
needed when ret == TRANSLATE_SUCCESS.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
---
target/riscv/cpu_helper.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 075fc0538a..83c9699a6d 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -676,14 +676,11 @@ void riscv_cpu_set_mode(CPURISCVState *env, target_ulong newpriv)
*
* @env: CPURISCVState
* @prot: The returned protection attributes
- * @tlb_size: TLB page size containing addr. It could be modified after PMP
- * permission checking. NULL if not set TLB page for addr.
* @addr: The physical address to be checked permission
* @access_type: The type of MMU access
* @mode: Indicates current privilege level.
*/
-static int get_physical_address_pmp(CPURISCVState *env, int *prot,
- target_ulong *tlb_size, hwaddr addr,
+static int get_physical_address_pmp(CPURISCVState *env, int *prot, hwaddr addr,
int size, MMUAccessType access_type,
int mode)
{
@@ -703,9 +700,6 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot,
}
*prot = pmp_priv_to_page_prot(pmp_priv);
- if (tlb_size != NULL) {
- *tlb_size = pmp_get_tlb_size(env, addr);
- }
return TRANSLATE_SUCCESS;
}
@@ -905,7 +899,7 @@ restart:
}
int pmp_prot;
- int pmp_ret = get_physical_address_pmp(env, &pmp_prot, NULL, pte_addr,
+ int pmp_ret = get_physical_address_pmp(env, &pmp_prot, pte_addr,
sizeof(target_ulong),
MMU_DATA_LOAD, PRV_S);
if (pmp_ret != TRANSLATE_SUCCESS) {
@@ -1300,8 +1294,9 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
prot &= prot2;
if (ret == TRANSLATE_SUCCESS) {
- ret = get_physical_address_pmp(env, &prot_pmp, &tlb_size, pa,
+ ret = get_physical_address_pmp(env, &prot_pmp, pa,
size, access_type, mode);
+ tlb_size = pmp_get_tlb_size(env, pa);
qemu_log_mask(CPU_LOG_MMU,
"%s PMP address=" HWADDR_FMT_plx " ret %d prot"
@@ -1333,8 +1328,9 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
__func__, address, ret, pa, prot);
if (ret == TRANSLATE_SUCCESS) {
- ret = get_physical_address_pmp(env, &prot_pmp, &tlb_size, pa,
+ ret = get_physical_address_pmp(env, &prot_pmp, pa,
size, access_type, mode);
+ tlb_size = pmp_get_tlb_size(env, pa);
qemu_log_mask(CPU_LOG_MMU,
"%s PMP address=" HWADDR_FMT_plx " ret %d prot"
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 3/7] target/riscv: Flush TLB when pmpaddr is updated
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
2023-04-22 13:03 ` [PATCH v4 1/7] target/riscv: Update pmp_get_tlb_size() Weiwei Li
2023-04-22 13:03 ` [PATCH v4 2/7] target/riscv: Move pmp_get_tlb_size apart from get_physical_address_pmp Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
2023-04-22 13:03 ` [PATCH v4 4/7] target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes Weiwei Li
` (3 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
TLB should be flushed not only for pmpcfg csr changes, but also for
pmpaddr csr changes.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
---
target/riscv/pmp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index ad20a319c1..9ae3bfea22 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -537,6 +537,7 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
if (!pmp_is_locked(env, addr_index)) {
env->pmp_state.pmp[addr_index].addr_reg = val;
pmp_update_rule(env, addr_index);
+ tlb_flush(env_cpu(env));
} else {
qemu_log_mask(LOG_GUEST_ERROR,
"ignoring pmpaddr write - locked\n");
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 4/7] target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
` (2 preceding siblings ...)
2023-04-22 13:03 ` [PATCH v4 3/7] target/riscv: Flush TLB when pmpaddr is updated Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
2023-04-22 13:03 ` [PATCH v4 5/7] accel/tcg: Uncache the host address for instruction fetch when tlb size < 1 Weiwei Li
` (2 subsequent siblings)
6 siblings, 0 replies; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
The TLB needn't be flushed when pmpcfg/pmpaddr don't actually change.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
---
target/riscv/pmp.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 9ae3bfea22..0cef9e3e1d 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -26,7 +26,7 @@
#include "trace.h"
#include "exec/exec-all.h"
-static void pmp_write_cfg(CPURISCVState *env, uint32_t addr_index,
+static bool pmp_write_cfg(CPURISCVState *env, uint32_t addr_index,
uint8_t val);
static uint8_t pmp_read_cfg(CPURISCVState *env, uint32_t addr_index);
static void pmp_update_rule(CPURISCVState *env, uint32_t pmp_index);
@@ -83,7 +83,7 @@ static inline uint8_t pmp_read_cfg(CPURISCVState *env, uint32_t pmp_index)
* Accessor to set the cfg reg for a specific PMP/HART
* Bounds checks and relevant lock bit.
*/
-static void pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
+static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
{
if (pmp_index < MAX_RISCV_PMPS) {
bool locked = true;
@@ -119,14 +119,17 @@ static void pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
if (locked) {
qemu_log_mask(LOG_GUEST_ERROR, "ignoring pmpcfg write - locked\n");
- } else {
+ } else if (env->pmp_state.pmp[pmp_index].cfg_reg != val) {
env->pmp_state.pmp[pmp_index].cfg_reg = val;
pmp_update_rule(env, pmp_index);
+ return true;
}
} else {
qemu_log_mask(LOG_GUEST_ERROR,
"ignoring pmpcfg write - out of bounds\n");
}
+
+ return false;
}
static void pmp_decode_napot(target_ulong a, target_ulong *sa,
@@ -477,16 +480,19 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
int i;
uint8_t cfg_val;
int pmpcfg_nums = 2 << riscv_cpu_mxl(env);
+ bool modified = false;
trace_pmpcfg_csr_write(env->mhartid, reg_index, val);
for (i = 0; i < pmpcfg_nums; i++) {
cfg_val = (val >> 8 * i) & 0xff;
- pmp_write_cfg(env, (reg_index * 4) + i, cfg_val);
+ modified |= pmp_write_cfg(env, (reg_index * 4) + i, cfg_val);
}
/* If PMP permission of any addr has been changed, flush TLB pages. */
- tlb_flush(env_cpu(env));
+ if (modified) {
+ tlb_flush(env_cpu(env));
+ }
}
@@ -535,9 +541,11 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
}
if (!pmp_is_locked(env, addr_index)) {
- env->pmp_state.pmp[addr_index].addr_reg = val;
- pmp_update_rule(env, addr_index);
- tlb_flush(env_cpu(env));
+ if (env->pmp_state.pmp[addr_index].addr_reg != val) {
+ env->pmp_state.pmp[addr_index].addr_reg = val;
+ pmp_update_rule(env, addr_index);
+ tlb_flush(env_cpu(env));
+ }
} else {
qemu_log_mask(LOG_GUEST_ERROR,
"ignoring pmpaddr write - locked\n");
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 5/7] accel/tcg: Uncache the host address for instruction fetch when tlb size < 1
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
` (3 preceding siblings ...)
2023-04-22 13:03 ` [PATCH v4 4/7] target/riscv: Flush TLB only when pmpcfg/pmpaddr really changes Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
2023-04-28 22:10 ` Richard Henderson
2023-04-22 13:03 ` [PATCH v4 6/7] target/riscv: Make the short cut really work in pmp_hart_has_privs Weiwei Li
2023-04-22 13:03 ` [PATCH v4 7/7] target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write Weiwei Li
6 siblings, 1 reply; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
When a PMP entry overlaps only part of the page, we set the tlb_size to 1, which
causes the address in the TLB entry to be set with TLB_INVALID_MASK, so the next
access will again go through tlb_fill. However, this does not work in
tb_gen_code() => get_page_addr_code_hostp(): the TLB host address will be
cached there, and the following instructions can use this host address directly,
which may bypass the PMP-related checks.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1542
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e984a98dc4..efa0cb67c9 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1696,6 +1696,11 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
if (p == NULL) {
return -1;
}
+
+ if (full->lg_page_size < TARGET_PAGE_BITS) {
+ return -1;
+ }
+
if (hostp) {
*hostp = p;
}
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 6/7] target/riscv: Make the short cut really work in pmp_hart_has_privs
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
` (4 preceding siblings ...)
2023-04-22 13:03 ` [PATCH v4 5/7] accel/tcg: Uncache the host address for instruction fetch when tlb size < 1 Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
2023-04-22 13:03 ` [PATCH v4 7/7] target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write Weiwei Li
6 siblings, 0 replies; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
Return the result directly as a short cut, since we needn't do the
following checks on the PMP entries when there are no PMP rules.
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
---
target/riscv/pmp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index 0cef9e3e1d..b0f1b0a715 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -319,6 +319,7 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
allowed_privs, mode)) {
ret = MAX_RISCV_PMPS;
}
+ return ret;
}
if (size == 0) {
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v4 7/7] target/riscv: Separate pmp_update_rule() in pmpcfg_csr_write
2023-04-22 13:03 [PATCH v4 0/7] target/riscv: Fix PMP related problem Weiwei Li
` (5 preceding siblings ...)
2023-04-22 13:03 ` [PATCH v4 6/7] target/riscv: Make the short cut really work in pmp_hart_has_privs Weiwei Li
@ 2023-04-22 13:03 ` Weiwei Li
6 siblings, 0 replies; 11+ messages in thread
From: Weiwei Li @ 2023-04-22 13:03 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
richard.henderson, wangjunqiang, lazyparser, Weiwei Li
Use pmp_update_rule_addr() and pmp_update_rule_nums() separately to
update rule nums only once for each pmpcfg_csr_write. Then we can also
move tlb_flush into pmp_update_rule_nums().
Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
---
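Note: with patch 4 and this patch applied, pmpcfg_csr_write() ends up looking
roughly like the following (condensed from the diffs in this patch and patch 4;
the trace call and the out-of-bounds/locked error paths are omitted, so this is
a sketch rather than the literal code):

    void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
                          target_ulong val)
    {
        int i;
        uint8_t cfg_val;
        int pmpcfg_nums = 2 << riscv_cpu_mxl(env);
        bool modified = false;

        for (i = 0; i < pmpcfg_nums; i++) {
            cfg_val = (val >> 8 * i) & 0xff;
            /* pmp_write_cfg() now only refreshes the entry's sa/ea via
             * pmp_update_rule_addr() and reports whether the cfg changed. */
            modified |= pmp_write_cfg(env, (reg_index * 4) + i, cfg_val);
        }

        if (modified) {
            /* Recount num_rules and flush the TLB only once per CSR write. */
            pmp_update_rule_nums(env);
        }
    }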
target/riscv/pmp.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
index b0f1b0a715..5b765a9807 100644
--- a/target/riscv/pmp.c
+++ b/target/riscv/pmp.c
@@ -121,7 +121,7 @@ static bool pmp_write_cfg(CPURISCVState *env, uint32_t pmp_index, uint8_t val)
qemu_log_mask(LOG_GUEST_ERROR, "ignoring pmpcfg write - locked\n");
} else if (env->pmp_state.pmp[pmp_index].cfg_reg != val) {
env->pmp_state.pmp[pmp_index].cfg_reg = val;
- pmp_update_rule(env, pmp_index);
+ pmp_update_rule_addr(env, pmp_index);
return true;
}
} else {
@@ -207,6 +207,8 @@ void pmp_update_rule_nums(CPURISCVState *env)
env->pmp_state.num_rules++;
}
}
+
+ tlb_flush(env_cpu(env));
}
/*
@@ -492,7 +494,7 @@ void pmpcfg_csr_write(CPURISCVState *env, uint32_t reg_index,
/* If PMP permission of any addr has been changed, flush TLB pages. */
if (modified) {
- tlb_flush(env_cpu(env));
+ pmp_update_rule_nums(env);
}
}
@@ -545,7 +547,6 @@ void pmpaddr_csr_write(CPURISCVState *env, uint32_t addr_index,
if (env->pmp_state.pmp[addr_index].addr_reg != val) {
env->pmp_state.pmp[addr_index].addr_reg = val;
pmp_update_rule(env, addr_index);
- tlb_flush(env_cpu(env));
}
} else {
qemu_log_mask(LOG_GUEST_ERROR,
--
2.25.1
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH v4 1/7] target/riscv: Update pmp_get_tlb_size()
2023-04-22 13:03 ` [PATCH v4 1/7] target/riscv: Update pmp_get_tlb_size() Weiwei Li
@ 2023-04-28 2:23 ` LIU Zhiwei
0 siblings, 0 replies; 11+ messages in thread
From: LIU Zhiwei @ 2023-04-28 2:23 UTC (permalink / raw)
To: Weiwei Li, qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, richard.henderson,
wangjunqiang, lazyparser
On 2023/4/22 21:03, Weiwei Li wrote:
> The PMP entries before the matched PMP entry (including the matched entry itself)
> may cover only part of the TLB page, which may make different regions in
> that page allow different RWX privs. For example, with PMP0 (0x80000008~0x8000000F,
> R) and PMP1 (0x80001000~0x80001FFF, RWX), a write access to 0x80000000 will
> match PMP1.
Typo here.
Otherwise,
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Zhiwei
> However, we cannot cache the translation result in the TLB, since
> that would let a write access to 0x80000008 bypass the check of PMP0. So we
> should check all PMP entries, not just the matched one, in pmp_get_tlb_size()
> and set the tlb_size to 1 in this case.
> Set tlb_size to TARGET_PAGE_SIZE if PMP is not supported or there are no PMP rules.
>
> Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
> ---
> target/riscv/cpu_helper.c | 7 ++---
> target/riscv/pmp.c | 64 ++++++++++++++++++++++++++++++---------
> target/riscv/pmp.h | 3 +-
> 3 files changed, 52 insertions(+), 22 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 433ea529b0..075fc0538a 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -703,11 +703,8 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot,
> }
>
> *prot = pmp_priv_to_page_prot(pmp_priv);
> - if ((tlb_size != NULL) && pmp_index != MAX_RISCV_PMPS) {
> - target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
> - target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
> -
> - *tlb_size = pmp_get_tlb_size(env, pmp_index, tlb_sa, tlb_ea);
> + if (tlb_size != NULL) {
> + *tlb_size = pmp_get_tlb_size(env, addr);
> }
>
> return TRANSLATE_SUCCESS;
> diff --git a/target/riscv/pmp.c b/target/riscv/pmp.c
> index 1f5aca42e8..ad20a319c1 100644
> --- a/target/riscv/pmp.c
> +++ b/target/riscv/pmp.c
> @@ -601,28 +601,62 @@ target_ulong mseccfg_csr_read(CPURISCVState *env)
> }
>
> /*
> - * Calculate the TLB size if the start address or the end address of
> - * PMP entry is presented in the TLB page.
> + * Calculate the TLB size. If the PMP rules may make different regions in
> + * the TLB page of 'addr' allow different RWX privs, set the size to 1
> + * (to make the translation result uncached in the TLB and only be used for
> + * a single translation). Set the size to TARGET_PAGE_SIZE otherwise.
> */
> -target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> - target_ulong tlb_sa, target_ulong tlb_ea)
> +target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr)
> {
> - target_ulong pmp_sa = env->pmp_state.addr[pmp_index].sa;
> - target_ulong pmp_ea = env->pmp_state.addr[pmp_index].ea;
> + target_ulong pmp_sa;
> + target_ulong pmp_ea;
> + target_ulong tlb_sa = addr & ~(TARGET_PAGE_SIZE - 1);
> + target_ulong tlb_ea = tlb_sa + TARGET_PAGE_SIZE - 1;
> + int i;
>
> - if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
> + /*
> + * If PMP is not supported or there is no PMP rule, then either the allowed
> + * RWX privs of the page are not affected by PMP at all, or PMP provides the
> + * same option (disallow accesses or allow default RWX privs) for all
> + * addresses; in both cases, set the size to TARGET_PAGE_SIZE.
> + */
> + if (!riscv_cpu_cfg(env)->pmp || !pmp_get_num_rules(env)) {
> return TARGET_PAGE_SIZE;
> - } else {
> + }
> +
> + for (i = 0; i < MAX_RISCV_PMPS; i++) {
> + if (pmp_get_a_field(env->pmp_state.pmp[i].cfg_reg) == PMP_AMATCH_OFF) {
> + continue;
> + }
> +
> + pmp_sa = env->pmp_state.addr[i].sa;
> + pmp_ea = env->pmp_state.addr[i].ea;
> +
> /*
> - * At this point we have a tlb_size that is the smallest possible size
> - * That fits within a TARGET_PAGE_SIZE and the PMP region.
> - *
> - * If the size is less then TARGET_PAGE_SIZE we drop the size to 1.
> - * This means the result isn't cached in the TLB and is only used for
> - * a single translation.
> + * Only the first PMP entry that covers (all or part of) the TLB
> + * page really matters:
> + * If it covers the whole page, set the size to TARGET_PAGE_SIZE.
> + * The following PMP entries have lower priority and will not affect
> + * the allowed RWX privs of the page.
> + * If it covers only part of the TLB page, set the size to 1, since
> + * the allowed RWX privs for the covered region may differ from those
> + * of the other regions of the page.
> */
> - return 1;
> + if (pmp_sa <= tlb_sa && pmp_ea >= tlb_ea) {
> + return TARGET_PAGE_SIZE;
> + } else if ((pmp_sa >= tlb_sa && pmp_sa <= tlb_ea) ||
> + (pmp_ea >= tlb_sa && pmp_ea <= tlb_ea)) {
> + return 1;
> + }
> }
> +
> + /*
> + * If no PMP entry covers any region of the TLB page, similar to the above
> + * case that there is no PMP rule, PMP will provide the same option
> + * (disallow accesses or allow default RWX privs) for the whole page,
> + * set the size to TARGET_PAGE_SIZE.
> + */
> + return TARGET_PAGE_SIZE;
> }
>
> /*
> diff --git a/target/riscv/pmp.h b/target/riscv/pmp.h
> index b296ea1fc6..0a7e24750b 100644
> --- a/target/riscv/pmp.h
> +++ b/target/riscv/pmp.h
> @@ -76,8 +76,7 @@ int pmp_hart_has_privs(CPURISCVState *env, target_ulong addr,
> target_ulong size, pmp_priv_t privs,
> pmp_priv_t *allowed_privs,
> target_ulong mode);
> -target_ulong pmp_get_tlb_size(CPURISCVState *env, int pmp_index,
> - target_ulong tlb_sa, target_ulong tlb_ea);
> +target_ulong pmp_get_tlb_size(CPURISCVState *env, target_ulong addr);
> void pmp_update_rule_addr(CPURISCVState *env, uint32_t pmp_index);
> void pmp_update_rule_nums(CPURISCVState *env);
> uint32_t pmp_get_num_rules(CPURISCVState *env);
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v4 2/7] target/riscv: Move pmp_get_tlb_size apart from get_physical_address_pmp
2023-04-22 13:03 ` [PATCH v4 2/7] target/riscv: Move pmp_get_tlb_size apart from get_physical_address_pmp Weiwei Li
@ 2023-04-28 2:30 ` LIU Zhiwei
0 siblings, 0 replies; 11+ messages in thread
From: LIU Zhiwei @ 2023-04-28 2:30 UTC (permalink / raw)
To: Weiwei Li, qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, richard.henderson,
wangjunqiang, lazyparser
On 2023/4/22 21:03, Weiwei Li wrote:
> pmp_get_tlb_size can be separated from get_physical_address_pmp and is only
> needed when ret == TRANSLATE_SUCCESS.
>
> Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
> ---
> target/riscv/cpu_helper.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index 075fc0538a..83c9699a6d 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -676,14 +676,11 @@ void riscv_cpu_set_mode(CPURISCVState *env, target_ulong newpriv)
> *
> * @env: CPURISCVState
> * @prot: The returned protection attributes
> - * @tlb_size: TLB page size containing addr. It could be modified after PMP
> - * permission checking. NULL if not set TLB page for addr.
> * @addr: The physical address to be checked permission
> * @access_type: The type of MMU access
> * @mode: Indicates current privilege level.
> */
> -static int get_physical_address_pmp(CPURISCVState *env, int *prot,
> - target_ulong *tlb_size, hwaddr addr,
> +static int get_physical_address_pmp(CPURISCVState *env, int *prot, hwaddr addr,
> int size, MMUAccessType access_type,
> int mode)
> {
> @@ -703,9 +700,6 @@ static int get_physical_address_pmp(CPURISCVState *env, int *prot,
> }
>
> *prot = pmp_priv_to_page_prot(pmp_priv);
> - if (tlb_size != NULL) {
> - *tlb_size = pmp_get_tlb_size(env, addr);
> - }
>
> return TRANSLATE_SUCCESS;
> }
> @@ -905,7 +899,7 @@ restart:
> }
>
> int pmp_prot;
> - int pmp_ret = get_physical_address_pmp(env, &pmp_prot, NULL, pte_addr,
> + int pmp_ret = get_physical_address_pmp(env, &pmp_prot, pte_addr,
> sizeof(target_ulong),
> MMU_DATA_LOAD, PRV_S);
> if (pmp_ret != TRANSLATE_SUCCESS) {
> @@ -1300,8 +1294,9 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> prot &= prot2;
>
> if (ret == TRANSLATE_SUCCESS) {
> - ret = get_physical_address_pmp(env, &prot_pmp, &tlb_size, pa,
> + ret = get_physical_address_pmp(env, &prot_pmp, pa,
> size, access_type, mode);
> + tlb_size = pmp_get_tlb_size(env, pa);
>
> qemu_log_mask(CPU_LOG_MMU,
> "%s PMP address=" HWADDR_FMT_plx " ret %d prot"
> @@ -1333,8 +1328,9 @@ bool riscv_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
> __func__, address, ret, pa, prot);
>
> if (ret == TRANSLATE_SUCCESS) {
> - ret = get_physical_address_pmp(env, &prot_pmp, &tlb_size, pa,
> + ret = get_physical_address_pmp(env, &prot_pmp, pa,
> size, access_type, mode);
> + tlb_size = pmp_get_tlb_size(env, pa);
>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Zhiwei
> qemu_log_mask(CPU_LOG_MMU,
> "%s PMP address=" HWADDR_FMT_plx " ret %d prot"
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH v4 5/7] accel/tcg: Uncache the host address for instruction fetch when tlb size < 1
2023-04-22 13:03 ` [PATCH v4 5/7] accel/tcg: Uncache the host address for instruction fetch when tlb size < 1 Weiwei Li
@ 2023-04-28 22:10 ` Richard Henderson
0 siblings, 0 replies; 11+ messages in thread
From: Richard Henderson @ 2023-04-28 22:10 UTC (permalink / raw)
To: Weiwei Li, qemu-riscv, qemu-devel
Cc: palmer, alistair.francis, bin.meng, dbarboza, zhiwei_liu,
wangjunqiang, lazyparser
On 4/22/23 14:03, Weiwei Li wrote:
> When a PMP entry overlaps only part of the page, we set the tlb_size to 1, which
> causes the address in the TLB entry to be set with TLB_INVALID_MASK, so the next
> access will again go through tlb_fill. However, this does not work in
> tb_gen_code() => get_page_addr_code_hostp(): the TLB host address will be
> cached there, and the following instructions can use this host address directly,
> which may bypass the PMP-related checks.
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1542
>
> Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
> Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/cputlb.c | 5 +++++
> 1 file changed, 5 insertions(+)
Queueing this one patch to tcg-next.
r~
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index e984a98dc4..efa0cb67c9 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1696,6 +1696,11 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
> if (p == NULL) {
> return -1;
> }
> +
> + if (full->lg_page_size < TARGET_PAGE_BITS) {
> + return -1;
> + }
> +
> if (hostp) {
> *hostp = p;
> }
^ permalink raw reply [flat|nested] 11+ messages in thread