* [PATCH v5 0/5] Add ISA extension smcntrpmf support
@ 2024-02-28 18:51 Atish Patra
2024-02-28 18:51 ` [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs Atish Patra
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Atish Patra @ 2024-02-28 18:51 UTC (permalink / raw)
Cc: Alistair Francis, Bin Meng, Daniel Henrique Barboza, Liu Zhiwei,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
This patch series adds support for the RISC-V ISA extension smcntrpmf (cycle
and instret privilege mode filtering) [1]. It is based on Kevin's earlier work,
but improves on it by actually implementing privilege mode filtering through
tracking of privilege mode switches. This also enables privilege mode filtering
for the mhpmcounters; however, Smcntrpmf/Sscofpmf must be enabled to leverage
it. The series is also modified to report the raw instruction count, instead of
virtual CPU time derived from the instruction count, when icount is enabled.
The former seems to be the preferred approach for instruction counting on other
architectures as well. Please let me know if anybody thinks that's incorrect.
The series is also available at [2].
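To try it out, the extension can be enabled with something like
"-cpu rv64,sscofpmf=on,smcntrpmf=on" (the property names added in patch 2).
Below is a minimal, illustrative M-mode snippet for inhibiting cycle counting
in M-mode. It is not part of this series; the CSR number and the MINH bit
position are simply taken from the cpu_bits.h definitions in patch 3, and the
helper name is made up for the example:
/* Illustration only: program mcyclecfg (CSR 0x321) so that cycles spent
 * in M-mode are not counted. Assumes an RV64 M-mode environment and
 * GCC/Clang-style inline asm. */
#include <stdint.h>
static inline void mcyclecfg_inhibit_mmode(void)
{
    /* MCYCLECFG_BIT_MINH is bit 62 (see the cpu_bits.h hunk in patch 3). */
    uint64_t cfg = 1ULL << 62;
    /* mcyclecfg is CSR 0x321; use the numeric address in case the
     * assembler does not know the CSR name yet. */
    asm volatile("csrw 0x321, %0" : : "r"(cfg));
}
With that bit set, cycles spent in M-mode should no longer be reflected in
cycle/mcycle reads, while the other modes keep counting.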
Changes from v4->v5:
1. Rebased on top of master (158a054c4d1a).
2. Fixed a bug for VS<->HS transition.
Changes from v3->v4:
1. Fixed the ordering of the ISA extension names in isa_edata_arr.
2. Added RB tags.
Changes from v2->v3:
1. Fixed the rebasing error in PATCH2.
2. Added RB tags.
3. Addressed other review comments.
Changes from v1->v2:
1. Implemented actual mode filtering for both icount and host ticks mode.
2. Addressed comments in v1.
3. Added Kevin's personal email address.
[1] https://github.com/riscv/riscv-smcntrpmf
[2] https://github.com/atishp04/qemu/tree/smcntrpmf_v5
Atish Patra (2):
target/riscv: Fix the predicate functions for mhpmeventhX CSRs
target/riscv: Implement privilege mode filtering for cycle/instret
Kaiwen Xue (3):
target/riscv: Add cycle & instret privilege mode filtering properties
target/riscv: Add cycle & instret privilege mode filtering definitions
target/riscv: Add cycle & instret privilege mode filtering support
target/riscv/cpu.c | 2 +
target/riscv/cpu.h | 17 +++
target/riscv/cpu_bits.h | 34 ++++++
target/riscv/cpu_cfg.h | 1 +
target/riscv/cpu_helper.c | 17 ++-
target/riscv/csr.c | 251 ++++++++++++++++++++++++++++++--------
target/riscv/pmu.c | 64 ++++++++++
target/riscv/pmu.h | 2 +
8 files changed, 335 insertions(+), 53 deletions(-)
--
2.34.1
* [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs
2024-02-28 18:51 [PATCH v5 0/5] Add ISA extension smcntrpmf support Atish Patra
@ 2024-02-28 18:51 ` Atish Patra
2024-03-05 7:03 ` LIU Zhiwei
2024-02-28 18:51 ` [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties Atish Patra
` (3 subsequent siblings)
4 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2024-02-28 18:51 UTC (permalink / raw)
Cc: Daniel Henrique Barboza, Alistair Francis, Atish Patra, Bin Meng,
Liu Zhiwei, Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li,
kaiwenxue1
The mhpmeventhX CSRs are available only for RV32. The predicate function
should check that first, before checking the sscofpmf extension.
Fixes: 14664483457b ("target/riscv: Add sscofpmf extension support")
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 67 ++++++++++++++++++++++++++--------------------
1 file changed, 38 insertions(+), 29 deletions(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index d4e8ac13b90c..a3d979c4c72c 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -227,6 +227,15 @@ static RISCVException sscofpmf(CPURISCVState *env, int csrno)
return RISCV_EXCP_NONE;
}
+static RISCVException sscofpmf_32(CPURISCVState *env, int csrno)
+{
+ if (riscv_cpu_mxl(env) != MXL_RV32) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return sscofpmf(env, csrno);
+}
+
static RISCVException any(CPURISCVState *env, int csrno)
{
return RISCV_EXCP_NONE;
@@ -5035,91 +5044,91 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
write_mhpmevent },
- [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
- [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
+ [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
--
2.34.1
* [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties
2024-02-28 18:51 [PATCH v5 0/5] Add ISA extension smcntrpmf support Atish Patra
2024-02-28 18:51 ` [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs Atish Patra
@ 2024-02-28 18:51 ` Atish Patra
2024-03-04 17:24 ` Daniel Henrique Barboza
2024-03-05 7:01 ` LIU Zhiwei
2024-02-28 18:51 ` [PATCH v5 3/5] target/riscv: Add cycle & instret privilege mode filtering definitions Atish Patra
` (2 subsequent siblings)
4 siblings, 2 replies; 15+ messages in thread
From: Atish Patra @ 2024-02-28 18:51 UTC (permalink / raw)
Cc: Atish Patra, Alistair Francis, Bin Meng, Daniel Henrique Barboza,
Liu Zhiwei, Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li,
kaiwenxue1
From: Kaiwen Xue <kaiwenx@rivosinc.com>
This adds the properties for ISA extension smcntrpmf. Patches
implementing it will follow.
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
---
target/riscv/cpu.c | 2 ++
target/riscv/cpu_cfg.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 1b8d001d237f..f9d3c80597fc 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -169,6 +169,7 @@ const RISCVIsaExtData isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
+ ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
@@ -1447,6 +1448,7 @@ const char *riscv_get_misa_ext_description(uint32_t bit)
const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
/* Defaults for standard extensions */
MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
+ MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),
MULTI_EXT_CFG_BOOL("zifencei", ext_zifencei, true),
MULTI_EXT_CFG_BOOL("zicsr", ext_zicsr, true),
MULTI_EXT_CFG_BOOL("zihintntl", ext_zihintntl, true),
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index 833bf5821708..0828841445c5 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -73,6 +73,7 @@ struct RISCVCPUConfig {
bool ext_zihpm;
bool ext_smstateen;
bool ext_sstc;
+ bool ext_smcntrpmf;
bool ext_svadu;
bool ext_svinval;
bool ext_svnapot;
--
2.34.1
* [PATCH v5 3/5] target/riscv: Add cycle & instret privilege mode filtering definitions
2024-02-28 18:51 [PATCH v5 0/5] Add ISA extension smcntrpmf support Atish Patra
2024-02-28 18:51 ` [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs Atish Patra
2024-02-28 18:51 ` [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties Atish Patra
@ 2024-02-28 18:51 ` Atish Patra
2024-02-28 18:51 ` [PATCH v5 4/5] target/riscv: Add cycle & instret privilege mode filtering support Atish Patra
2024-02-28 18:51 ` [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret Atish Patra
4 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2024-02-28 18:51 UTC (permalink / raw)
Cc: Daniel Henrique Barboza, Atish Patra, Alistair Francis, Bin Meng,
Liu Zhiwei, Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li,
kaiwenxue1
From: Kaiwen Xue <kaiwenx@rivosinc.com>
This adds the definitions for ISA extension smcntrpmf.
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.h | 6 ++++++
target/riscv/cpu_bits.h | 29 +++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index f52dce78baa0..174e8ba8e847 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -338,6 +338,12 @@ struct CPUArchState {
target_ulong mcountinhibit;
+ /* PMU cycle & instret privilege mode filtering */
+ target_ulong mcyclecfg;
+ target_ulong mcyclecfgh;
+ target_ulong minstretcfg;
+ target_ulong minstretcfgh;
+
/* PMU counter state */
PMUCTRState pmu_ctrs[RV_MAX_MHPMCOUNTERS];
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index fc2068ee4dcf..e866c60a400c 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -394,6 +394,10 @@
/* Machine counter-inhibit register */
#define CSR_MCOUNTINHIBIT 0x320
+/* Machine counter configuration registers */
+#define CSR_MCYCLECFG 0x321
+#define CSR_MINSTRETCFG 0x322
+
#define CSR_MHPMEVENT3 0x323
#define CSR_MHPMEVENT4 0x324
#define CSR_MHPMEVENT5 0x325
@@ -424,6 +428,9 @@
#define CSR_MHPMEVENT30 0x33e
#define CSR_MHPMEVENT31 0x33f
+#define CSR_MCYCLECFGH 0x721
+#define CSR_MINSTRETCFGH 0x722
+
#define CSR_MHPMEVENT3H 0x723
#define CSR_MHPMEVENT4H 0x724
#define CSR_MHPMEVENT5H 0x725
@@ -878,6 +885,28 @@ typedef enum RISCVException {
/* PMU related bits */
#define MIE_LCOFIE (1 << IRQ_PMU_OVF)
+#define MCYCLECFG_BIT_MINH BIT_ULL(62)
+#define MCYCLECFGH_BIT_MINH BIT(30)
+#define MCYCLECFG_BIT_SINH BIT_ULL(61)
+#define MCYCLECFGH_BIT_SINH BIT(29)
+#define MCYCLECFG_BIT_UINH BIT_ULL(60)
+#define MCYCLECFGH_BIT_UINH BIT(28)
+#define MCYCLECFG_BIT_VSINH BIT_ULL(59)
+#define MCYCLECFGH_BIT_VSINH BIT(27)
+#define MCYCLECFG_BIT_VUINH BIT_ULL(58)
+#define MCYCLECFGH_BIT_VUINH BIT(26)
+
+#define MINSTRETCFG_BIT_MINH BIT_ULL(62)
+#define MINSTRETCFGH_BIT_MINH BIT(30)
+#define MINSTRETCFG_BIT_SINH BIT_ULL(61)
+#define MINSTRETCFGH_BIT_SINH BIT(29)
+#define MINSTRETCFG_BIT_UINH BIT_ULL(60)
+#define MINSTRETCFGH_BIT_UINH BIT(28)
+#define MINSTRETCFG_BIT_VSINH BIT_ULL(59)
+#define MINSTRETCFGH_BIT_VSINH BIT(27)
+#define MINSTRETCFG_BIT_VUINH BIT_ULL(58)
+#define MINSTRETCFGH_BIT_VUINH BIT(26)
+
#define MHPMEVENT_BIT_OF BIT_ULL(63)
#define MHPMEVENTH_BIT_OF BIT(31)
#define MHPMEVENT_BIT_MINH BIT_ULL(62)
--
2.34.1
* [PATCH v5 4/5] target/riscv: Add cycle & instret privilege mode filtering support
2024-02-28 18:51 [PATCH v5 0/5] Add ISA extension smcntrpmf support Atish Patra
` (2 preceding siblings ...)
2024-02-28 18:51 ` [PATCH v5 3/5] target/riscv: Add cycle & instret privilege mode filtering definitions Atish Patra
@ 2024-02-28 18:51 ` Atish Patra
2024-02-28 18:51 ` [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret Atish Patra
4 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2024-02-28 18:51 UTC (permalink / raw)
Cc: Atish Patra, Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Liu Zhiwei, Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li,
kaiwenxue1
From: Kaiwen Xue <kaiwenx@rivosinc.com>
QEMU only calculates dummy cycles and instructions, so there is no
actual means to stop the icount in QEMU. Hence, this patch merely adds
the functionality of accessing the cfg registers and has no actual
effect on the counting of the cycle and instret counters.
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
---
target/riscv/csr.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 88 insertions(+)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index a3d979c4c72c..ff9bac537593 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -236,6 +236,24 @@ static RISCVException sscofpmf_32(CPURISCVState *env, int csrno)
return sscofpmf(env, csrno);
}
+static RISCVException smcntrpmf(CPURISCVState *env, int csrno)
+{
+ if (!riscv_cpu_cfg(env)->ext_smcntrpmf) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException smcntrpmf_32(CPURISCVState *env, int csrno)
+{
+ if (riscv_cpu_mxl(env) != MXL_RV32) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return smcntrpmf(env, csrno);
+}
+
static RISCVException any(CPURISCVState *env, int csrno)
{
return RISCV_EXCP_NONE;
@@ -826,6 +844,62 @@ static RISCVException read_hpmcounterh(CPURISCVState *env, int csrno,
#else /* CONFIG_USER_ONLY */
+static RISCVException read_mcyclecfg(CPURISCVState *env, int csrno,
+ target_ulong *val)
+{
+ *val = env->mcyclecfg;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_mcyclecfg(CPURISCVState *env, int csrno,
+ target_ulong val)
+{
+ env->mcyclecfg = val;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException read_mcyclecfgh(CPURISCVState *env, int csrno,
+ target_ulong *val)
+{
+ *val = env->mcyclecfgh;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_mcyclecfgh(CPURISCVState *env, int csrno,
+ target_ulong val)
+{
+ env->mcyclecfgh = val;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException read_minstretcfg(CPURISCVState *env, int csrno,
+ target_ulong *val)
+{
+ *val = env->minstretcfg;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_minstretcfg(CPURISCVState *env, int csrno,
+ target_ulong val)
+{
+ env->minstretcfg = val;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException read_minstretcfgh(CPURISCVState *env, int csrno,
+ target_ulong *val)
+{
+ *val = env->minstretcfgh;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_minstretcfgh(CPURISCVState *env, int csrno,
+ target_ulong val)
+{
+ env->minstretcfgh = val;
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException read_mhpmevent(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -4985,6 +5059,13 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
write_mcountinhibit,
.min_priv_ver = PRIV_VERSION_1_11_0 },
+ [CSR_MCYCLECFG] = { "mcyclecfg", smcntrpmf, read_mcyclecfg,
+ write_mcyclecfg,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_MINSTRETCFG] = { "minstretcfg", smcntrpmf, read_minstretcfg,
+ write_minstretcfg,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+
[CSR_MHPMEVENT3] = { "mhpmevent3", any, read_mhpmevent,
write_mhpmevent },
[CSR_MHPMEVENT4] = { "mhpmevent4", any, read_mhpmevent,
@@ -5044,6 +5125,13 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
write_mhpmevent },
+ [CSR_MCYCLECFGH] = { "mcyclecfgh", smcntrpmf_32, read_mcyclecfgh,
+ write_mcyclecfgh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_MINSTRETCFGH] = { "minstretcfgh", smcntrpmf_32, read_minstretcfgh,
+ write_minstretcfgh,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+
[CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf_32, read_mhpmeventh,
write_mhpmeventh,
.min_priv_ver = PRIV_VERSION_1_12_0 },
--
2.34.1
* [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret
2024-02-28 18:51 [PATCH v5 0/5] Add ISA extension smcntrpmf support Atish Patra
` (3 preceding siblings ...)
2024-02-28 18:51 ` [PATCH v5 4/5] target/riscv: Add cycle & instret privilege mode filtering support Atish Patra
@ 2024-02-28 18:51 ` Atish Patra
2024-03-05 6:47 ` LIU Zhiwei
4 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2024-02-28 18:51 UTC (permalink / raw)
Cc: Daniel Henrique Barboza, Atish Patra, Alistair Francis, Bin Meng,
Liu Zhiwei, Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li,
kaiwenxue1
Privilege mode filtering can also be emulated for cycle/instret by
tracking host_ticks/icount during each privilege mode switch. This
patch implements that for both cycle/instret and mhpmcounters. The
first one requires Smcntrpmf while the other one requires Sscofpmf
to be enabled.
Cycle/instret are still computed using host ticks when icount is not
enabled. Otherwise, they are computed using the raw icount, which is
more accurate in icount mode.
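As a rough illustration of that accounting (a simplified sketch only, not the
exact code below; PRV_* and MCYCLECFG_BIT_* are the existing/added QEMU
definitions, and the per-V=1 buckets are omitted): each privilege mode gets its
own bucket, a mode switch charges the elapsed count to the mode being left, and
a counter read sums the buckets whose inhibit bits are clear.
/* Simplified sketch of the per-mode accounting, for illustration only. */
static void sketch_update_priv(uint64_t *counter, uint64_t *counter_prev,
                               int oldpriv, int newpriv, uint64_t now)
{
    /* Charge the count since the last switch to the mode being left. */
    counter[oldpriv] += now - counter_prev[oldpriv];
    /* Start a new accounting window for the mode being entered. */
    counter_prev[newpriv] = now;
}
static uint64_t sketch_read_cycle(const uint64_t *counter, uint64_t cfg)
{
    uint64_t val = 0;
    /* Only modes whose inhibit bit is clear contribute to the result. */
    if (!(cfg & MCYCLECFG_BIT_MINH)) {
        val += counter[PRV_M];
    }
    if (!(cfg & MCYCLECFG_BIT_SINH)) {
        val += counter[PRV_S];
    }
    if (!(cfg & MCYCLECFG_BIT_UINH)) {
        val += counter[PRV_U];
    }
    return val;
}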
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.h | 11 +++++
target/riscv/cpu_bits.h | 5 ++
target/riscv/cpu_helper.c | 17 ++++++-
target/riscv/csr.c | 96 ++++++++++++++++++++++++++++++---------
target/riscv/pmu.c | 64 ++++++++++++++++++++++++++
target/riscv/pmu.h | 2 +
6 files changed, 171 insertions(+), 24 deletions(-)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 174e8ba8e847..9e21d7f7d635 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -157,6 +157,15 @@ typedef struct PMUCTRState {
target_ulong irq_overflow_left;
} PMUCTRState;
+typedef struct PMUFixedCtrState {
+ /* Track cycle and icount for each privilege mode */
+ uint64_t counter[4];
+ uint64_t counter_prev[4];
+ /* Track cycle and icount for each privilege mode when V = 1 */
+ uint64_t counter_virt[2];
+ uint64_t counter_virt_prev[2];
+} PMUFixedCtrState;
+
struct CPUArchState {
target_ulong gpr[32];
target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
@@ -353,6 +362,8 @@ struct CPUArchState {
/* PMU event selector configured values for RV32 */
target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
+ PMUFixedCtrState pmu_fixed_ctrs[2];
+
target_ulong sscratch;
target_ulong mscratch;
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index e866c60a400c..5fe349e313dc 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -920,6 +920,11 @@ typedef enum RISCVException {
#define MHPMEVENT_BIT_VUINH BIT_ULL(58)
#define MHPMEVENTH_BIT_VUINH BIT(26)
+#define MHPMEVENT_FILTER_MASK (MHPMEVENT_BIT_MINH | \
+ MHPMEVENT_BIT_SINH | \
+ MHPMEVENT_BIT_UINH | \
+ MHPMEVENT_BIT_VSINH | \
+ MHPMEVENT_BIT_VUINH)
#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
#define MHPMEVENT_IDX_MASK 0xFFFFF
#define MHPMEVENT_SSCOF_RESVD 16
diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index d462d95ee165..33965d843d46 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -718,8 +718,21 @@ void riscv_cpu_set_mode(CPURISCVState *env, target_ulong newpriv)
{
g_assert(newpriv <= PRV_M && newpriv != PRV_RESERVED);
- if (icount_enabled() && newpriv != env->priv) {
- riscv_itrigger_update_priv(env);
+ /*
+ * Invoke the cycle/instret update on privilege mode changes, or on a
+ * VS->HS mode transition (where the SPV bit must be set) or an HS->VS
+ * mode transition (where virt_enabled must be set). In both of those
+ * cases, priv will be S mode only.
+ */
+ if (newpriv != env->priv ||
+ (env->priv == PRV_S && newpriv == PRV_S &&
+ (env->virt_enabled || get_field(env->hstatus, HSTATUS_SPV)))) {
+ if (icount_enabled()) {
+ riscv_itrigger_update_priv(env);
+ riscv_pmu_icount_update_priv(env, newpriv);
+ } else {
+ riscv_pmu_cycle_update_priv(env, newpriv);
+ }
}
/* tlb_flush is unnecessary as mode is contained in mmu_idx */
env->priv = newpriv;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index ff9bac537593..482e212c5f74 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -788,32 +788,16 @@ static RISCVException write_vcsr(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+#if defined(CONFIG_USER_ONLY)
/* User Timers and Counters */
static target_ulong get_ticks(bool shift)
{
- int64_t val;
- target_ulong result;
-
-#if !defined(CONFIG_USER_ONLY)
- if (icount_enabled()) {
- val = icount_get();
- } else {
- val = cpu_get_host_ticks();
- }
-#else
- val = cpu_get_host_ticks();
-#endif
-
- if (shift) {
- result = val >> 32;
- } else {
- result = val;
- }
+ int64_t val = cpu_get_host_ticks();
+ target_ulong result = shift ? val >> 32 : val;
return result;
}
-#if defined(CONFIG_USER_ONLY)
static RISCVException read_time(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -952,6 +936,71 @@ static RISCVException write_mhpmeventh(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static target_ulong riscv_pmu_ctr_get_fixed_counters_val(CPURISCVState *env,
+ int counter_idx,
+ bool upper_half)
+{
+ uint64_t curr_val = 0;
+ target_ulong result = 0;
+ uint64_t *counter_arr = icount_enabled() ? env->pmu_fixed_ctrs[1].counter :
+ env->pmu_fixed_ctrs[0].counter;
+ uint64_t *counter_arr_virt = icount_enabled() ?
+ env->pmu_fixed_ctrs[1].counter_virt :
+ env->pmu_fixed_ctrs[0].counter_virt;
+ uint64_t cfg_val = 0;
+
+ if (counter_idx == 0) {
+ cfg_val = upper_half ? ((uint64_t)env->mcyclecfgh << 32) :
+ env->mcyclecfg;
+ } else if (counter_idx == 2) {
+ cfg_val = upper_half ? ((uint64_t)env->minstretcfgh << 32) :
+ env->minstretcfg;
+ } else {
+ cfg_val = upper_half ?
+ ((uint64_t)env->mhpmeventh_val[counter_idx] << 32) :
+ env->mhpmevent_val[counter_idx];
+ cfg_val &= MHPMEVENT_FILTER_MASK;
+ }
+
+ if (!cfg_val) {
+ if (icount_enabled()) {
+ curr_val = icount_get_raw();
+ } else {
+ curr_val = cpu_get_host_ticks();
+ }
+ goto done;
+ }
+
+ if (!(cfg_val & MCYCLECFG_BIT_MINH)) {
+ curr_val += counter_arr[PRV_M];
+ }
+
+ if (!(cfg_val & MCYCLECFG_BIT_SINH)) {
+ curr_val += counter_arr[PRV_S];
+ }
+
+ if (!(cfg_val & MCYCLECFG_BIT_UINH)) {
+ curr_val += counter_arr[PRV_U];
+ }
+
+ if (!(cfg_val & MCYCLECFG_BIT_VSINH)) {
+ curr_val += counter_arr_virt[PRV_S];
+ }
+
+ if (!(cfg_val & MCYCLECFG_BIT_VUINH)) {
+ curr_val += counter_arr_virt[PRV_U];
+ }
+
+done:
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ result = upper_half ? curr_val >> 32 : curr_val;
+ } else {
+ result = curr_val;
+ }
+
+ return result;
+}
+
static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
target_ulong val)
{
@@ -962,7 +1011,8 @@ static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
counter->mhpmcounter_val = val;
if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
- counter->mhpmcounter_prev = get_ticks(false);
+ counter->mhpmcounter_prev = riscv_pmu_ctr_get_fixed_counters_val(env,
+ ctr_idx, false);
if (ctr_idx > 2) {
if (riscv_cpu_mxl(env) == MXL_RV32) {
mhpmctr_val = mhpmctr_val |
@@ -990,7 +1040,8 @@ static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
- counter->mhpmcounterh_prev = get_ticks(true);
+ counter->mhpmcounterh_prev = riscv_pmu_ctr_get_fixed_counters_val(env,
+ ctr_idx, true);
if (ctr_idx > 2) {
riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
}
@@ -1031,7 +1082,8 @@ static RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
*/
if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
- *val = get_ticks(upper_half) - ctr_prev + ctr_val;
+ *val = riscv_pmu_ctr_get_fixed_counters_val(env, ctr_idx, upper_half) -
+ ctr_prev + ctr_val;
} else {
*val = ctr_val;
}
diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
index 0e7d58b8a5c2..37309ff64cb6 100644
--- a/target/riscv/pmu.c
+++ b/target/riscv/pmu.c
@@ -19,6 +19,7 @@
#include "qemu/osdep.h"
#include "qemu/log.h"
#include "qemu/error-report.h"
+#include "qemu/timer.h"
#include "cpu.h"
#include "pmu.h"
#include "sysemu/cpu-timers.h"
@@ -176,6 +177,69 @@ static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
return 0;
}
+void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong newpriv)
+{
+ uint64_t delta;
+ uint64_t *counter_arr;
+ uint64_t *counter_arr_prev;
+ uint64_t current_icount = icount_get_raw();
+
+ if (env->virt_enabled) {
+ counter_arr = env->pmu_fixed_ctrs[1].counter_virt;
+ counter_arr_prev = env->pmu_fixed_ctrs[1].counter_virt_prev;
+ } else {
+ counter_arr = env->pmu_fixed_ctrs[1].counter;
+ counter_arr_prev = env->pmu_fixed_ctrs[1].counter_prev;
+ }
+
+ if (newpriv != env->priv) {
+ delta = current_icount - counter_arr_prev[env->priv];
+ counter_arr_prev[newpriv] = current_icount;
+ } else {
+ delta = current_icount - counter_arr_prev[env->priv];
+ if (env->virt_enabled) {
+ /* HS->VS transition. The previous value should correspond to HS. */
+ env->pmu_fixed_ctrs[1].counter_prev[PRV_S] = current_icount;
+ } else if (get_field(env->hstatus, HSTATUS_SPV)) {
+ /* VS->HS transition. The previous value should correspond to VS. */
+ env->pmu_fixed_ctrs[1].counter_virt_prev[PRV_S] = current_icount;
+ }
+ }
+
+ counter_arr[env->priv] += delta;
+}
+
+void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong newpriv)
+{
+ uint64_t delta;
+ uint64_t *counter_arr;
+ uint64_t *counter_arr_prev;
+ uint64_t current_host_ticks = cpu_get_host_ticks();
+
+ if (env->virt_enabled) {
+ counter_arr = env->pmu_fixed_ctrs[0].counter_virt;
+ counter_arr_prev = env->pmu_fixed_ctrs[0].counter_virt_prev;
+ } else {
+ counter_arr = env->pmu_fixed_ctrs[0].counter;
+ counter_arr_prev = env->pmu_fixed_ctrs[0].counter_prev;
+ }
+
+ if (newpriv != env->priv) {
+ delta = current_host_ticks - counter_arr_prev[env->priv];
+ counter_arr_prev[newpriv] = current_host_ticks;
+ } else {
+ delta = current_host_ticks - counter_arr_prev[env->priv];
+ if (env->virt_enabled) {
+ env->pmu_fixed_ctrs[0].counter_prev[PRV_S] = current_host_ticks;
+ } else if (get_field(env->hstatus, HSTATUS_SPV)) {
+ env->pmu_fixed_ctrs[0].counter_virt_prev[PRV_S] =
+ current_host_ticks;
+ }
+ }
+
+ counter_arr[env->priv] += delta;
+}
+
int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
{
uint32_t ctr_idx;
diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
index 505fc850d38e..50de6031a730 100644
--- a/target/riscv/pmu.h
+++ b/target/riscv/pmu.h
@@ -31,3 +31,5 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
void riscv_pmu_generate_fdt_node(void *fdt, uint32_t cmask, char *pmu_name);
int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
uint32_t ctr_idx);
+void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong newpriv);
+void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong newpriv);
--
2.34.1
* Re: [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties
2024-02-28 18:51 ` [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties Atish Patra
@ 2024-03-04 17:24 ` Daniel Henrique Barboza
2024-03-05 7:01 ` LIU Zhiwei
1 sibling, 0 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2024-03-04 17:24 UTC (permalink / raw)
To: Atish Patra
Cc: Alistair Francis, Bin Meng, Liu Zhiwei, Palmer Dabbelt,
qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 2/28/24 15:51, Atish Patra wrote:
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> This adds the properties for ISA extension smcntrpmf. Patches
> implementing it will follow.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> ---
Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> target/riscv/cpu.c | 2 ++
> target/riscv/cpu_cfg.h | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1b8d001d237f..f9d3c80597fc 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -169,6 +169,7 @@ const RISCVIsaExtData isa_edata_arr[] = {
> ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
> ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> + ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
> ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
> ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
> @@ -1447,6 +1448,7 @@ const char *riscv_get_misa_ext_description(uint32_t bit)
> const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
> /* Defaults for standard extensions */
> MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
> + MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),
> MULTI_EXT_CFG_BOOL("zifencei", ext_zifencei, true),
> MULTI_EXT_CFG_BOOL("zicsr", ext_zicsr, true),
> MULTI_EXT_CFG_BOOL("zihintntl", ext_zihintntl, true),
> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> index 833bf5821708..0828841445c5 100644
> --- a/target/riscv/cpu_cfg.h
> +++ b/target/riscv/cpu_cfg.h
> @@ -73,6 +73,7 @@ struct RISCVCPUConfig {
> bool ext_zihpm;
> bool ext_smstateen;
> bool ext_sstc;
> + bool ext_smcntrpmf;
> bool ext_svadu;
> bool ext_svinval;
> bool ext_svnapot;
* Re: [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret
2024-02-28 18:51 ` [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret Atish Patra
@ 2024-03-05 6:47 ` LIU Zhiwei
2024-03-07 9:25 ` Atish Patra
0 siblings, 1 reply; 15+ messages in thread
From: LIU Zhiwei @ 2024-03-05 6:47 UTC (permalink / raw)
To: Atish Patra
Cc: Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 2024/2/29 2:51, Atish Patra wrote:
> Privilege mode filtering can also be emulated for cycle/instret by
> tracking host_ticks/icount during each privilege mode switch. This
> patch implements that for both cycle/instret and mhpmcounters. The
> first one requires Smcntrpmf while the other one requires Sscofpmf
> to be enabled.
>
> The cycle/instret are still computed using host ticks when icount
> is not enabled. Otherwise, they are computed using raw icount which
> is more accurate in icount mode.
>
> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.h | 11 +++++
> target/riscv/cpu_bits.h | 5 ++
> target/riscv/cpu_helper.c | 17 ++++++-
> target/riscv/csr.c | 96 ++++++++++++++++++++++++++++++---------
> target/riscv/pmu.c | 64 ++++++++++++++++++++++++++
> target/riscv/pmu.h | 2 +
> 6 files changed, 171 insertions(+), 24 deletions(-)
>
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 174e8ba8e847..9e21d7f7d635 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -157,6 +157,15 @@ typedef struct PMUCTRState {
> target_ulong irq_overflow_left;
> } PMUCTRState;
>
> +typedef struct PMUFixedCtrState {
> + /* Track cycle and icount for each privilege mode */
> + uint64_t counter[4];
> + uint64_t counter_prev[4];
> + /* Track cycle and icount for each privilege mode when V = 1*/
> + uint64_t counter_virt[2];
> + uint64_t counter_virt_prev[2];
> +} PMUFixedCtrState;
> +
> struct CPUArchState {
> target_ulong gpr[32];
> target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
> @@ -353,6 +362,8 @@ struct CPUArchState {
> /* PMU event selector configured values for RV32 */
> target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
>
> + PMUFixedCtrState pmu_fixed_ctrs[2];
> +
> target_ulong sscratch;
> target_ulong mscratch;
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index e866c60a400c..5fe349e313dc 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -920,6 +920,11 @@ typedef enum RISCVException {
> #define MHPMEVENT_BIT_VUINH BIT_ULL(58)
> #define MHPMEVENTH_BIT_VUINH BIT(26)
>
> +#define MHPMEVENT_FILTER_MASK (MHPMEVENT_BIT_MINH | \
> + MHPMEVENT_BIT_SINH | \
> + MHPMEVENT_BIT_UINH | \
> + MHPMEVENT_BIT_VSINH | \
> + MHPMEVENT_BIT_VUINH)
> #define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
> #define MHPMEVENT_IDX_MASK 0xFFFFF
> #define MHPMEVENT_SSCOF_RESVD 16
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index d462d95ee165..33965d843d46 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -718,8 +718,21 @@ void riscv_cpu_set_mode(CPURISCVState *env, target_ulong newpriv)
> {
> g_assert(newpriv <= PRV_M && newpriv != PRV_RESERVED);
>
> - if (icount_enabled() && newpriv != env->priv) {
> - riscv_itrigger_update_priv(env);
> + /*
> + * Invoke the cycle/instret update on privilege mode changes, or on a
> + * VS->HS mode transition (where the SPV bit must be set) or an HS->VS
> + * mode transition (where virt_enabled must be set). In both of those
> + * cases, priv will be S mode only.
> + */
> + if (newpriv != env->priv ||
> + (env->priv == PRV_S && newpriv == PRV_S &&
> + (env->virt_enabled || get_field(env->hstatus, HSTATUS_SPV)))) {
> + if (icount_enabled()) {
> + riscv_itrigger_update_priv(env);
> + riscv_pmu_icount_update_priv(env, newpriv);
> + } else {
> + riscv_pmu_cycle_update_priv(env, newpriv);
> + }
> }
> /* tlb_flush is unnecessary as mode is contained in mmu_idx */
> env->priv = newpriv;
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index ff9bac537593..482e212c5f74 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -788,32 +788,16 @@ static RISCVException write_vcsr(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +#if defined(CONFIG_USER_ONLY)
> /* User Timers and Counters */
> static target_ulong get_ticks(bool shift)
> {
> - int64_t val;
> - target_ulong result;
> -
> -#if !defined(CONFIG_USER_ONLY)
> - if (icount_enabled()) {
> - val = icount_get();
> - } else {
> - val = cpu_get_host_ticks();
> - }
> -#else
> - val = cpu_get_host_ticks();
> -#endif
> -
> - if (shift) {
> - result = val >> 32;
> - } else {
> - result = val;
> - }
> + int64_t val = cpu_get_host_ticks();
> + target_ulong result = shift ? val >> 32 : val;
>
> return result;
> }
>
> -#if defined(CONFIG_USER_ONLY)
> static RISCVException read_time(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -952,6 +936,71 @@ static RISCVException write_mhpmeventh(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static target_ulong riscv_pmu_ctr_get_fixed_counters_val(CPURISCVState *env,
> + int counter_idx,
> + bool upper_half)
> +{
> + uint64_t curr_val = 0;
> + target_ulong result = 0;
> + uint64_t *counter_arr = icount_enabled() ? env->pmu_fixed_ctrs[1].counter :
> + env->pmu_fixed_ctrs[0].counter;
> + uint64_t *counter_arr_virt = icount_enabled() ?
> + env->pmu_fixed_ctrs[1].counter_virt :
> + env->pmu_fixed_ctrs[0].counter_virt;
> + uint64_t cfg_val = 0;
> +
> + if (counter_idx == 0) {
> + cfg_val = upper_half ? ((uint64_t)env->mcyclecfgh << 32) :
> + env->mcyclecfg;
> + } else if (counter_idx == 2) {
> + cfg_val = upper_half ? ((uint64_t)env->minstretcfgh << 32) :
> + env->minstretcfg;
> + } else {
> + cfg_val = upper_half ?
> + ((uint64_t)env->mhpmeventh_val[counter_idx] << 32) :
> + env->mhpmevent_val[counter_idx];
> + cfg_val &= MHPMEVENT_FILTER_MASK;
> + }
> +
> + if (!cfg_val) {
> + if (icount_enabled()) {
> + curr_val = icount_get_raw();
> + } else {
> + curr_val = cpu_get_host_ticks();
> + }
> + goto done;
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_MINH)) {
> + curr_val += counter_arr[PRV_M];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_SINH)) {
> + curr_val += counter_arr[PRV_S];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_UINH)) {
> + curr_val += counter_arr[PRV_U];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_VSINH)) {
> + curr_val += counter_arr_virt[PRV_S];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_VUINH)) {
> + curr_val += counter_arr_virt[PRV_U];
> + }
> +
> +done:
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + result = upper_half ? curr_val >> 32 : curr_val;
> + } else {
> + result = curr_val;
> + }
> +
> + return result;
> +}
> +
> static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> target_ulong val)
> {
> @@ -962,7 +1011,8 @@ static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> counter->mhpmcounter_val = val;
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> - counter->mhpmcounter_prev = get_ticks(false);
> + counter->mhpmcounter_prev = riscv_pmu_ctr_get_fixed_counters_val(env,
> + ctr_idx, false);
> if (ctr_idx > 2) {
> if (riscv_cpu_mxl(env) == MXL_RV32) {
> mhpmctr_val = mhpmctr_val |
> @@ -990,7 +1040,8 @@ static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
> mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> - counter->mhpmcounterh_prev = get_ticks(true);
> + counter->mhpmcounterh_prev = riscv_pmu_ctr_get_fixed_counters_val(env,
> + ctr_idx, true);
> if (ctr_idx > 2) {
> riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> }
> @@ -1031,7 +1082,8 @@ static RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
> */
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> - *val = get_ticks(upper_half) - ctr_prev + ctr_val;
> + *val = riscv_pmu_ctr_get_fixed_counters_val(env, ctr_idx, upper_half) -
> + ctr_prev + ctr_val;
> } else {
> *val = ctr_val;
> }
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 0e7d58b8a5c2..37309ff64cb6 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,6 +19,7 @@
> #include "qemu/osdep.h"
> #include "qemu/log.h"
> #include "qemu/error-report.h"
> +#include "qemu/timer.h"
> #include "cpu.h"
> #include "pmu.h"
> #include "sysemu/cpu-timers.h"
> @@ -176,6 +177,69 @@ static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> return 0;
> }
>
> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong newpriv)
> +{
> + uint64_t delta;
> + uint64_t *counter_arr;
> + uint64_t *counter_arr_prev;
> + uint64_t current_icount = icount_get_raw();
> +
> + if (env->virt_enabled) {
> + counter_arr = env->pmu_fixed_ctrs[1].counter_virt;
> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_virt_prev;
> + } else {
> + counter_arr = env->pmu_fixed_ctrs[1].counter;
> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_prev;
> + }
> +
> + if (newpriv != env->priv) {
> + delta = current_icount - counter_arr_prev[env->priv];
> + counter_arr_prev[newpriv] = current_icount;
> + } else {
> + delta = current_icount - counter_arr_prev[env->priv];
> + if (env->virt_enabled) {
> + /* HS->VS transition.The previous value should correspond to HS. */
Hi Atish,
Does env->virt_enabled ensure an HS->VS transition?
I think a VS->VS transition also keeps env->virt_enabled true, in which
case the previous value should correspond to VS.
Thanks,
Zhiwei
> + env->pmu_fixed_ctrs[1].counter_prev[PRV_S] = current_icount;
> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
> + /* VS->HS transition.The previous value should correspond to VS. */
> + env->pmu_fixed_ctrs[1].counter_virt_prev[PRV_S] = current_icount;
> + }
> + }
> +
> + counter_arr[env->priv] += delta;
> +}
> +
> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong newpriv)
> +{
> + uint64_t delta;
> + uint64_t *counter_arr;
> + uint64_t *counter_arr_prev;
> + uint64_t current_host_ticks = cpu_get_host_ticks();
> +
> + if (env->virt_enabled) {
> + counter_arr = env->pmu_fixed_ctrs[0].counter_virt;
> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_virt_prev;
> + } else {
> + counter_arr = env->pmu_fixed_ctrs[0].counter;
> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_prev;
> + }
> +
> + if (newpriv != env->priv) {
> + delta = current_host_ticks - counter_arr_prev[env->priv];
> + counter_arr_prev[newpriv] = current_host_ticks;
> + } else {
> + delta = current_host_ticks - counter_arr_prev[env->priv];
> + if (env->virt_enabled) {
> + env->pmu_fixed_ctrs[0].counter_prev[PRV_S] = current_host_ticks;
> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
> + env->pmu_fixed_ctrs[0].counter_virt_prev[PRV_S] =
> + current_host_ticks;
> + }
> + }
> +
> + counter_arr[env->priv] += delta;
> +}
> +
> int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> {
> uint32_t ctr_idx;
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 505fc850d38e..50de6031a730 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -31,3 +31,5 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> void riscv_pmu_generate_fdt_node(void *fdt, uint32_t cmask, char *pmu_name);
> int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> uint32_t ctr_idx);
> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong newpriv);
> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong newpriv);
* Re: [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties
2024-02-28 18:51 ` [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties Atish Patra
2024-03-04 17:24 ` Daniel Henrique Barboza
@ 2024-03-05 7:01 ` LIU Zhiwei
2024-03-07 9:27 ` Atish Patra
1 sibling, 1 reply; 15+ messages in thread
From: LIU Zhiwei @ 2024-03-05 7:01 UTC (permalink / raw)
To: Atish Patra
Cc: Alistair Francis, Bin Meng, Daniel Henrique Barboza,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 2024/2/29 2:51, Atish Patra wrote:
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> This adds the properties for ISA extension smcntrpmf. Patches
> implementing it will follow.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> ---
> target/riscv/cpu.c | 2 ++
> target/riscv/cpu_cfg.h | 1 +
> 2 files changed, 3 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1b8d001d237f..f9d3c80597fc 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -169,6 +169,7 @@ const RISCVIsaExtData isa_edata_arr[] = {
> ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
> ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> + ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
> ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
> ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
> @@ -1447,6 +1448,7 @@ const char *riscv_get_misa_ext_description(uint32_t bit)
> const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
> /* Defaults for standard extensions */
> MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
> + MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),
> MULTI_EXT_CFG_BOOL("zifencei", ext_zifencei, true),
> MULTI_EXT_CFG_BOOL("zicsr", ext_zicsr, true),
> MULTI_EXT_CFG_BOOL("zihintntl", ext_zihintntl, true),
We should not add the config option for users before the feature has
been implemented, for bisect reasons.
Thanks,
Zhiwei
> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> index 833bf5821708..0828841445c5 100644
> --- a/target/riscv/cpu_cfg.h
> +++ b/target/riscv/cpu_cfg.h
> @@ -73,6 +73,7 @@ struct RISCVCPUConfig {
> bool ext_zihpm;
> bool ext_smstateen;
> bool ext_sstc;
> + bool ext_smcntrpmf;
> bool ext_svadu;
> bool ext_svinval;
> bool ext_svnapot;
* Re: [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs
2024-02-28 18:51 ` [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs Atish Patra
@ 2024-03-05 7:03 ` LIU Zhiwei
0 siblings, 0 replies; 15+ messages in thread
From: LIU Zhiwei @ 2024-03-05 7:03 UTC (permalink / raw)
To: Atish Patra
Cc: Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 2024/2/29 2:51, Atish Patra wrote:
> mhpmeventhX CSRs are available for RV32. The predicate function
> should check that first before checking sscofpmf extension.
>
> Fixes: 14664483457b ("target/riscv: Add sscofpmf extension support")
> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Zhiwei
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/csr.c | 67 ++++++++++++++++++++++++++--------------------
> 1 file changed, 38 insertions(+), 29 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index d4e8ac13b90c..a3d979c4c72c 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -227,6 +227,15 @@ static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> return RISCV_EXCP_NONE;
> }
>
> +static RISCVException sscofpmf_32(CPURISCVState *env, int csrno)
> +{
> + if (riscv_cpu_mxl(env) != MXL_RV32) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return sscofpmf(env, csrno);
> +}
> +
> static RISCVException any(CPURISCVState *env, int csrno)
> {
> return RISCV_EXCP_NONE;
> @@ -5035,91 +5044,91 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MHPMEVENT31] = { "mhpmevent31", any, read_mhpmevent,
> write_mhpmevent },
>
> - [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT3H] = { "mhpmevent3h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT4H] = { "mhpmevent4h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT5H] = { "mhpmevent5h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT6H] = { "mhpmevent6h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT7H] = { "mhpmevent7h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT8H] = { "mhpmevent8h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT9H] = { "mhpmevent9h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT10H] = { "mhpmevent10h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT11H] = { "mhpmevent11h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT12H] = { "mhpmevent12h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT13H] = { "mhpmevent13h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT14H] = { "mhpmevent14h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT15H] = { "mhpmevent15h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT16H] = { "mhpmevent16h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT17H] = { "mhpmevent17h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT18H] = { "mhpmevent18h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT19H] = { "mhpmevent19h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT20H] = { "mhpmevent20h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT21H] = { "mhpmevent21h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT22H] = { "mhpmevent22h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT23H] = { "mhpmevent23h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT24H] = { "mhpmevent24h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT25H] = { "mhpmevent25h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT26H] = { "mhpmevent26h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT27H] = { "mhpmevent27h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT28H] = { "mhpmevent28h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT29H] = { "mhpmevent29h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT30H] = { "mhpmevent30h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
> - [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf, read_mhpmeventh,
> + [CSR_MHPMEVENT31H] = { "mhpmevent31h", sscofpmf_32, read_mhpmeventh,
> write_mhpmeventh,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
>
* Re: [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret
2024-03-05 6:47 ` LIU Zhiwei
@ 2024-03-07 9:25 ` Atish Patra
2024-03-20 4:54 ` Alistair Francis
0 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2024-03-07 9:25 UTC (permalink / raw)
To: LIU Zhiwei
Cc: Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 3/4/24 22:47, LIU Zhiwei wrote:
>
> On 2024/2/29 2:51, Atish Patra wrote:
>> Privilege mode filtering can also be emulated for cycle/instret by
>> tracking host_ticks/icount during each privilege mode switch. This
>> patch implements that for both cycle/instret and mhpmcounters. The
>> first one requires Smcntrpmf while the other one requires Sscofpmf
>> to be enabled.
>>
>> The cycle/instret are still computed using host ticks when icount
>> is not enabled. Otherwise, they are computed using raw icount which
>> is more accurate in icount mode.
>>
>> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> ---
>> target/riscv/cpu.h | 11 +++++
>> target/riscv/cpu_bits.h | 5 ++
>> target/riscv/cpu_helper.c | 17 ++++++-
>> target/riscv/csr.c | 96 ++++++++++++++++++++++++++++++---------
>> target/riscv/pmu.c | 64 ++++++++++++++++++++++++++
>> target/riscv/pmu.h | 2 +
>> 6 files changed, 171 insertions(+), 24 deletions(-)
>>
>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
>> index 174e8ba8e847..9e21d7f7d635 100644
>> --- a/target/riscv/cpu.h
>> +++ b/target/riscv/cpu.h
>> @@ -157,6 +157,15 @@ typedef struct PMUCTRState {
>> target_ulong irq_overflow_left;
>> } PMUCTRState;
>> +typedef struct PMUFixedCtrState {
>> + /* Track cycle and icount for each privilege mode */
>> + uint64_t counter[4];
>> + uint64_t counter_prev[4];
>> + /* Track cycle and icount for each privilege mode when V = 1*/
>> + uint64_t counter_virt[2];
>> + uint64_t counter_virt_prev[2];
>> +} PMUFixedCtrState;
>> +
>> struct CPUArchState {
>> target_ulong gpr[32];
>> target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
>> @@ -353,6 +362,8 @@ struct CPUArchState {
>> /* PMU event selector configured values for RV32 */
>> target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
>> + PMUFixedCtrState pmu_fixed_ctrs[2];
>> +
>> target_ulong sscratch;
>> target_ulong mscratch;
>> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
>> index e866c60a400c..5fe349e313dc 100644
>> --- a/target/riscv/cpu_bits.h
>> +++ b/target/riscv/cpu_bits.h
>> @@ -920,6 +920,11 @@ typedef enum RISCVException {
>> #define MHPMEVENT_BIT_VUINH BIT_ULL(58)
>> #define MHPMEVENTH_BIT_VUINH BIT(26)
>> +#define MHPMEVENT_FILTER_MASK (MHPMEVENT_BIT_MINH | \
>> + MHPMEVENT_BIT_SINH | \
>> + MHPMEVENT_BIT_UINH | \
>> + MHPMEVENT_BIT_VSINH | \
>> + MHPMEVENT_BIT_VUINH)
>> #define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
>> #define MHPMEVENT_IDX_MASK 0xFFFFF
>> #define MHPMEVENT_SSCOF_RESVD 16
>> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
>> index d462d95ee165..33965d843d46 100644
>> --- a/target/riscv/cpu_helper.c
>> +++ b/target/riscv/cpu_helper.c
>> @@ -718,8 +718,21 @@ void riscv_cpu_set_mode(CPURISCVState *env,
>> target_ulong newpriv)
>> {
>> g_assert(newpriv <= PRV_M && newpriv != PRV_RESERVED);
>> - if (icount_enabled() && newpriv != env->priv) {
>> - riscv_itrigger_update_priv(env);
>> + /*
>> + * Invoke the cycle/instret update when the privilege mode changes, or on a
>> + * VS->HS transition (where the SPV bit must be set) or an
>> + * HS->VS transition (where virt_enabled must be set).
>> + * In both of those cases, priv remains S mode.
>> + */
>> + if (newpriv != env->priv ||
>> + (env->priv == PRV_S && newpriv == PRV_S &&
>> + (env->virt_enabled || get_field(env->hstatus, HSTATUS_SPV)))) {
>> + if (icount_enabled()) {
>> + riscv_itrigger_update_priv(env);
>> + riscv_pmu_icount_update_priv(env, newpriv);
>> + } else {
>> + riscv_pmu_cycle_update_priv(env, newpriv);
>> + }
>> }
>> /* tlb_flush is unnecessary as mode is contained in mmu_idx */
>> env->priv = newpriv;
>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>> index ff9bac537593..482e212c5f74 100644
>> --- a/target/riscv/csr.c
>> +++ b/target/riscv/csr.c
>> @@ -788,32 +788,16 @@ static RISCVException write_vcsr(CPURISCVState
>> *env, int csrno,
>> return RISCV_EXCP_NONE;
>> }
>> +#if defined(CONFIG_USER_ONLY)
>> /* User Timers and Counters */
>> static target_ulong get_ticks(bool shift)
>> {
>> - int64_t val;
>> - target_ulong result;
>> -
>> -#if !defined(CONFIG_USER_ONLY)
>> - if (icount_enabled()) {
>> - val = icount_get();
>> - } else {
>> - val = cpu_get_host_ticks();
>> - }
>> -#else
>> - val = cpu_get_host_ticks();
>> -#endif
>> -
>> - if (shift) {
>> - result = val >> 32;
>> - } else {
>> - result = val;
>> - }
>> + int64_t val = cpu_get_host_ticks();
>> + target_ulong result = shift ? val >> 32 : val;
>> return result;
>> }
>> -#if defined(CONFIG_USER_ONLY)
>> static RISCVException read_time(CPURISCVState *env, int csrno,
>> target_ulong *val)
>> {
>> @@ -952,6 +936,71 @@ static RISCVException
>> write_mhpmeventh(CPURISCVState *env, int csrno,
>> return RISCV_EXCP_NONE;
>> }
>> +static target_ulong
>> riscv_pmu_ctr_get_fixed_counters_val(CPURISCVState *env,
>> + int
>> counter_idx,
>> + bool
>> upper_half)
>> +{
>> + uint64_t curr_val = 0;
>> + target_ulong result = 0;
>> + uint64_t *counter_arr = icount_enabled() ?
>> env->pmu_fixed_ctrs[1].counter :
>> + env->pmu_fixed_ctrs[0].counter;
>> + uint64_t *counter_arr_virt = icount_enabled() ?
>> + env->pmu_fixed_ctrs[1].counter_virt :
>> + env->pmu_fixed_ctrs[0].counter_virt;
>> + uint64_t cfg_val = 0;
>> +
>> + if (counter_idx == 0) {
>> + cfg_val = upper_half ? ((uint64_t)env->mcyclecfgh << 32) :
>> + env->mcyclecfg;
>> + } else if (counter_idx == 2) {
>> + cfg_val = upper_half ? ((uint64_t)env->minstretcfgh << 32) :
>> + env->minstretcfg;
>> + } else {
>> + cfg_val = upper_half ?
>> + ((uint64_t)env->mhpmeventh_val[counter_idx] << 32) :
>> + env->mhpmevent_val[counter_idx];
>> + cfg_val &= MHPMEVENT_FILTER_MASK;
>> + }
>> +
>> + if (!cfg_val) {
>> + if (icount_enabled()) {
>> + curr_val = icount_get_raw();
>> + } else {
>> + curr_val = cpu_get_host_ticks();
>> + }
>> + goto done;
>> + }
>> +
>> + if (!(cfg_val & MCYCLECFG_BIT_MINH)) {
>> + curr_val += counter_arr[PRV_M];
>> + }
>> +
>> + if (!(cfg_val & MCYCLECFG_BIT_SINH)) {
>> + curr_val += counter_arr[PRV_S];
>> + }
>> +
>> + if (!(cfg_val & MCYCLECFG_BIT_UINH)) {
>> + curr_val += counter_arr[PRV_U];
>> + }
>> +
>> + if (!(cfg_val & MCYCLECFG_BIT_VSINH)) {
>> + curr_val += counter_arr_virt[PRV_S];
>> + }
>> +
>> + if (!(cfg_val & MCYCLECFG_BIT_VUINH)) {
>> + curr_val += counter_arr_virt[PRV_U];
>> + }
>> +
>> +done:
>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>> + result = upper_half ? curr_val >> 32 : curr_val;
>> + } else {
>> + result = curr_val;
>> + }
>> +
>> + return result;
>> +}
>> +
>> static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
>> target_ulong val)
>> {
>> @@ -962,7 +1011,8 @@ static RISCVException
>> write_mhpmcounter(CPURISCVState *env, int csrno,
>> counter->mhpmcounter_val = val;
>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>> - counter->mhpmcounter_prev = get_ticks(false);
>> + counter->mhpmcounter_prev =
>> riscv_pmu_ctr_get_fixed_counters_val(env,
>> + ctr_idx, false);
>> if (ctr_idx > 2) {
>> if (riscv_cpu_mxl(env) == MXL_RV32) {
>> mhpmctr_val = mhpmctr_val |
>> @@ -990,7 +1040,8 @@ static RISCVException
>> write_mhpmcounterh(CPURISCVState *env, int csrno,
>> mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>> - counter->mhpmcounterh_prev = get_ticks(true);
>> + counter->mhpmcounterh_prev =
>> riscv_pmu_ctr_get_fixed_counters_val(env,
>> + ctr_idx, true);
>> if (ctr_idx > 2) {
>> riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
>> }
>> @@ -1031,7 +1082,8 @@ static RISCVException
>> riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
>> */
>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>> - *val = get_ticks(upper_half) - ctr_prev + ctr_val;
>> + *val = riscv_pmu_ctr_get_fixed_counters_val(env, ctr_idx,
>> upper_half) -
>> + ctr_prev + ctr_val;
>> } else {
>> *val = ctr_val;
>> }
>> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
>> index 0e7d58b8a5c2..37309ff64cb6 100644
>> --- a/target/riscv/pmu.c
>> +++ b/target/riscv/pmu.c
>> @@ -19,6 +19,7 @@
>> #include "qemu/osdep.h"
>> #include "qemu/log.h"
>> #include "qemu/error-report.h"
>> +#include "qemu/timer.h"
>> #include "cpu.h"
>> #include "pmu.h"
>> #include "sysemu/cpu-timers.h"
>> @@ -176,6 +177,69 @@ static int riscv_pmu_incr_ctr_rv64(RISCVCPU
>> *cpu, uint32_t ctr_idx)
>> return 0;
>> }
>> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
>> newpriv)
>> +{
>> + uint64_t delta;
>> + uint64_t *counter_arr;
>> + uint64_t *counter_arr_prev;
>> + uint64_t current_icount = icount_get_raw();
>> +
>> + if (env->virt_enabled) {
>> + counter_arr = env->pmu_fixed_ctrs[1].counter_virt;
>> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_virt_prev;
>> + } else {
>> + counter_arr = env->pmu_fixed_ctrs[1].counter;
>> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_prev;
>> + }
>> +
>> + if (newpriv != env->priv) {
>> + delta = current_icount - counter_arr_prev[env->priv];
>> + counter_arr_prev[newpriv] = current_icount;
>> + } else {
>> + delta = current_icount - counter_arr_prev[env->priv];
>> + if (env->virt_enabled) {
>> + /* HS->VS transition.The previous value should
>> correspond to HS. */
>
> Hi Atish,
>
> Does env->virt_enabled ensure an HS->VS transition?
>
As per my understanding, riscv_cpu_set_virt_enabled is always invoked
before riscv_cpu_set_mode. That means env->virt_enabled must already be set
before we enter this function. Let me know if I missed a scenario.
> I think VS->VS also keeps env->virt_enabled to be true, where the
> previous value should correspond to VS.
>
Yeah, good point. We should check HSTATUS_SPV here as well.
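Roughly something like the below for the same-priv case (an untested sketch,
just to illustrate qualifying the check with hstatus.SPV as well; whether it
covers every transition is exactly what the rest of this thread gets into):

    if (newpriv != env->priv) {
        delta = current_icount - counter_arr_prev[env->priv];
        counter_arr_prev[newpriv] = current_icount;
    } else {
        delta = current_icount - counter_arr_prev[env->priv];
        if (env->virt_enabled && get_field(env->hstatus, HSTATUS_SPV)) {
            /* HS->VS return: snapshot the HS (non-virt) base. */
            env->pmu_fixed_ctrs[1].counter_prev[PRV_S] = current_icount;
        } else if (!env->virt_enabled && get_field(env->hstatus, HSTATUS_SPV)) {
            /* VS->HS trap: snapshot the VS (virt) base. */
            env->pmu_fixed_ctrs[1].counter_virt_prev[PRV_S] = current_icount;
        }
        /* The intent is for e.g. a VS->VS switch to fall through here. */
    }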
> Thanks,
> Zhiwei
>
>> + env->pmu_fixed_ctrs[1].counter_prev[PRV_S] = current_icount;
>> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
>> + /* VS->HS transition.The previous value should
>> correspond to VS. */
>> + env->pmu_fixed_ctrs[1].counter_virt_prev[PRV_S] =
>> current_icount;
>> + }
>> + }
>> +
>> + counter_arr[env->priv] += delta;
>> +}
>> +
>> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong
>> newpriv)
>> +{
>> + uint64_t delta;
>> + uint64_t *counter_arr;
>> + uint64_t *counter_arr_prev;
>> + uint64_t current_host_ticks = cpu_get_host_ticks();
>> +
>> + if (env->virt_enabled) {
>> + counter_arr = env->pmu_fixed_ctrs[0].counter_virt;
>> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_virt_prev;
>> + } else {
>> + counter_arr = env->pmu_fixed_ctrs[0].counter;
>> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_prev;
>> + }
>> +
>> + if (newpriv != env->priv) {
>> + delta = current_host_ticks - counter_arr_prev[env->priv];
>> + counter_arr_prev[newpriv] = current_host_ticks;
>> + } else {
>> + delta = current_host_ticks - counter_arr_prev[env->priv];
>> + if (env->virt_enabled) {
>> + env->pmu_fixed_ctrs[0].counter_prev[PRV_S] =
>> current_host_ticks;
>> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
>> + env->pmu_fixed_ctrs[0].counter_virt_prev[PRV_S] =
>> + current_host_ticks;
>> + }
>> + }
>> +
>> + counter_arr[env->priv] += delta;
>> +}
>> +
>> int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx
>> event_idx)
>> {
>> uint32_t ctr_idx;
>> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
>> index 505fc850d38e..50de6031a730 100644
>> --- a/target/riscv/pmu.h
>> +++ b/target/riscv/pmu.h
>> @@ -31,3 +31,5 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum
>> riscv_pmu_event_idx event_idx);
>> void riscv_pmu_generate_fdt_node(void *fdt, uint32_t cmask, char
>> *pmu_name);
>> int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
>> uint32_t ctr_idx);
>> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
>> newpriv);
>> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong
>> newpriv);
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties
2024-03-05 7:01 ` LIU Zhiwei
@ 2024-03-07 9:27 ` Atish Patra
0 siblings, 0 replies; 15+ messages in thread
From: Atish Patra @ 2024-03-07 9:27 UTC (permalink / raw)
To: LIU Zhiwei
Cc: Alistair Francis, Bin Meng, Daniel Henrique Barboza,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 3/4/24 23:01, LIU Zhiwei wrote:
>
> On 2024/2/29 2:51, Atish Patra wrote:
>> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>>
>> This adds the properties for ISA extension smcntrpmf. Patches
>> implementing it will follow.
>>
>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
>> ---
>> target/riscv/cpu.c | 2 ++
>> target/riscv/cpu_cfg.h | 1 +
>> 2 files changed, 3 insertions(+)
>>
>> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
>> index 1b8d001d237f..f9d3c80597fc 100644
>> --- a/target/riscv/cpu.c
>> +++ b/target/riscv/cpu.c
>> @@ -169,6 +169,7 @@ const RISCVIsaExtData isa_edata_arr[] = {
>> ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
>> ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
>> ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
>> + ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
>> ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
>> ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
>> ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
>> @@ -1447,6 +1448,7 @@ const char
>> *riscv_get_misa_ext_description(uint32_t bit)
>> const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
>> /* Defaults for standard extensions */
>> MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
>> + MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),
>> MULTI_EXT_CFG_BOOL("zifencei", ext_zifencei, true),
>> MULTI_EXT_CFG_BOOL("zicsr", ext_zicsr, true),
>> MULTI_EXT_CFG_BOOL("zihintntl", ext_zihintntl, true),
>
> We should not add the configure option for users before the feature
> has been implemented, for bisect reasons.
>
OK. I will move it to the patch where the feature is actually implemented.
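Concretely, that just means dropping the hunk below from the properties patch
and squashing it into the patch that actually implements the extension
(a sketch of the relocated hunk only, not the final series layout):

    --- a/target/riscv/cpu.c
    +++ b/target/riscv/cpu.c
    @@ ... @@ const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
         MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
    +    MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),

That way a git bisect can never land on a commit that advertises smcntrpmf
without the implementation behind it.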
> Thanks,
> Zhiwei
>
>> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
>> index 833bf5821708..0828841445c5 100644
>> --- a/target/riscv/cpu_cfg.h
>> +++ b/target/riscv/cpu_cfg.h
>> @@ -73,6 +73,7 @@ struct RISCVCPUConfig {
>> bool ext_zihpm;
>> bool ext_smstateen;
>> bool ext_sstc;
>> + bool ext_smcntrpmf;
>> bool ext_svadu;
>> bool ext_svinval;
>> bool ext_svnapot;
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret
2024-03-07 9:25 ` Atish Patra
@ 2024-03-20 4:54 ` Alistair Francis
2024-03-20 7:21 ` Atish Patra
0 siblings, 1 reply; 15+ messages in thread
From: Alistair Francis @ 2024-03-20 4:54 UTC (permalink / raw)
To: Atish Patra
Cc: LIU Zhiwei, Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On Thu, Mar 7, 2024 at 7:26 PM Atish Patra <atishp@rivosinc.com> wrote:
>
>
> On 3/4/24 22:47, LIU Zhiwei wrote:
> >
> > On 2024/2/29 2:51, Atish Patra wrote:
> >> Privilege mode filtering can also be emulated for cycle/instret by
> >> tracking host_ticks/icount during each privilege mode switch. This
> >> patch implements that for both cycle/instret and mhpmcounters. The
> >> first one requires Smcntrpmf while the other one requires Sscofpmf
> >> to be enabled.
> >>
> >> The cycle/instret are still computed using host ticks when icount
> >> is not enabled. Otherwise, they are computed using raw icount which
> >> is more accurate in icount mode.
> >>
> >> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> >> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> >> ---
> >> target/riscv/cpu.h | 11 +++++
> >> target/riscv/cpu_bits.h | 5 ++
> >> target/riscv/cpu_helper.c | 17 ++++++-
> >> target/riscv/csr.c | 96 ++++++++++++++++++++++++++++++---------
> >> target/riscv/pmu.c | 64 ++++++++++++++++++++++++++
> >> target/riscv/pmu.h | 2 +
> >> 6 files changed, 171 insertions(+), 24 deletions(-)
> >>
> >> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> >> index 174e8ba8e847..9e21d7f7d635 100644
> >> --- a/target/riscv/cpu.h
> >> +++ b/target/riscv/cpu.h
> >> @@ -157,6 +157,15 @@ typedef struct PMUCTRState {
> >> target_ulong irq_overflow_left;
> >> } PMUCTRState;
> >> +typedef struct PMUFixedCtrState {
> >> + /* Track cycle and icount for each privilege mode */
> >> + uint64_t counter[4];
> >> + uint64_t counter_prev[4];
> >> + /* Track cycle and icount for each privilege mode when V = 1*/
> >> + uint64_t counter_virt[2];
> >> + uint64_t counter_virt_prev[2];
> >> +} PMUFixedCtrState;
> >> +
> >> struct CPUArchState {
> >> target_ulong gpr[32];
> >> target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
> >> @@ -353,6 +362,8 @@ struct CPUArchState {
> >> /* PMU event selector configured values for RV32 */
> >> target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> >> + PMUFixedCtrState pmu_fixed_ctrs[2];
> >> +
> >> target_ulong sscratch;
> >> target_ulong mscratch;
> >> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> >> index e866c60a400c..5fe349e313dc 100644
> >> --- a/target/riscv/cpu_bits.h
> >> +++ b/target/riscv/cpu_bits.h
> >> @@ -920,6 +920,11 @@ typedef enum RISCVException {
> >> #define MHPMEVENT_BIT_VUINH BIT_ULL(58)
> >> #define MHPMEVENTH_BIT_VUINH BIT(26)
> >> +#define MHPMEVENT_FILTER_MASK (MHPMEVENT_BIT_MINH | \
> >> + MHPMEVENT_BIT_SINH | \
> >> + MHPMEVENT_BIT_UINH | \
> >> + MHPMEVENT_BIT_VSINH | \
> >> + MHPMEVENT_BIT_VUINH)
> >> #define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
> >> #define MHPMEVENT_IDX_MASK 0xFFFFF
> >> #define MHPMEVENT_SSCOF_RESVD 16
> >> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> >> index d462d95ee165..33965d843d46 100644
> >> --- a/target/riscv/cpu_helper.c
> >> +++ b/target/riscv/cpu_helper.c
> >> @@ -718,8 +718,21 @@ void riscv_cpu_set_mode(CPURISCVState *env,
> >> target_ulong newpriv)
> >> {
> >> g_assert(newpriv <= PRV_M && newpriv != PRV_RESERVED);
> >> - if (icount_enabled() && newpriv != env->priv) {
> >> - riscv_itrigger_update_priv(env);
> >> + /*
> >> + * Invoke the cycle/instret update when the privilege mode changes, or on a
> >> + * VS->HS transition (where the SPV bit must be set) or an
> >> + * HS->VS transition (where virt_enabled must be set).
> >> + * In both of those cases, priv remains S mode.
> >> + */
> >> + if (newpriv != env->priv ||
> >> + (env->priv == PRV_S && newpriv == PRV_S &&
> >> + (env->virt_enabled || get_field(env->hstatus, HSTATUS_SPV)))) {
> >> + if (icount_enabled()) {
> >> + riscv_itrigger_update_priv(env);
> >> + riscv_pmu_icount_update_priv(env, newpriv);
> >> + } else {
> >> + riscv_pmu_cycle_update_priv(env, newpriv);
> >> + }
> >> }
> >> /* tlb_flush is unnecessary as mode is contained in mmu_idx */
> >> env->priv = newpriv;
> >> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> >> index ff9bac537593..482e212c5f74 100644
> >> --- a/target/riscv/csr.c
> >> +++ b/target/riscv/csr.c
> >> @@ -788,32 +788,16 @@ static RISCVException write_vcsr(CPURISCVState
> >> *env, int csrno,
> >> return RISCV_EXCP_NONE;
> >> }
> >> +#if defined(CONFIG_USER_ONLY)
> >> /* User Timers and Counters */
> >> static target_ulong get_ticks(bool shift)
> >> {
> >> - int64_t val;
> >> - target_ulong result;
> >> -
> >> -#if !defined(CONFIG_USER_ONLY)
> >> - if (icount_enabled()) {
> >> - val = icount_get();
> >> - } else {
> >> - val = cpu_get_host_ticks();
> >> - }
> >> -#else
> >> - val = cpu_get_host_ticks();
> >> -#endif
> >> -
> >> - if (shift) {
> >> - result = val >> 32;
> >> - } else {
> >> - result = val;
> >> - }
> >> + int64_t val = cpu_get_host_ticks();
> >> + target_ulong result = shift ? val >> 32 : val;
> >> return result;
> >> }
> >> -#if defined(CONFIG_USER_ONLY)
> >> static RISCVException read_time(CPURISCVState *env, int csrno,
> >> target_ulong *val)
> >> {
> >> @@ -952,6 +936,71 @@ static RISCVException
> >> write_mhpmeventh(CPURISCVState *env, int csrno,
> >> return RISCV_EXCP_NONE;
> >> }
> >> +static target_ulong
> >> riscv_pmu_ctr_get_fixed_counters_val(CPURISCVState *env,
> >> + int
> >> counter_idx,
> >> + bool
> >> upper_half)
> >> +{
> >> + uint64_t curr_val = 0;
> >> + target_ulong result = 0;
> >> + uint64_t *counter_arr = icount_enabled() ?
> >> env->pmu_fixed_ctrs[1].counter :
> >> + env->pmu_fixed_ctrs[0].counter;
> >> + uint64_t *counter_arr_virt = icount_enabled() ?
> >> + env->pmu_fixed_ctrs[1].counter_virt :
> >> + env->pmu_fixed_ctrs[0].counter_virt;
> >> + uint64_t cfg_val = 0;
> >> +
> >> + if (counter_idx == 0) {
> >> + cfg_val = upper_half ? ((uint64_t)env->mcyclecfgh << 32) :
> >> + env->mcyclecfg;
> >> + } else if (counter_idx == 2) {
> >> + cfg_val = upper_half ? ((uint64_t)env->minstretcfgh << 32) :
> >> + env->minstretcfg;
> >> + } else {
> >> + cfg_val = upper_half ?
> >> + ((uint64_t)env->mhpmeventh_val[counter_idx] << 32) :
> >> + env->mhpmevent_val[counter_idx];
> >> + cfg_val &= MHPMEVENT_FILTER_MASK;
> >> + }
> >> +
> >> + if (!cfg_val) {
> >> + if (icount_enabled()) {
> >> + curr_val = icount_get_raw();
> >> + } else {
> >> + curr_val = cpu_get_host_ticks();
> >> + }
> >> + goto done;
> >> + }
> >> +
> >> + if (!(cfg_val & MCYCLECFG_BIT_MINH)) {
> >> + curr_val += counter_arr[PRV_M];
> >> + }
> >> +
> >> + if (!(cfg_val & MCYCLECFG_BIT_SINH)) {
> >> + curr_val += counter_arr[PRV_S];
> >> + }
> >> +
> >> + if (!(cfg_val & MCYCLECFG_BIT_UINH)) {
> >> + curr_val += counter_arr[PRV_U];
> >> + }
> >> +
> >> + if (!(cfg_val & MCYCLECFG_BIT_VSINH)) {
> >> + curr_val += counter_arr_virt[PRV_S];
> >> + }
> >> +
> >> + if (!(cfg_val & MCYCLECFG_BIT_VUINH)) {
> >> + curr_val += counter_arr_virt[PRV_U];
> >> + }
> >> +
> >> +done:
> >> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> >> + result = upper_half ? curr_val >> 32 : curr_val;
> >> + } else {
> >> + result = curr_val;
> >> + }
> >> +
> >> + return result;
> >> +}
> >> +
> >> static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> >> target_ulong val)
> >> {
> >> @@ -962,7 +1011,8 @@ static RISCVException
> >> write_mhpmcounter(CPURISCVState *env, int csrno,
> >> counter->mhpmcounter_val = val;
> >> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >> - counter->mhpmcounter_prev = get_ticks(false);
> >> + counter->mhpmcounter_prev =
> >> riscv_pmu_ctr_get_fixed_counters_val(env,
> >> + ctr_idx, false);
> >> if (ctr_idx > 2) {
> >> if (riscv_cpu_mxl(env) == MXL_RV32) {
> >> mhpmctr_val = mhpmctr_val |
> >> @@ -990,7 +1040,8 @@ static RISCVException
> >> write_mhpmcounterh(CPURISCVState *env, int csrno,
> >> mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> >> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >> - counter->mhpmcounterh_prev = get_ticks(true);
> >> + counter->mhpmcounterh_prev =
> >> riscv_pmu_ctr_get_fixed_counters_val(env,
> >> + ctr_idx, true);
> >> if (ctr_idx > 2) {
> >> riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> >> }
> >> @@ -1031,7 +1082,8 @@ static RISCVException
> >> riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
> >> */
> >> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> >> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> >> - *val = get_ticks(upper_half) - ctr_prev + ctr_val;
> >> + *val = riscv_pmu_ctr_get_fixed_counters_val(env, ctr_idx,
> >> upper_half) -
> >> + ctr_prev + ctr_val;
> >> } else {
> >> *val = ctr_val;
> >> }
> >> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> >> index 0e7d58b8a5c2..37309ff64cb6 100644
> >> --- a/target/riscv/pmu.c
> >> +++ b/target/riscv/pmu.c
> >> @@ -19,6 +19,7 @@
> >> #include "qemu/osdep.h"
> >> #include "qemu/log.h"
> >> #include "qemu/error-report.h"
> >> +#include "qemu/timer.h"
> >> #include "cpu.h"
> >> #include "pmu.h"
> >> #include "sysemu/cpu-timers.h"
> >> @@ -176,6 +177,69 @@ static int riscv_pmu_incr_ctr_rv64(RISCVCPU
> >> *cpu, uint32_t ctr_idx)
> >> return 0;
> >> }
> >> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
> >> newpriv)
> >> +{
> >> + uint64_t delta;
> >> + uint64_t *counter_arr;
> >> + uint64_t *counter_arr_prev;
> >> + uint64_t current_icount = icount_get_raw();
> >> +
> >> + if (env->virt_enabled) {
> >> + counter_arr = env->pmu_fixed_ctrs[1].counter_virt;
> >> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_virt_prev;
> >> + } else {
> >> + counter_arr = env->pmu_fixed_ctrs[1].counter;
> >> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_prev;
> >> + }
> >> +
> >> + if (newpriv != env->priv) {
> >> + delta = current_icount - counter_arr_prev[env->priv];
> >> + counter_arr_prev[newpriv] = current_icount;
> >> + } else {
> >> + delta = current_icount - counter_arr_prev[env->priv];
> >> + if (env->virt_enabled) {
> >> + /* HS->VS transition.The previous value should
> >> correspond to HS. */
> >
> > Hi Atish,
> >
> > Does env->virt_enabled ensure an HS->VS transition?
> >
> As per my understanding, riscv_cpu_set_virt_enabled always invoked
> before riscv_cpu_set_mode.
In helper_mret() we call riscv_cpu_set_mode() before
riscv_cpu_set_virt_enabled().
I don't think there is any requirement on the order in which we call the functions.
>
> That means env->virt_enabled must be enabled before we enter into this
> function. Let me know if I missed
env->virt_enabled will be true if we are in HS or VS mode, but you
don't know the transition from just env->virt_enabled being set.
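For reference, the ordering difference is roughly this (a paraphrased sketch
of the two helpers, not the literal code):

    /* helper_mret(): privilege first, virtualization second */
    riscv_cpu_set_mode(env, prev_priv);
    riscv_cpu_set_virt_enabled(env, prev_virt);

    /* helper_sret(): virtualization first, privilege second */
    riscv_cpu_set_virt_enabled(env, prev_virt);
    riscv_cpu_set_mode(env, prev_priv);

So by the time riscv_cpu_set_mode() runs, env->virt_enabled may describe
either the old or the new context, depending on which helper took the
transition.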
Alistair
>
> a scenario.
>
>
> > I think VS->VS also keeps env->virt_enabled to be true, where the
> > previous value should correspond to VS.
> >
> yeah. good point. We should check HSTATUS_SPV here as well.
>
>
> > Thanks,
> > Zhiwei
> >
> >> + env->pmu_fixed_ctrs[1].counter_prev[PRV_S] = current_icount;
> >> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
> >> + /* VS->HS transition.The previous value should
> >> correspond to VS. */
> >> + env->pmu_fixed_ctrs[1].counter_virt_prev[PRV_S] =
> >> current_icount;
> >> + }
> >> + }
> >> +
> >> + counter_arr[env->priv] += delta;
> >> +}
> >> +
> >> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong
> >> newpriv)
> >> +{
> >> + uint64_t delta;
> >> + uint64_t *counter_arr;
> >> + uint64_t *counter_arr_prev;
> >> + uint64_t current_host_ticks = cpu_get_host_ticks();
> >> +
> >> + if (env->virt_enabled) {
> >> + counter_arr = env->pmu_fixed_ctrs[0].counter_virt;
> >> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_virt_prev;
> >> + } else {
> >> + counter_arr = env->pmu_fixed_ctrs[0].counter;
> >> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_prev;
> >> + }
> >> +
> >> + if (newpriv != env->priv) {
> >> + delta = current_host_ticks - counter_arr_prev[env->priv];
> >> + counter_arr_prev[newpriv] = current_host_ticks;
> >> + } else {
> >> + delta = current_host_ticks - counter_arr_prev[env->priv];
> >> + if (env->virt_enabled) {
> >> + env->pmu_fixed_ctrs[0].counter_prev[PRV_S] =
> >> current_host_ticks;
> >> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
> >> + env->pmu_fixed_ctrs[0].counter_virt_prev[PRV_S] =
> >> + current_host_ticks;
> >> + }
> >> + }
> >> +
> >> + counter_arr[env->priv] += delta;
> >> +}
> >> +
> >> int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx
> >> event_idx)
> >> {
> >> uint32_t ctr_idx;
> >> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> >> index 505fc850d38e..50de6031a730 100644
> >> --- a/target/riscv/pmu.h
> >> +++ b/target/riscv/pmu.h
> >> @@ -31,3 +31,5 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum
> >> riscv_pmu_event_idx event_idx);
> >> void riscv_pmu_generate_fdt_node(void *fdt, uint32_t cmask, char
> >> *pmu_name);
> >> int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> >> uint32_t ctr_idx);
> >> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
> >> newpriv);
> >> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong
> >> newpriv);
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret
2024-03-20 4:54 ` Alistair Francis
@ 2024-03-20 7:21 ` Atish Patra
2024-03-20 8:06 ` Alistair Francis
0 siblings, 1 reply; 15+ messages in thread
From: Atish Patra @ 2024-03-20 7:21 UTC (permalink / raw)
To: Alistair Francis
Cc: LIU Zhiwei, Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On 3/19/24 21:54, Alistair Francis wrote:
> On Thu, Mar 7, 2024 at 7:26 PM Atish Patra<atishp@rivosinc.com> wrote:
>>
>> On 3/4/24 22:47, LIU Zhiwei wrote:
>>> On 2024/2/29 2:51, Atish Patra wrote:
>>>> Privilege mode filtering can also be emulated for cycle/instret by
>>>> tracking host_ticks/icount during each privilege mode switch. This
>>>> patch implements that for both cycle/instret and mhpmcounters. The
>>>> first one requires Smcntrpmf while the other one requires Sscofpmf
>>>> to be enabled.
>>>>
>>>> The cycle/instret are still computed using host ticks when icount
>>>> is not enabled. Otherwise, they are computed using raw icount which
>>>> is more accurate in icount mode.
>>>>
>>>> Reviewed-by: Daniel Henrique Barboza<dbarboza@ventanamicro.com>
>>>> Signed-off-by: Atish Patra<atishp@rivosinc.com>
>>>> ---
>>>> target/riscv/cpu.h | 11 +++++
>>>> target/riscv/cpu_bits.h | 5 ++
>>>> target/riscv/cpu_helper.c | 17 ++++++-
>>>> target/riscv/csr.c | 96 ++++++++++++++++++++++++++++++---------
>>>> target/riscv/pmu.c | 64 ++++++++++++++++++++++++++
>>>> target/riscv/pmu.h | 2 +
>>>> 6 files changed, 171 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
>>>> index 174e8ba8e847..9e21d7f7d635 100644
>>>> --- a/target/riscv/cpu.h
>>>> +++ b/target/riscv/cpu.h
>>>> @@ -157,6 +157,15 @@ typedef struct PMUCTRState {
>>>> target_ulong irq_overflow_left;
>>>> } PMUCTRState;
>>>> +typedef struct PMUFixedCtrState {
>>>> + /* Track cycle and icount for each privilege mode */
>>>> + uint64_t counter[4];
>>>> + uint64_t counter_prev[4];
>>>> + /* Track cycle and icount for each privilege mode when V = 1*/
>>>> + uint64_t counter_virt[2];
>>>> + uint64_t counter_virt_prev[2];
>>>> +} PMUFixedCtrState;
>>>> +
>>>> struct CPUArchState {
>>>> target_ulong gpr[32];
>>>> target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
>>>> @@ -353,6 +362,8 @@ struct CPUArchState {
>>>> /* PMU event selector configured values for RV32 */
>>>> target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
>>>> + PMUFixedCtrState pmu_fixed_ctrs[2];
>>>> +
>>>> target_ulong sscratch;
>>>> target_ulong mscratch;
>>>> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
>>>> index e866c60a400c..5fe349e313dc 100644
>>>> --- a/target/riscv/cpu_bits.h
>>>> +++ b/target/riscv/cpu_bits.h
>>>> @@ -920,6 +920,11 @@ typedef enum RISCVException {
>>>> #define MHPMEVENT_BIT_VUINH BIT_ULL(58)
>>>> #define MHPMEVENTH_BIT_VUINH BIT(26)
>>>> +#define MHPMEVENT_FILTER_MASK (MHPMEVENT_BIT_MINH | \
>>>> + MHPMEVENT_BIT_SINH | \
>>>> + MHPMEVENT_BIT_UINH | \
>>>> + MHPMEVENT_BIT_VSINH | \
>>>> + MHPMEVENT_BIT_VUINH)
>>>> #define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
>>>> #define MHPMEVENT_IDX_MASK 0xFFFFF
>>>> #define MHPMEVENT_SSCOF_RESVD 16
>>>> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
>>>> index d462d95ee165..33965d843d46 100644
>>>> --- a/target/riscv/cpu_helper.c
>>>> +++ b/target/riscv/cpu_helper.c
>>>> @@ -718,8 +718,21 @@ void riscv_cpu_set_mode(CPURISCVState *env,
>>>> target_ulong newpriv)
>>>> {
>>>> g_assert(newpriv <= PRV_M && newpriv != PRV_RESERVED);
>>>> - if (icount_enabled() && newpriv != env->priv) {
>>>> - riscv_itrigger_update_priv(env);
>>>> + /*
>>>> + * Invoke the cycle/instret update when the privilege mode changes, or on a
>>>> + * VS->HS transition (where the SPV bit must be set) or an
>>>> + * HS->VS transition (where virt_enabled must be set).
>>>> + * In both of those cases, priv remains S mode.
>>>> + */
>>>> + if (newpriv != env->priv ||
>>>> + (env->priv == PRV_S && newpriv == PRV_S &&
>>>> + (env->virt_enabled || get_field(env->hstatus, HSTATUS_SPV)))) {
>>>> + if (icount_enabled()) {
>>>> + riscv_itrigger_update_priv(env);
>>>> + riscv_pmu_icount_update_priv(env, newpriv);
>>>> + } else {
>>>> + riscv_pmu_cycle_update_priv(env, newpriv);
>>>> + }
>>>> }
>>>> /* tlb_flush is unnecessary as mode is contained in mmu_idx */
>>>> env->priv = newpriv;
>>>> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
>>>> index ff9bac537593..482e212c5f74 100644
>>>> --- a/target/riscv/csr.c
>>>> +++ b/target/riscv/csr.c
>>>> @@ -788,32 +788,16 @@ static RISCVException write_vcsr(CPURISCVState
>>>> *env, int csrno,
>>>> return RISCV_EXCP_NONE;
>>>> }
>>>> +#if defined(CONFIG_USER_ONLY)
>>>> /* User Timers and Counters */
>>>> static target_ulong get_ticks(bool shift)
>>>> {
>>>> - int64_t val;
>>>> - target_ulong result;
>>>> -
>>>> -#if !defined(CONFIG_USER_ONLY)
>>>> - if (icount_enabled()) {
>>>> - val = icount_get();
>>>> - } else {
>>>> - val = cpu_get_host_ticks();
>>>> - }
>>>> -#else
>>>> - val = cpu_get_host_ticks();
>>>> -#endif
>>>> -
>>>> - if (shift) {
>>>> - result = val >> 32;
>>>> - } else {
>>>> - result = val;
>>>> - }
>>>> + int64_t val = cpu_get_host_ticks();
>>>> + target_ulong result = shift ? val >> 32 : val;
>>>> return result;
>>>> }
>>>> -#if defined(CONFIG_USER_ONLY)
>>>> static RISCVException read_time(CPURISCVState *env, int csrno,
>>>> target_ulong *val)
>>>> {
>>>> @@ -952,6 +936,71 @@ static RISCVException
>>>> write_mhpmeventh(CPURISCVState *env, int csrno,
>>>> return RISCV_EXCP_NONE;
>>>> }
>>>> +static target_ulong
>>>> riscv_pmu_ctr_get_fixed_counters_val(CPURISCVState *env,
>>>> + int
>>>> counter_idx,
>>>> + bool
>>>> upper_half)
>>>> +{
>>>> + uint64_t curr_val = 0;
>>>> + target_ulong result = 0;
>>>> + uint64_t *counter_arr = icount_enabled() ?
>>>> env->pmu_fixed_ctrs[1].counter :
>>>> + env->pmu_fixed_ctrs[0].counter;
>>>> + uint64_t *counter_arr_virt = icount_enabled() ?
>>>> + env->pmu_fixed_ctrs[1].counter_virt :
>>>> + env->pmu_fixed_ctrs[0].counter_virt;
>>>> + uint64_t cfg_val = 0;
>>>> +
>>>> + if (counter_idx == 0) {
>>>> + cfg_val = upper_half ? ((uint64_t)env->mcyclecfgh << 32) :
>>>> + env->mcyclecfg;
>>>> + } else if (counter_idx == 2) {
>>>> + cfg_val = upper_half ? ((uint64_t)env->minstretcfgh << 32) :
>>>> + env->minstretcfg;
>>>> + } else {
>>>> + cfg_val = upper_half ?
>>>> + ((uint64_t)env->mhpmeventh_val[counter_idx] << 32) :
>>>> + env->mhpmevent_val[counter_idx];
>>>> + cfg_val &= MHPMEVENT_FILTER_MASK;
>>>> + }
>>>> +
>>>> + if (!cfg_val) {
>>>> + if (icount_enabled()) {
>>>> + curr_val = icount_get_raw();
>>>> + } else {
>>>> + curr_val = cpu_get_host_ticks();
>>>> + }
>>>> + goto done;
>>>> + }
>>>> +
>>>> + if (!(cfg_val & MCYCLECFG_BIT_MINH)) {
>>>> + curr_val += counter_arr[PRV_M];
>>>> + }
>>>> +
>>>> + if (!(cfg_val & MCYCLECFG_BIT_SINH)) {
>>>> + curr_val += counter_arr[PRV_S];
>>>> + }
>>>> +
>>>> + if (!(cfg_val & MCYCLECFG_BIT_UINH)) {
>>>> + curr_val += counter_arr[PRV_U];
>>>> + }
>>>> +
>>>> + if (!(cfg_val & MCYCLECFG_BIT_VSINH)) {
>>>> + curr_val += counter_arr_virt[PRV_S];
>>>> + }
>>>> +
>>>> + if (!(cfg_val & MCYCLECFG_BIT_VUINH)) {
>>>> + curr_val += counter_arr_virt[PRV_U];
>>>> + }
>>>> +
>>>> +done:
>>>> + if (riscv_cpu_mxl(env) == MXL_RV32) {
>>>> + result = upper_half ? curr_val >> 32 : curr_val;
>>>> + } else {
>>>> + result = curr_val;
>>>> + }
>>>> +
>>>> + return result;
>>>> +}
>>>> +
>>>> static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
>>>> target_ulong val)
>>>> {
>>>> @@ -962,7 +1011,8 @@ static RISCVException
>>>> write_mhpmcounter(CPURISCVState *env, int csrno,
>>>> counter->mhpmcounter_val = val;
>>>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>>>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>>>> - counter->mhpmcounter_prev = get_ticks(false);
>>>> + counter->mhpmcounter_prev =
>>>> riscv_pmu_ctr_get_fixed_counters_val(env,
>>>> + ctr_idx, false);
>>>> if (ctr_idx > 2) {
>>>> if (riscv_cpu_mxl(env) == MXL_RV32) {
>>>> mhpmctr_val = mhpmctr_val |
>>>> @@ -990,7 +1040,8 @@ static RISCVException
>>>> write_mhpmcounterh(CPURISCVState *env, int csrno,
>>>> mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
>>>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>>>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>>>> - counter->mhpmcounterh_prev = get_ticks(true);
>>>> + counter->mhpmcounterh_prev =
>>>> riscv_pmu_ctr_get_fixed_counters_val(env,
>>>> + ctr_idx, true);
>>>> if (ctr_idx > 2) {
>>>> riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
>>>> }
>>>> @@ -1031,7 +1082,8 @@ static RISCVException
>>>> riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
>>>> */
>>>> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>>>> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>>>> - *val = get_ticks(upper_half) - ctr_prev + ctr_val;
>>>> + *val = riscv_pmu_ctr_get_fixed_counters_val(env, ctr_idx,
>>>> upper_half) -
>>>> + ctr_prev + ctr_val;
>>>> } else {
>>>> *val = ctr_val;
>>>> }
>>>> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
>>>> index 0e7d58b8a5c2..37309ff64cb6 100644
>>>> --- a/target/riscv/pmu.c
>>>> +++ b/target/riscv/pmu.c
>>>> @@ -19,6 +19,7 @@
>>>> #include "qemu/osdep.h"
>>>> #include "qemu/log.h"
>>>> #include "qemu/error-report.h"
>>>> +#include "qemu/timer.h"
>>>> #include "cpu.h"
>>>> #include "pmu.h"
>>>> #include "sysemu/cpu-timers.h"
>>>> @@ -176,6 +177,69 @@ static int riscv_pmu_incr_ctr_rv64(RISCVCPU
>>>> *cpu, uint32_t ctr_idx)
>>>> return 0;
>>>> }
>>>> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
>>>> newpriv)
>>>> +{
>>>> + uint64_t delta;
>>>> + uint64_t *counter_arr;
>>>> + uint64_t *counter_arr_prev;
>>>> + uint64_t current_icount = icount_get_raw();
>>>> +
>>>> + if (env->virt_enabled) {
>>>> + counter_arr = env->pmu_fixed_ctrs[1].counter_virt;
>>>> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_virt_prev;
>>>> + } else {
>>>> + counter_arr = env->pmu_fixed_ctrs[1].counter;
>>>> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_prev;
>>>> + }
>>>> +
>>>> + if (newpriv != env->priv) {
>>>> + delta = current_icount - counter_arr_prev[env->priv];
>>>> + counter_arr_prev[newpriv] = current_icount;
>>>> + } else {
>>>> + delta = current_icount - counter_arr_prev[env->priv];
>>>> + if (env->virt_enabled) {
>>>> + /* HS->VS transition.The previous value should
>>>> correspond to HS. */
>>> Hi Atish,
>>>
>>> Does env->virt_enabled ensure an HS->VS transition?
>>>
>> As per my understanding, riscv_cpu_set_virt_enabled always invoked
>> before riscv_cpu_set_mode.
> In helper_mret() we call riscv_cpu_set_mode() before
> riscv_cpu_set_virt_enabled().
Ahh yes. I missed the helper_mret condition.
> I don't think there is any requirement on which order we call the functions
Indeed. helper_mret and helper_sret invoke them in a different order.
>> That means env->virt_enabled must be enabled before we enter into this
>> function. Let me know if I missed
> env->virt_enabled will be true if we are in HS or VS mode, but you
> don't know the transition from just env->virt_enabled being set.
Hmm. In riscv_cpu_do_interrupt(), virt_enabled is set to 0 if a trap is
taken to HS mode from VS mode. Am I missing something?
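In other words, the order of operations I have in mind for that trap path is
roughly this (a sketch of the sequence as I understand it, not the exact code):

    /* Trap taken from VS into HS, in riscv_cpu_do_interrupt(): */
    env->hstatus = set_field(env->hstatus, HSTATUS_SPV, true); /* came from V=1 */
    riscv_cpu_set_virt_enabled(env, false);                    /* V cleared ...  */
    riscv_cpu_set_mode(env, PRV_S);                            /* ... before this */

So by the time riscv_pmu_icount_update_priv() runs for a VS->HS trap, it
should see virt_enabled == 0 and hstatus.SPV == 1.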
> Alistair
>
>> a scenario.
>>
>>
>>> I think VS->VS also keeps env->virt_enabled to be true, where the
>>> previous value should correspond to VS.
>>>
>> yeah. good point. We should check HSTATUS_SPV here as well.
>>
>>
>>> Thanks,
>>> Zhiwei
>>>
>>>> + env->pmu_fixed_ctrs[1].counter_prev[PRV_S] = current_icount;
>>>> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
>>>> + /* VS->HS transition.The previous value should
>>>> correspond to VS. */
>>>> + env->pmu_fixed_ctrs[1].counter_virt_prev[PRV_S] =
>>>> current_icount;
>>>> + }
>>>> + }
>>>> +
>>>> + counter_arr[env->priv] += delta;
>>>> +}
>>>> +
>>>> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong
>>>> newpriv)
>>>> +{
>>>> + uint64_t delta;
>>>> + uint64_t *counter_arr;
>>>> + uint64_t *counter_arr_prev;
>>>> + uint64_t current_host_ticks = cpu_get_host_ticks();
>>>> +
>>>> + if (env->virt_enabled) {
>>>> + counter_arr = env->pmu_fixed_ctrs[0].counter_virt;
>>>> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_virt_prev;
>>>> + } else {
>>>> + counter_arr = env->pmu_fixed_ctrs[0].counter;
>>>> + counter_arr_prev = env->pmu_fixed_ctrs[0].counter_prev;
>>>> + }
>>>> +
>>>> + if (newpriv != env->priv) {
>>>> + delta = current_host_ticks - counter_arr_prev[env->priv];
>>>> + counter_arr_prev[newpriv] = current_host_ticks;
>>>> + } else {
>>>> + delta = current_host_ticks - counter_arr_prev[env->priv];
>>>> + if (env->virt_enabled) {
>>>> + env->pmu_fixed_ctrs[0].counter_prev[PRV_S] =
>>>> current_host_ticks;
>>>> + } else if (get_field(env->hstatus, HSTATUS_SPV)) {
>>>> + env->pmu_fixed_ctrs[0].counter_virt_prev[PRV_S] =
>>>> + current_host_ticks;
>>>> + }
>>>> + }
>>>> +
>>>> + counter_arr[env->priv] += delta;
>>>> +}
>>>> +
>>>> int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx
>>>> event_idx)
>>>> {
>>>> uint32_t ctr_idx;
>>>> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
>>>> index 505fc850d38e..50de6031a730 100644
>>>> --- a/target/riscv/pmu.h
>>>> +++ b/target/riscv/pmu.h
>>>> @@ -31,3 +31,5 @@ int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum
>>>> riscv_pmu_event_idx event_idx);
>>>> void riscv_pmu_generate_fdt_node(void *fdt, uint32_t cmask, char
>>>> *pmu_name);
>>>> int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
>>>> uint32_t ctr_idx);
>>>> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
>>>> newpriv);
>>>> +void riscv_pmu_cycle_update_priv(CPURISCVState *env, target_ulong
>>>> newpriv);
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret
2024-03-20 7:21 ` Atish Patra
@ 2024-03-20 8:06 ` Alistair Francis
0 siblings, 0 replies; 15+ messages in thread
From: Alistair Francis @ 2024-03-20 8:06 UTC (permalink / raw)
To: Atish Patra
Cc: LIU Zhiwei, Daniel Henrique Barboza, Alistair Francis, Bin Meng,
Palmer Dabbelt, qemu-devel, qemu-riscv, Weiwei Li, kaiwenxue1
On Wed, Mar 20, 2024 at 5:21 PM Atish Patra <atishp@rivosinc.com> wrote:
>
>
> On 3/19/24 21:54, Alistair Francis wrote:
>
> On Thu, Mar 7, 2024 at 7:26 PM Atish Patra <atishp@rivosinc.com> wrote:
>
> On 3/4/24 22:47, LIU Zhiwei wrote:
>
> On 2024/2/29 2:51, Atish Patra wrote:
>
> Privilege mode filtering can also be emulated for cycle/instret by
> tracking host_ticks/icount during each privilege mode switch. This
> patch implements that for both cycle/instret and mhpmcounters. The
> first one requires Smcntrpmf while the other one requires Sscofpmf
> to be enabled.
>
> The cycle/instret are still computed using host ticks when icount
> is not enabled. Otherwise, they are computed using raw icount which
> is more accurate in icount mode.
>
> Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.h | 11 +++++
> target/riscv/cpu_bits.h | 5 ++
> target/riscv/cpu_helper.c | 17 ++++++-
> target/riscv/csr.c | 96 ++++++++++++++++++++++++++++++---------
> target/riscv/pmu.c | 64 ++++++++++++++++++++++++++
> target/riscv/pmu.h | 2 +
> 6 files changed, 171 insertions(+), 24 deletions(-)
>
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 174e8ba8e847..9e21d7f7d635 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -157,6 +157,15 @@ typedef struct PMUCTRState {
> target_ulong irq_overflow_left;
> } PMUCTRState;
> +typedef struct PMUFixedCtrState {
> + /* Track cycle and icount for each privilege mode */
> + uint64_t counter[4];
> + uint64_t counter_prev[4];
> + /* Track cycle and icount for each privilege mode when V = 1*/
> + uint64_t counter_virt[2];
> + uint64_t counter_virt_prev[2];
> +} PMUFixedCtrState;
> +
> struct CPUArchState {
> target_ulong gpr[32];
> target_ulong gprh[32]; /* 64 top bits of the 128-bit registers */
> @@ -353,6 +362,8 @@ struct CPUArchState {
> /* PMU event selector configured values for RV32 */
> target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> + PMUFixedCtrState pmu_fixed_ctrs[2];
> +
> target_ulong sscratch;
> target_ulong mscratch;
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index e866c60a400c..5fe349e313dc 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -920,6 +920,11 @@ typedef enum RISCVException {
> #define MHPMEVENT_BIT_VUINH BIT_ULL(58)
> #define MHPMEVENTH_BIT_VUINH BIT(26)
> +#define MHPMEVENT_FILTER_MASK (MHPMEVENT_BIT_MINH | \
> + MHPMEVENT_BIT_SINH | \
> + MHPMEVENT_BIT_UINH | \
> + MHPMEVENT_BIT_VSINH | \
> + MHPMEVENT_BIT_VUINH)
> #define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000)
> #define MHPMEVENT_IDX_MASK 0xFFFFF
> #define MHPMEVENT_SSCOF_RESVD 16
> diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
> index d462d95ee165..33965d843d46 100644
> --- a/target/riscv/cpu_helper.c
> +++ b/target/riscv/cpu_helper.c
> @@ -718,8 +718,21 @@ void riscv_cpu_set_mode(CPURISCVState *env,
> target_ulong newpriv)
> {
> g_assert(newpriv <= PRV_M && newpriv != PRV_RESERVED);
> - if (icount_enabled() && newpriv != env->priv) {
> - riscv_itrigger_update_priv(env);
> + /*
> + * Invoke the cycle/instret update when the privilege mode changes, or on a
> + * VS->HS transition (where the SPV bit must be set) or an
> + * HS->VS transition (where virt_enabled must be set).
> + * In both of those cases, priv remains S mode.
> + */
> + if (newpriv != env->priv ||
> + (env->priv == PRV_S && newpriv == PRV_S &&
> + (env->virt_enabled || get_field(env->hstatus, HSTATUS_SPV)))) {
> + if (icount_enabled()) {
> + riscv_itrigger_update_priv(env);
> + riscv_pmu_icount_update_priv(env, newpriv);
> + } else {
> + riscv_pmu_cycle_update_priv(env, newpriv);
> + }
> }
> /* tlb_flush is unnecessary as mode is contained in mmu_idx */
> env->priv = newpriv;
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index ff9bac537593..482e212c5f74 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -788,32 +788,16 @@ static RISCVException write_vcsr(CPURISCVState
> *env, int csrno,
> return RISCV_EXCP_NONE;
> }
> +#if defined(CONFIG_USER_ONLY)
> /* User Timers and Counters */
> static target_ulong get_ticks(bool shift)
> {
> - int64_t val;
> - target_ulong result;
> -
> -#if !defined(CONFIG_USER_ONLY)
> - if (icount_enabled()) {
> - val = icount_get();
> - } else {
> - val = cpu_get_host_ticks();
> - }
> -#else
> - val = cpu_get_host_ticks();
> -#endif
> -
> - if (shift) {
> - result = val >> 32;
> - } else {
> - result = val;
> - }
> + int64_t val = cpu_get_host_ticks();
> + target_ulong result = shift ? val >> 32 : val;
> return result;
> }
> -#if defined(CONFIG_USER_ONLY)
> static RISCVException read_time(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -952,6 +936,71 @@ static RISCVException
> write_mhpmeventh(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
> +static target_ulong
> riscv_pmu_ctr_get_fixed_counters_val(CPURISCVState *env,
> + int
> counter_idx,
> + bool
> upper_half)
> +{
> + uint64_t curr_val = 0;
> + target_ulong result = 0;
> + uint64_t *counter_arr = icount_enabled() ?
> env->pmu_fixed_ctrs[1].counter :
> + env->pmu_fixed_ctrs[0].counter;
> + uint64_t *counter_arr_virt = icount_enabled() ?
> + env->pmu_fixed_ctrs[1].counter_virt :
> + env->pmu_fixed_ctrs[0].counter_virt;
> + uint64_t cfg_val = 0;
> +
> + if (counter_idx == 0) {
> + cfg_val = upper_half ? ((uint64_t)env->mcyclecfgh << 32) :
> + env->mcyclecfg;
> + } else if (counter_idx == 2) {
> + cfg_val = upper_half ? ((uint64_t)env->minstretcfgh << 32) :
> + env->minstretcfg;
> + } else {
> + cfg_val = upper_half ?
> + ((uint64_t)env->mhpmeventh_val[counter_idx] << 32) :
> + env->mhpmevent_val[counter_idx];
> + cfg_val &= MHPMEVENT_FILTER_MASK;
> + }
> +
> + if (!cfg_val) {
> + if (icount_enabled()) {
> + curr_val = icount_get_raw();
> + } else {
> + curr_val = cpu_get_host_ticks();
> + }
> + goto done;
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_MINH)) {
> + curr_val += counter_arr[PRV_M];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_SINH)) {
> + curr_val += counter_arr[PRV_S];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_UINH)) {
> + curr_val += counter_arr[PRV_U];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_VSINH)) {
> + curr_val += counter_arr_virt[PRV_S];
> + }
> +
> + if (!(cfg_val & MCYCLECFG_BIT_VUINH)) {
> + curr_val += counter_arr_virt[PRV_U];
> + }
> +
> +done:
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + result = upper_half ? curr_val >> 32 : curr_val;
> + } else {
> + result = curr_val;
> + }
> +
> + return result;
> +}
> +
> static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> target_ulong val)
> {
> @@ -962,7 +1011,8 @@ static RISCVException
> write_mhpmcounter(CPURISCVState *env, int csrno,
> counter->mhpmcounter_val = val;
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> - counter->mhpmcounter_prev = get_ticks(false);
> + counter->mhpmcounter_prev =
> riscv_pmu_ctr_get_fixed_counters_val(env,
> + ctr_idx, false);
> if (ctr_idx > 2) {
> if (riscv_cpu_mxl(env) == MXL_RV32) {
> mhpmctr_val = mhpmctr_val |
> @@ -990,7 +1040,8 @@ static RISCVException
> write_mhpmcounterh(CPURISCVState *env, int csrno,
> mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> - counter->mhpmcounterh_prev = get_ticks(true);
> + counter->mhpmcounterh_prev =
> riscv_pmu_ctr_get_fixed_counters_val(env,
> + ctr_idx, true);
> if (ctr_idx > 2) {
> riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> }
> @@ -1031,7 +1082,8 @@ static RISCVException
> riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
> */
> if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> - *val = get_ticks(upper_half) - ctr_prev + ctr_val;
> + *val = riscv_pmu_ctr_get_fixed_counters_val(env, ctr_idx,
> upper_half) -
> + ctr_prev + ctr_val;
> } else {
> *val = ctr_val;
> }
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 0e7d58b8a5c2..37309ff64cb6 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,6 +19,7 @@
> #include "qemu/osdep.h"
> #include "qemu/log.h"
> #include "qemu/error-report.h"
> +#include "qemu/timer.h"
> #include "cpu.h"
> #include "pmu.h"
> #include "sysemu/cpu-timers.h"
> @@ -176,6 +177,69 @@ static int riscv_pmu_incr_ctr_rv64(RISCVCPU
> *cpu, uint32_t ctr_idx)
> return 0;
> }
> +void riscv_pmu_icount_update_priv(CPURISCVState *env, target_ulong
> newpriv)
> +{
> + uint64_t delta;
> + uint64_t *counter_arr;
> + uint64_t *counter_arr_prev;
> + uint64_t current_icount = icount_get_raw();
> +
> + if (env->virt_enabled) {
> + counter_arr = env->pmu_fixed_ctrs[1].counter_virt;
> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_virt_prev;
> + } else {
> + counter_arr = env->pmu_fixed_ctrs[1].counter;
> + counter_arr_prev = env->pmu_fixed_ctrs[1].counter_prev;
> + }
> +
> + if (newpriv != env->priv) {
> + delta = current_icount - counter_arr_prev[env->priv];
> + counter_arr_prev[newpriv] = current_icount;
> + } else {
> + delta = current_icount - counter_arr_prev[env->priv];
> + if (env->virt_enabled) {
> + /* HS->VS transition.The previous value should
> correspond to HS. */
>
> Hi Atish,
>
> Does env->virt_enabled ensure an HS->VS transition?
>
> As per my understanding, riscv_cpu_set_virt_enabled always invoked
> before riscv_cpu_set_mode.
>
> In helper_mret() we call riscv_cpu_set_mode() before
> riscv_cpu_set_virt_enabled().
>
> Ahh yes. I missed the helper_mret condition.
>
> I don't think there is any requirement on which order we call the functions
>
> Indeed. helper_mret and helper_sret invoke them in a different order.
>
> That means env->virt_enabled must be enabled before we enter into this
> function. Let me know if I missed
>
> env->virt_enabled will be true if we are in HS or VS mode, but you
> don't know the transition from just env->virt_enabled being set.
>
> Hmm. In riscv_cpu_do_interrupt(), virt_enabled is set to 0 if a trap is taken to HS mode
> from VS mode. Am I missing something?
>
Whoops, I meant VU and VS mode.
Alistair
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads: [~2024-03-20 8:07 UTC | newest]
Thread overview: 15+ messages
2024-02-28 18:51 [PATCH v5 0/5] Add ISA extension smcntrpmf support Atish Patra
2024-02-28 18:51 ` [PATCH v5 1/5] target/riscv: Fix the predicate functions for mhpmeventhX CSRs Atish Patra
2024-03-05 7:03 ` LIU Zhiwei
2024-02-28 18:51 ` [PATCH v5 2/5] target/riscv: Add cycle & instret privilege mode filtering properties Atish Patra
2024-03-04 17:24 ` Daniel Henrique Barboza
2024-03-05 7:01 ` LIU Zhiwei
2024-03-07 9:27 ` Atish Patra
2024-02-28 18:51 ` [PATCH v5 3/5] target/riscv: Add cycle & instret privilege mode filtering definitions Atish Patra
2024-02-28 18:51 ` [PATCH v5 4/5] target/riscv: Add cycle & instret privilege mode filtering support Atish Patra
2024-02-28 18:51 ` [PATCH v5 5/5] target/riscv: Implement privilege mode filtering for cycle/instret Atish Patra
2024-03-05 6:47 ` LIU Zhiwei
2024-03-07 9:25 ` Atish Patra
2024-03-20 4:54 ` Alistair Francis
2024-03-20 7:21 ` Atish Patra
2024-03-20 8:06 ` Alistair Francis