* [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
@ 2024-07-23 23:29 ` Atish Patra
2024-07-26 7:42 ` Alistair Francis
2024-07-23 23:29 ` [PATCH v2 02/13] target/riscv: Decouple AIA processing from xiselect and xireg Atish Patra
` (11 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:29 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
From: Kaiwen Xue <kaiwenx@rivosinc.com>
This adds the properties for sxcsrind. Definitions of new registers and
implementations will come with future patches.
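The user-visible properties themselves are not registered in this diff; as a rough
sketch (the placement in riscv_cpu_extensions[] and the default values are
assumptions, not part of this patch), the follow-up wiring would typically look like:

    /* Hypothetical entries in riscv_cpu_extensions[] (target/riscv/cpu.c). */
    MULTI_EXT_CFG_BOOL("smcsrind", ext_smcsrind, false),
    MULTI_EXT_CFG_BOOL("sscsrind", ext_sscsrind, false),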
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
---
target/riscv/cpu.c | 2 ++
target/riscv/cpu_cfg.h | 2 ++
2 files changed, 4 insertions(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index a90808a3bace..ebc19090b40d 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -183,12 +183,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
+ ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
ISA_EXT_DATA_ENTRY(ssccptr, PRIV_VERSION_1_11_0, has_priv_1_11),
ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
ISA_EXT_DATA_ENTRY(sscounterenw, PRIV_VERSION_1_12_0, has_priv_1_12),
+ ISA_EXT_DATA_ENTRY(sscsrind, PRIV_VERSION_1_12_0, ext_sscsrind),
ISA_EXT_DATA_ENTRY(sstc, PRIV_VERSION_1_12_0, ext_sstc),
ISA_EXT_DATA_ENTRY(sstvala, PRIV_VERSION_1_12_0, has_priv_1_12),
ISA_EXT_DATA_ENTRY(sstvecd, PRIV_VERSION_1_12_0, has_priv_1_12),
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index 8b272fb826ef..b8a5174bc871 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -77,6 +77,8 @@ struct RISCVCPUConfig {
bool ext_smstateen;
bool ext_sstc;
bool ext_smcntrpmf;
+ bool ext_smcsrind;
+ bool ext_sscsrind;
bool ext_svadu;
bool ext_svinval;
bool ext_svnapot;
--
2.34.1
* Re: [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension
2024-07-23 23:29 ` [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension Atish Patra
@ 2024-07-26 7:42 ` Alistair Francis
2024-07-27 1:33 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-07-26 7:42 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> This adds the properties for sxcsrind. Definitions of new registers and
> implementations will come with future patches.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> ---
> target/riscv/cpu.c | 2 ++
> target/riscv/cpu_cfg.h | 2 ++
> 2 files changed, 4 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index a90808a3bace..ebc19090b40d 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -183,12 +183,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
> ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> + ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
This is actually part of the unpriv spec, so it's a bit weird that it
depends on the priv spec. But that's how it's all set up.
But shouldn't this be PRIV_VERSION_1_13_0?
Alistair
* Re: [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension
2024-07-26 7:42 ` Alistair Francis
@ 2024-07-27 1:33 ` Atish Kumar Patra
2024-07-31 9:24 ` Alistair Francis
0 siblings, 1 reply; 41+ messages in thread
From: Atish Kumar Patra @ 2024-07-27 1:33 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Fri, Jul 26, 2024 at 12:42 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > From: Kaiwen Xue <kaiwenx@rivosinc.com>
> >
> > This adds the properties for sxcsrind. Definitions of new registers and
> > implementations will come with future patches.
> >
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> > ---
> > target/riscv/cpu.c | 2 ++
> > target/riscv/cpu_cfg.h | 2 ++
> > 2 files changed, 4 insertions(+)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index a90808a3bace..ebc19090b40d 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -183,12 +183,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
> > ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> > ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> > ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> > + ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
>
> This is actually part of the unpriv spec, so it's a bit weird that it
> depends on the priv spec. But that's how it's all set up.
>
Smcsrind is part of priv spec[1]. Am I missing something ?
https://drive.google.com/file/d/17GeetSnT5wW3xNuAHI95-SI1gPGd5sJ_/view
> But shouldn't this be PRIV_VERSION_1_13_0?
>
Yes. Sorry I forgot about that. smcntrpmf should also be PRIV_VERSION_1_13_0.
I will send a fix patch along with the v2 for assert fix.
> Alistair
* Re: [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension
2024-07-27 1:33 ` Atish Kumar Patra
@ 2024-07-31 9:24 ` Alistair Francis
2024-08-07 7:53 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-07-31 9:24 UTC (permalink / raw)
To: Atish Kumar Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Sat, Jul 27, 2024 at 11:33 AM Atish Kumar Patra <atishp@rivosinc.com> wrote:
>
> On Fri, Jul 26, 2024 at 12:42 AM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
> > >
> > > From: Kaiwen Xue <kaiwenx@rivosinc.com>
> > >
> > > This adds the properties for sxcsrind. Definitions of new registers and
> > > implementations will come with future patches.
> > >
> > > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > > Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> > > ---
> > > target/riscv/cpu.c | 2 ++
> > > target/riscv/cpu_cfg.h | 2 ++
> > > 2 files changed, 4 insertions(+)
> > >
> > > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > > index a90808a3bace..ebc19090b40d 100644
> > > --- a/target/riscv/cpu.c
> > > +++ b/target/riscv/cpu.c
> > > @@ -183,12 +183,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
> > > ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> > > ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> > > ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> > > + ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
> >
> > This is actually part of the unpriv spec, so it's a bit weird that it
> > depends on the priv spec. But that's how it's all set up.
> >
>
> Smcsrind is part of priv spec[1]. Am I missing something ?
>
> https://drive.google.com/file/d/17GeetSnT5wW3xNuAHI95-SI1gPGd5sJ_/view
Ah, I just saw
"This specification has been merged into the Unprivileged Specification" at
https://github.com/riscvarchive/riscv-indirect-csr-access
Alistair
>
> > But shouldn't this be PRIV_VERSION_1_13_0?
> >
>
> Yes. Sorry I forgot about that. smcntrpmf should also be PRIV_VERSION_1_13_0.
> I will send a fix patch along with the v2 for assert fix.
>
> > Alistair
* Re: [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension
2024-07-31 9:24 ` Alistair Francis
@ 2024-08-07 7:53 ` Atish Kumar Patra
0 siblings, 0 replies; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-07 7:53 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 31, 2024 at 2:24 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Sat, Jul 27, 2024 at 11:33 AM Atish Kumar Patra <atishp@rivosinc.com> wrote:
> >
> > On Fri, Jul 26, 2024 at 12:42 AM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
> > > >
> > > > From: Kaiwen Xue <kaiwenx@rivosinc.com>
> > > >
> > > > This adds the properties for sxcsrind. Definitions of new registers and
> > > > implementations will come with future patches.
> > > >
> > > > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > > > Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> > > > ---
> > > > target/riscv/cpu.c | 2 ++
> > > > target/riscv/cpu_cfg.h | 2 ++
> > > > 2 files changed, 4 insertions(+)
> > > >
> > > > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > > > index a90808a3bace..ebc19090b40d 100644
> > > > --- a/target/riscv/cpu.c
> > > > +++ b/target/riscv/cpu.c
> > > > @@ -183,12 +183,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
> > > > ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> > > > ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> > > > ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> > > > + ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
> > >
> > > This is actually part of the unpriv spec, so it's a bit weird that it
> > > depends on the priv spec. But that's how it's all set up.
> > >
> >
> > Smcsrind is part of priv spec[1]. Am I missing something ?
> >
> > https://drive.google.com/file/d/17GeetSnT5wW3xNuAHI95-SI1gPGd5sJ_/view
>
> Ah, I just saw
>
> "This specification has been merged into the Unprivileged Specification" at
>
> https://github.com/riscvarchive/riscv-indirect-csr-access
>
Probably a typo. I pinged the RVI folks, as the repo is already read-only
now. I can't create an issue either.
> Alistair
>
> >
> > > But shouldn't this be PRIV_VERSION_1_13_0?
> > >
> >
> > Yes. Sorry I forgot about that. smcntrpmf should also be PRIV_VERSION_1_13_0.
> > I will send a fix patch along with the v2 for assert fix.
> >
> > > Alistair
* [PATCH v2 02/13] target/riscv: Decouple AIA processing from xiselect and xireg
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
2024-07-23 23:29 ` [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension Atish Patra
@ 2024-07-23 23:29 ` Atish Patra
2024-08-06 0:05 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 03/13] target/riscv: Enable S*stateen bits for AIA Atish Patra
` (10 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:29 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
From: Kaiwen Xue <kaiwenx@rivosinc.com>
Since xiselect and xireg will also be used by sxcsrind, AIA should
have its own separate interface for when those CSRs are accessed.
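To make the AIA selector window concrete, below is a minimal standalone sketch of
the range check added by this patch; only ISELECT_IMSIC_EIE63 (0xff) is visible in
this series, so the iprio/eidelivery values here are assumptions based on the AIA
register layout:

#include <stdbool.h>
#include <stdint.h>

/* Assumed AIA xiselect ranges; see cpu_bits.h for the real definitions. */
#define ISELECT_IPRIO0       0x30  /* major-priority array iprio0..iprio15 */
#define ISELECT_IPRIO15      0x3f
#define ISELECT_IMSIC_FIRST  0x70  /* eidelivery .. */
#define ISELECT_IMSIC_LAST   0xff  /* .. eie63 */

static bool xiselect_aia_range(uint64_t isel)
{
    return (ISELECT_IPRIO0 <= isel && isel <= ISELECT_IPRIO15) ||
           (ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST);
}

Selector values outside these ranges fall through to the non-AIA path, which later
patches in this series fill in.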
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
---
target/riscv/csr.c | 165 ++++++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 139 insertions(+), 26 deletions(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index ea3560342c4f..58be8bc3cc8c 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -29,6 +29,7 @@
#include "sysemu/cpu-timers.h"
#include "qemu/guest-random.h"
#include "qapi/error.h"
+#include <stdbool.h>
/* CSR function table public API */
void riscv_get_csr_ops(int csrno, riscv_csr_operations *ops)
@@ -286,6 +287,15 @@ static RISCVException aia_any32(CPURISCVState *env, int csrno)
return any32(env, csrno);
}
+static RISCVException csrind_or_aia_any(CPURISCVState *env, int csrno)
+{
+ if (!riscv_cpu_cfg(env)->ext_smaia && !riscv_cpu_cfg(env)->ext_smcsrind) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return any(env, csrno);
+}
+
static RISCVException smode(CPURISCVState *env, int csrno)
{
if (riscv_has_ext(env, RVS)) {
@@ -322,6 +332,30 @@ static RISCVException aia_smode32(CPURISCVState *env, int csrno)
return smode32(env, csrno);
}
+static bool csrind_extensions_present(CPURISCVState *env)
+{
+ return riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind;
+}
+
+static bool aia_extensions_present(CPURISCVState *env)
+{
+ return riscv_cpu_cfg(env)->ext_smaia || riscv_cpu_cfg(env)->ext_ssaia;
+}
+
+static bool csrind_or_aia_extensions_present(CPURISCVState *env)
+{
+ return csrind_extensions_present(env) || aia_extensions_present(env);
+}
+
+static RISCVException csrind_or_aia_smode(CPURISCVState *env, int csrno)
+{
+ if (!csrind_or_aia_extensions_present(env)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return smode(env, csrno);
+}
+
static RISCVException hmode(CPURISCVState *env, int csrno)
{
if (riscv_has_ext(env, RVH)) {
@@ -341,6 +375,15 @@ static RISCVException hmode32(CPURISCVState *env, int csrno)
}
+static RISCVException csrind_or_aia_hmode(CPURISCVState *env, int csrno)
+{
+ if (!csrind_or_aia_extensions_present(env)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return hmode(env, csrno);
+}
+
static RISCVException umode(CPURISCVState *env, int csrno)
{
if (riscv_has_ext(env, RVU)) {
@@ -1928,6 +1971,22 @@ static int aia_xlate_vs_csrno(CPURISCVState *env, int csrno)
};
}
+static int csrind_xlate_vs_csrno(CPURISCVState *env, int csrno)
+{
+ if (!env->virt_enabled) {
+ return csrno;
+ }
+
+ switch (csrno) {
+ case CSR_SISELECT:
+ return CSR_VSISELECT;
+ case CSR_SIREG:
+ return CSR_VSIREG;
+ default:
+ return csrno;
+ };
+}
+
static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
target_ulong *val, target_ulong new_val,
target_ulong wr_mask)
@@ -1935,7 +1994,7 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
target_ulong *iselect;
/* Translate CSR number for VS-mode */
- csrno = aia_xlate_vs_csrno(env, csrno);
+ csrno = csrind_xlate_vs_csrno(env, csrno);
/* Find the iselect CSR based on CSR number */
switch (csrno) {
@@ -1964,6 +2023,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static bool xiselect_aia_range(target_ulong isel)
+{
+ return (ISELECT_IPRIO0 <= isel && isel <= ISELECT_IPRIO15) ||
+ (ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST);
+}
+
static int rmw_iprio(target_ulong xlen,
target_ulong iselect, uint8_t *iprio,
target_ulong *val, target_ulong new_val,
@@ -2009,45 +2074,44 @@ static int rmw_iprio(target_ulong xlen,
return 0;
}
-static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
- target_ulong *val, target_ulong new_val,
- target_ulong wr_mask)
+static RISCVException rmw_xireg_aia(CPURISCVState *env, int csrno,
+ target_ulong isel, target_ulong *val,
+ target_ulong new_val, target_ulong wr_mask)
{
- bool virt, isel_reserved;
- uint8_t *iprio;
+ bool virt = false, isel_reserved = false;
int ret = -EINVAL;
- target_ulong priv, isel, vgein;
-
- /* Translate CSR number for VS-mode */
- csrno = aia_xlate_vs_csrno(env, csrno);
+ uint8_t *iprio;
+ target_ulong priv, vgein;
- /* Decode register details from CSR number */
- virt = false;
- isel_reserved = false;
+ /* VS-mode CSR number passed in has already been translated */
switch (csrno) {
case CSR_MIREG:
+ if (!riscv_cpu_cfg(env)->ext_smaia) {
+ goto done;
+ }
iprio = env->miprio;
- isel = env->miselect;
priv = PRV_M;
break;
case CSR_SIREG:
- if (env->priv == PRV_S && env->mvien & MIP_SEIP &&
+ if (!riscv_cpu_cfg(env)->ext_ssaia ||
+ (env->priv == PRV_S && env->mvien & MIP_SEIP &&
env->siselect >= ISELECT_IMSIC_EIDELIVERY &&
- env->siselect <= ISELECT_IMSIC_EIE63) {
+ env->siselect <= ISELECT_IMSIC_EIE63)) {
goto done;
}
iprio = env->siprio;
- isel = env->siselect;
priv = PRV_S;
break;
case CSR_VSIREG:
+ if (!riscv_cpu_cfg(env)->ext_ssaia) {
+ goto done;
+ }
iprio = env->hviprio;
- isel = env->vsiselect;
priv = PRV_S;
virt = true;
break;
default:
- goto done;
+ goto done;
};
/* Find the selected guest interrupt file */
@@ -2078,10 +2142,54 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
}
done:
+ /*
+ * If AIA is not enabled, illegal instruction exception is always
+ * returned regardless of whether we are in VS-mode or not
+ */
if (ret) {
return (env->virt_enabled && virt && !isel_reserved) ?
RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
}
+
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
+ target_ulong *val, target_ulong new_val,
+ target_ulong wr_mask)
+{
+ bool virt = false;
+ int ret = -EINVAL;
+ target_ulong isel;
+
+ /* Translate CSR number for VS-mode */
+ csrno = csrind_xlate_vs_csrno(env, csrno);
+
+ /* Decode register details from CSR number */
+ switch (csrno) {
+ case CSR_MIREG:
+ isel = env->miselect;
+ break;
+ case CSR_SIREG:
+ isel = env->siselect;
+ break;
+ case CSR_VSIREG:
+ isel = env->vsiselect;
+ virt = true;
+ break;
+ default:
+ goto done;
+ };
+
+ if (xiselect_aia_range(isel)) {
+ return rmw_xireg_aia(env, csrno, isel, val, new_val, wr_mask);
+ }
+
+done:
+ if (ret) {
+ return (env->virt_enabled && virt) ?
+ RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+ }
return RISCV_EXCP_NONE;
}
@@ -4981,8 +5089,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
/* Machine-Level Window to Indirectly Accessed Registers (AIA) */
- [CSR_MISELECT] = { "miselect", aia_any, NULL, NULL, rmw_xiselect },
- [CSR_MIREG] = { "mireg", aia_any, NULL, NULL, rmw_xireg },
+ [CSR_MISELECT] = { "miselect", csrind_or_aia_any, NULL, NULL,
+ rmw_xiselect },
+ [CSR_MIREG] = { "mireg", csrind_or_aia_any, NULL, NULL,
+ rmw_xireg },
/* Machine-Level Interrupts (AIA) */
[CSR_MTOPEI] = { "mtopei", aia_any, NULL, NULL, rmw_xtopei },
@@ -5100,8 +5210,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_SATP] = { "satp", satp, read_satp, write_satp },
/* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
- [CSR_SISELECT] = { "siselect", aia_smode, NULL, NULL, rmw_xiselect },
- [CSR_SIREG] = { "sireg", aia_smode, NULL, NULL, rmw_xireg },
+ [CSR_SISELECT] = { "siselect", csrind_or_aia_smode, NULL, NULL,
+ rmw_xiselect },
+ [CSR_SIREG] = { "sireg", csrind_or_aia_smode, NULL, NULL,
+ rmw_xireg },
/* Supervisor-Level Interrupts (AIA) */
[CSR_STOPEI] = { "stopei", aia_smode, NULL, NULL, rmw_xtopei },
@@ -5180,9 +5292,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
/*
* VS-Level Window to Indirectly Accessed Registers (H-extension with AIA)
*/
- [CSR_VSISELECT] = { "vsiselect", aia_hmode, NULL, NULL,
- rmw_xiselect },
- [CSR_VSIREG] = { "vsireg", aia_hmode, NULL, NULL, rmw_xireg },
+ [CSR_VSISELECT] = { "vsiselect", csrind_or_aia_hmode, NULL, NULL,
+ rmw_xiselect },
+ [CSR_VSIREG] = { "vsireg", csrind_or_aia_hmode, NULL, NULL,
+ rmw_xireg },
/* VS-Level Interrupts (H-extension with AIA) */
[CSR_VSTOPEI] = { "vstopei", aia_hmode, NULL, NULL, rmw_xtopei },
--
2.34.1
* Re: [PATCH v2 02/13] target/riscv: Decouple AIA processing from xiselect and xireg
2024-07-23 23:29 ` [PATCH v2 02/13] target/riscv: Decouple AIA processing from xiselect and xireg Atish Patra
@ 2024-08-06 0:05 ` Alistair Francis
0 siblings, 0 replies; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 0:05 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> Since xiselect and xireg will also be used by sxcsrind, AIA should
> have its own separate interface for when those CSRs are accessed.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/csr.c | 165 ++++++++++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 139 insertions(+), 26 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index ea3560342c4f..58be8bc3cc8c 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -29,6 +29,7 @@
> #include "sysemu/cpu-timers.h"
> #include "qemu/guest-random.h"
> #include "qapi/error.h"
> +#include <stdbool.h>
>
> /* CSR function table public API */
> void riscv_get_csr_ops(int csrno, riscv_csr_operations *ops)
> @@ -286,6 +287,15 @@ static RISCVException aia_any32(CPURISCVState *env, int csrno)
> return any32(env, csrno);
> }
>
> +static RISCVException csrind_or_aia_any(CPURISCVState *env, int csrno)
> +{
> + if (!riscv_cpu_cfg(env)->ext_smaia && !riscv_cpu_cfg(env)->ext_smcsrind) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return any(env, csrno);
> +}
> +
> static RISCVException smode(CPURISCVState *env, int csrno)
> {
> if (riscv_has_ext(env, RVS)) {
> @@ -322,6 +332,30 @@ static RISCVException aia_smode32(CPURISCVState *env, int csrno)
> return smode32(env, csrno);
> }
>
> +static bool csrind_extensions_present(CPURISCVState *env)
> +{
> + return riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind;
> +}
> +
> +static bool aia_extensions_present(CPURISCVState *env)
> +{
> + return riscv_cpu_cfg(env)->ext_smaia || riscv_cpu_cfg(env)->ext_ssaia;
> +}
> +
> +static bool csrind_or_aia_extensions_present(CPURISCVState *env)
> +{
> + return csrind_extensions_present(env) || aia_extensions_present(env);
> +}
> +
> +static RISCVException csrind_or_aia_smode(CPURISCVState *env, int csrno)
> +{
> + if (!csrind_or_aia_extensions_present(env)) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return smode(env, csrno);
> +}
> +
> static RISCVException hmode(CPURISCVState *env, int csrno)
> {
> if (riscv_has_ext(env, RVH)) {
> @@ -341,6 +375,15 @@ static RISCVException hmode32(CPURISCVState *env, int csrno)
>
> }
>
> +static RISCVException csrind_or_aia_hmode(CPURISCVState *env, int csrno)
> +{
> + if (!csrind_or_aia_extensions_present(env)) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return hmode(env, csrno);
> +}
> +
> static RISCVException umode(CPURISCVState *env, int csrno)
> {
> if (riscv_has_ext(env, RVU)) {
> @@ -1928,6 +1971,22 @@ static int aia_xlate_vs_csrno(CPURISCVState *env, int csrno)
> };
> }
>
> +static int csrind_xlate_vs_csrno(CPURISCVState *env, int csrno)
> +{
> + if (!env->virt_enabled) {
> + return csrno;
> + }
> +
> + switch (csrno) {
> + case CSR_SISELECT:
> + return CSR_VSISELECT;
> + case CSR_SIREG:
> + return CSR_VSIREG;
> + default:
> + return csrno;
> + };
> +}
> +
> static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> target_ulong *val, target_ulong new_val,
> target_ulong wr_mask)
> @@ -1935,7 +1994,7 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> target_ulong *iselect;
>
> /* Translate CSR number for VS-mode */
> - csrno = aia_xlate_vs_csrno(env, csrno);
> + csrno = csrind_xlate_vs_csrno(env, csrno);
>
> /* Find the iselect CSR based on CSR number */
> switch (csrno) {
> @@ -1964,6 +2023,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static bool xiselect_aia_range(target_ulong isel)
> +{
> + return (ISELECT_IPRIO0 <= isel && isel <= ISELECT_IPRIO15) ||
> + (ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST);
> +}
> +
> static int rmw_iprio(target_ulong xlen,
> target_ulong iselect, uint8_t *iprio,
> target_ulong *val, target_ulong new_val,
> @@ -2009,45 +2074,44 @@ static int rmw_iprio(target_ulong xlen,
> return 0;
> }
>
> -static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> - target_ulong *val, target_ulong new_val,
> - target_ulong wr_mask)
> +static RISCVException rmw_xireg_aia(CPURISCVState *env, int csrno,
> + target_ulong isel, target_ulong *val,
> + target_ulong new_val, target_ulong wr_mask)
> {
> - bool virt, isel_reserved;
> - uint8_t *iprio;
> + bool virt = false, isel_reserved = false;
> int ret = -EINVAL;
> - target_ulong priv, isel, vgein;
> -
> - /* Translate CSR number for VS-mode */
> - csrno = aia_xlate_vs_csrno(env, csrno);
> + uint8_t *iprio;
> + target_ulong priv, vgein;
>
> - /* Decode register details from CSR number */
> - virt = false;
> - isel_reserved = false;
> + /* VS-mode CSR number passed in has already been translated */
> switch (csrno) {
> case CSR_MIREG:
> + if (!riscv_cpu_cfg(env)->ext_smaia) {
> + goto done;
> + }
> iprio = env->miprio;
> - isel = env->miselect;
> priv = PRV_M;
> break;
> case CSR_SIREG:
> - if (env->priv == PRV_S && env->mvien & MIP_SEIP &&
> + if (!riscv_cpu_cfg(env)->ext_ssaia ||
> + (env->priv == PRV_S && env->mvien & MIP_SEIP &&
> env->siselect >= ISELECT_IMSIC_EIDELIVERY &&
> - env->siselect <= ISELECT_IMSIC_EIE63) {
> + env->siselect <= ISELECT_IMSIC_EIE63)) {
> goto done;
> }
> iprio = env->siprio;
> - isel = env->siselect;
> priv = PRV_S;
> break;
> case CSR_VSIREG:
> + if (!riscv_cpu_cfg(env)->ext_ssaia) {
> + goto done;
> + }
> iprio = env->hviprio;
> - isel = env->vsiselect;
> priv = PRV_S;
> virt = true;
> break;
> default:
> - goto done;
> + goto done;
> };
>
> /* Find the selected guest interrupt file */
> @@ -2078,10 +2142,54 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> }
>
> done:
> + /*
> + * If AIA is not enabled, illegal instruction exception is always
> + * returned regardless of whether we are in VS-mode or not
> + */
> if (ret) {
> return (env->virt_enabled && virt && !isel_reserved) ?
> RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
> }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> +static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> + target_ulong *val, target_ulong new_val,
> + target_ulong wr_mask)
> +{
> + bool virt = false;
> + int ret = -EINVAL;
> + target_ulong isel;
> +
> + /* Translate CSR number for VS-mode */
> + csrno = csrind_xlate_vs_csrno(env, csrno);
> +
> + /* Decode register details from CSR number */
> + switch (csrno) {
> + case CSR_MIREG:
> + isel = env->miselect;
> + break;
> + case CSR_SIREG:
> + isel = env->siselect;
> + break;
> + case CSR_VSIREG:
> + isel = env->vsiselect;
> + virt = true;
> + break;
> + default:
> + goto done;
> + };
> +
> + if (xiselect_aia_range(isel)) {
> + return rmw_xireg_aia(env, csrno, isel, val, new_val, wr_mask);
> + }
> +
> +done:
> + if (ret) {
> + return (env->virt_enabled && virt) ?
> + RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
> + }
> return RISCV_EXCP_NONE;
> }
>
> @@ -4981,8 +5089,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MIP] = { "mip", any, NULL, NULL, rmw_mip },
>
> /* Machine-Level Window to Indirectly Accessed Registers (AIA) */
> - [CSR_MISELECT] = { "miselect", aia_any, NULL, NULL, rmw_xiselect },
> - [CSR_MIREG] = { "mireg", aia_any, NULL, NULL, rmw_xireg },
> + [CSR_MISELECT] = { "miselect", csrind_or_aia_any, NULL, NULL,
> + rmw_xiselect },
> + [CSR_MIREG] = { "mireg", csrind_or_aia_any, NULL, NULL,
> + rmw_xireg },
>
> /* Machine-Level Interrupts (AIA) */
> [CSR_MTOPEI] = { "mtopei", aia_any, NULL, NULL, rmw_xtopei },
> @@ -5100,8 +5210,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_SATP] = { "satp", satp, read_satp, write_satp },
>
> /* Supervisor-Level Window to Indirectly Accessed Registers (AIA) */
> - [CSR_SISELECT] = { "siselect", aia_smode, NULL, NULL, rmw_xiselect },
> - [CSR_SIREG] = { "sireg", aia_smode, NULL, NULL, rmw_xireg },
> + [CSR_SISELECT] = { "siselect", csrind_or_aia_smode, NULL, NULL,
> + rmw_xiselect },
> + [CSR_SIREG] = { "sireg", csrind_or_aia_smode, NULL, NULL,
> + rmw_xireg },
>
> /* Supervisor-Level Interrupts (AIA) */
> [CSR_STOPEI] = { "stopei", aia_smode, NULL, NULL, rmw_xtopei },
> @@ -5180,9 +5292,10 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> /*
> * VS-Level Window to Indirectly Accessed Registers (H-extension with AIA)
> */
> - [CSR_VSISELECT] = { "vsiselect", aia_hmode, NULL, NULL,
> - rmw_xiselect },
> - [CSR_VSIREG] = { "vsireg", aia_hmode, NULL, NULL, rmw_xireg },
> + [CSR_VSISELECT] = { "vsiselect", csrind_or_aia_hmode, NULL, NULL,
> + rmw_xiselect },
> + [CSR_VSIREG] = { "vsireg", csrind_or_aia_hmode, NULL, NULL,
> + rmw_xireg },
>
> /* VS-Level Interrupts (H-extension with AIA) */
> [CSR_VSTOPEI] = { "vstopei", aia_hmode, NULL, NULL, rmw_xtopei },
>
> --
> 2.34.1
>
>
* [PATCH v2 03/13] target/riscv: Enable S*stateen bits for AIA
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
2024-07-23 23:29 ` [PATCH v2 01/13] target/riscv: Add properties for Indirect CSR Access extension Atish Patra
2024-07-23 23:29 ` [PATCH v2 02/13] target/riscv: Decouple AIA processing from xiselect and xireg Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 0:12 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 04/13] target/riscv: Support generic CSR indirect access Atish Patra
` (9 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
As per the ratified AIA spec v1.0, three stateen bits control AIA CSR
access.
Bit 60 controls the indirect CSRs
Bit 59 controls most of the AIA CSR state
Bit 58 controls the IMSIC state such as stopei and vstopei
Enable the corresponding bits in [m|h]stateen and enable corresponding
checks in the CSR accessor functions.
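For reference, the three bits above map onto the mask names used in the hunks
below; a minimal sketch (bit positions taken from the list above, exact macro
values assumed to match cpu_bits.h):

/* Bits 58-60 of [m|h]stateen0, as described above. */
#define SMSTATEEN0_IMSIC   (1ULL << 58)  /* IMSIC state: stopei, vstopei   */
#define SMSTATEEN0_AIA     (1ULL << 59)  /* most other AIA CSR state       */
#define SMSTATEEN0_SVSLCT  (1ULL << 60)  /* indirect CSRs: siselect, sireg */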
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 87 insertions(+), 1 deletion(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 58be8bc3cc8c..18b9ae802b15 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -316,19 +316,42 @@ static RISCVException smode32(CPURISCVState *env, int csrno)
static RISCVException aia_smode(CPURISCVState *env, int csrno)
{
+ int ret;
+
if (!riscv_cpu_cfg(env)->ext_ssaia) {
return RISCV_EXCP_ILLEGAL_INST;
}
+ if (csrno == CSR_STOPEI) {
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_IMSIC);
+ } else {
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
+ }
+
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
return smode(env, csrno);
}
static RISCVException aia_smode32(CPURISCVState *env, int csrno)
{
+ int ret;
+
if (!riscv_cpu_cfg(env)->ext_ssaia) {
return RISCV_EXCP_ILLEGAL_INST;
}
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
return smode32(env, csrno);
}
@@ -567,15 +590,38 @@ static RISCVException pointer_masking(CPURISCVState *env, int csrno)
static RISCVException aia_hmode(CPURISCVState *env, int csrno)
{
+ int ret;
+
if (!riscv_cpu_cfg(env)->ext_ssaia) {
return RISCV_EXCP_ILLEGAL_INST;
}
- return hmode(env, csrno);
+ if (csrno == CSR_VSTOPEI) {
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_IMSIC);
+ } else {
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
+ }
+
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
+ return hmode(env, csrno);
}
static RISCVException aia_hmode32(CPURISCVState *env, int csrno)
{
+ int ret;
+
+ if (!riscv_cpu_cfg(env)->ext_ssaia) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
if (!riscv_cpu_cfg(env)->ext_ssaia) {
return RISCV_EXCP_ILLEGAL_INST;
}
@@ -1992,6 +2038,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
target_ulong wr_mask)
{
target_ulong *iselect;
+ int ret;
+
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
/* Translate CSR number for VS-mode */
csrno = csrind_xlate_vs_csrno(env, csrno);
@@ -2162,6 +2214,11 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
int ret = -EINVAL;
target_ulong isel;
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
/* Translate CSR number for VS-mode */
csrno = csrind_xlate_vs_csrno(env, csrno);
@@ -2610,6 +2667,22 @@ static RISCVException write_mstateen0(CPURISCVState *env, int csrno,
if (env->priv_ver >= PRIV_VERSION_1_13_0) {
wr_mask |= SMSTATEEN0_P1P13;
}
+ /*
+ * TODO: Do we need to check ssaia as well ? Can we enable ssaia without
+ * smaia ?
+ */
+ if (riscv_cpu_cfg(env)->ext_smaia) {
+ wr_mask |= SMSTATEEN0_SVSLCT;
+ }
+
+ /*
+ * As per the AIA specification, SMSTATEEN0_IMSIC is valid only if IMSIC is
+ * implemented. However, that information is with MachineState and we can't
+ * figure that out in csr.c. Just enable if Smaia is available.
+ */
+ if (riscv_cpu_cfg(env)->ext_smaia) {
+ wr_mask |= (SMSTATEEN0_AIA | SMSTATEEN0_IMSIC);
+ }
return write_mstateen(env, csrno, wr_mask, new_val);
}
@@ -2691,6 +2764,19 @@ static RISCVException write_hstateen0(CPURISCVState *env, int csrno,
wr_mask |= SMSTATEEN0_FCSR;
}
+ if (riscv_cpu_cfg(env)->ext_ssaia) {
+ wr_mask |= SMSTATEEN0_SVSLCT;
+ }
+
+ /*
+ * As per the AIA specification, SMSTATEEN0_IMSIC is valid only if IMSIC is
+ * implemented. However, that information is with MachineState and we can't
+ * figure that out in csr.c. Just enable if Ssaia is available.
+ */
+ if (riscv_cpu_cfg(env)->ext_ssaia) {
+ wr_mask |= (SMSTATEEN0_AIA | SMSTATEEN0_IMSIC);
+ }
+
return write_hstateen(env, csrno, wr_mask, new_val);
}
--
2.34.1
* Re: [PATCH v2 03/13] target/riscv: Enable S*stateen bits for AIA
2024-07-23 23:30 ` [PATCH v2 03/13] target/riscv: Enable S*stateen bits for AIA Atish Patra
@ 2024-08-06 0:12 ` Alistair Francis
2024-08-07 7:53 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 0:12 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> As per the ratified AIA spec v1.0, three stateen bits control AIA CSR
> access.
>
> Bit 60 controls the indirect CSRs
> Bit 59 controls most of the AIA CSR state
> Bit 58 controls the IMSIC state such as stopei and vstopei
>
> Enable the corresponding bits in [m|h]stateen and enable corresponding
> checks in the CSR accessor functions.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/csr.c | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 87 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 58be8bc3cc8c..18b9ae802b15 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -316,19 +316,42 @@ static RISCVException smode32(CPURISCVState *env, int csrno)
>
> static RISCVException aia_smode(CPURISCVState *env, int csrno)
> {
> + int ret;
> +
> if (!riscv_cpu_cfg(env)->ext_ssaia) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> + if (csrno == CSR_STOPEI) {
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_IMSIC);
> + } else {
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> + }
> +
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> return smode(env, csrno);
> }
>
> static RISCVException aia_smode32(CPURISCVState *env, int csrno)
> {
> + int ret;
> +
> if (!riscv_cpu_cfg(env)->ext_ssaia) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> return smode32(env, csrno);
> }
>
> @@ -567,15 +590,38 @@ static RISCVException pointer_masking(CPURISCVState *env, int csrno)
>
> static RISCVException aia_hmode(CPURISCVState *env, int csrno)
> {
> + int ret;
> +
> if (!riscv_cpu_cfg(env)->ext_ssaia) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
>
> - return hmode(env, csrno);
> + if (csrno == CSR_VSTOPEI) {
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_IMSIC);
> + } else {
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> + }
> +
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> + return hmode(env, csrno);
> }
>
> static RISCVException aia_hmode32(CPURISCVState *env, int csrno)
> {
> + int ret;
> +
> + if (!riscv_cpu_cfg(env)->ext_ssaia) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> if (!riscv_cpu_cfg(env)->ext_ssaia) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> @@ -1992,6 +2038,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> target_ulong wr_mask)
> {
> target_ulong *iselect;
> + int ret;
> +
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
>
> /* Translate CSR number for VS-mode */
> csrno = csrind_xlate_vs_csrno(env, csrno);
> @@ -2162,6 +2214,11 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> int ret = -EINVAL;
> target_ulong isel;
>
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> /* Translate CSR number for VS-mode */
> csrno = csrind_xlate_vs_csrno(env, csrno);
>
> @@ -2610,6 +2667,22 @@ static RISCVException write_mstateen0(CPURISCVState *env, int csrno,
> if (env->priv_ver >= PRIV_VERSION_1_13_0) {
> wr_mask |= SMSTATEEN0_P1P13;
> }
> + /*
> + * TODO: Do we need to check ssaia as well ? Can we enable ssaia without
> + * smaia ?
> + */
> + if (riscv_cpu_cfg(env)->ext_smaia) {
> + wr_mask |= SMSTATEEN0_SVSLCT;
> + }
This looks right to me, do we need the TODO?
Otherwise
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
* Re: [PATCH v2 03/13] target/riscv: Enable S*stateen bits for AIA
2024-08-06 0:12 ` Alistair Francis
@ 2024-08-07 7:53 ` Atish Kumar Patra
0 siblings, 0 replies; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-07 7:53 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Mon, Aug 5, 2024 at 5:12 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > As per the ratified AIA spec v1.0, three stateen bits control AIA CSR
> > access.
> >
> > Bit 60 controls the indirect CSRs
> > Bit 59 controls most of the AIA CSR state
> > Bit 58 controls the IMSIC state such as stopei and vstopei
> >
> > Enable the corresponding bits in [m|h]stateen and enable corresponding
> > checks in the CSR accessor functions.
> >
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> > target/riscv/csr.c | 88 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 87 insertions(+), 1 deletion(-)
> >
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index 58be8bc3cc8c..18b9ae802b15 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -316,19 +316,42 @@ static RISCVException smode32(CPURISCVState *env, int csrno)
> >
> > static RISCVException aia_smode(CPURISCVState *env, int csrno)
> > {
> > + int ret;
> > +
> > if (!riscv_cpu_cfg(env)->ext_ssaia) {
> > return RISCV_EXCP_ILLEGAL_INST;
> > }
> >
> > + if (csrno == CSR_STOPEI) {
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_IMSIC);
> > + } else {
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> > + }
> > +
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > return smode(env, csrno);
> > }
> >
> > static RISCVException aia_smode32(CPURISCVState *env, int csrno)
> > {
> > + int ret;
> > +
> > if (!riscv_cpu_cfg(env)->ext_ssaia) {
> > return RISCV_EXCP_ILLEGAL_INST;
> > }
> >
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > return smode32(env, csrno);
> > }
> >
> > @@ -567,15 +590,38 @@ static RISCVException pointer_masking(CPURISCVState *env, int csrno)
> >
> > static RISCVException aia_hmode(CPURISCVState *env, int csrno)
> > {
> > + int ret;
> > +
> > if (!riscv_cpu_cfg(env)->ext_ssaia) {
> > return RISCV_EXCP_ILLEGAL_INST;
> > }
> >
> > - return hmode(env, csrno);
> > + if (csrno == CSR_VSTOPEI) {
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_IMSIC);
> > + } else {
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> > + }
> > +
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > + return hmode(env, csrno);
> > }
> >
> > static RISCVException aia_hmode32(CPURISCVState *env, int csrno)
> > {
> > + int ret;
> > +
> > + if (!riscv_cpu_cfg(env)->ext_ssaia) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_AIA);
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > if (!riscv_cpu_cfg(env)->ext_ssaia) {
> > return RISCV_EXCP_ILLEGAL_INST;
> > }
> > @@ -1992,6 +2038,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> > target_ulong wr_mask)
> > {
> > target_ulong *iselect;
> > + int ret;
> > +
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> >
> > /* Translate CSR number for VS-mode */
> > csrno = csrind_xlate_vs_csrno(env, csrno);
> > @@ -2162,6 +2214,11 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> > int ret = -EINVAL;
> > target_ulong isel;
> >
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > /* Translate CSR number for VS-mode */
> > csrno = csrind_xlate_vs_csrno(env, csrno);
> >
> > @@ -2610,6 +2667,22 @@ static RISCVException write_mstateen0(CPURISCVState *env, int csrno,
> > if (env->priv_ver >= PRIV_VERSION_1_13_0) {
> > wr_mask |= SMSTATEEN0_P1P13;
> > }
> > + /*
> > + * TODO: Do we need to check ssaia as well ? Can we enable ssaia without
> > + * smaia ?
> > + */
> > + if (riscv_cpu_cfg(env)->ext_smaia) {
> > + wr_mask |= SMSTATEEN0_SVSLCT;
> > + }
>
> This looks right to me, do we need the TODO?
>
cool. I will remove the TODO.
> Otherwise
>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>
> Alistair
* [PATCH v2 04/13] target/riscv: Support generic CSR indirect access
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (2 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 03/13] target/riscv: Enable S*stateen bits for AIA Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 0:15 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 05/13] target/riscv: Add counter delegation definitions Atish Patra
` (8 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
From: Kaiwen Xue <kaiwenx@rivosinc.com>
This adds the indirect access registers required by sscsrind/smcsrind
and the operations on them. Note that xiselect and xireg are used for
both AIA and sxcsrind, and the behavior of accessing them depends on
whether each extension is enabled and the value stored in xiselect.
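As a usage illustration (not part of this patch), S-mode software reaches the
indirectly accessed state by programming siselect and then touching one of the
sireg aliases. A minimal sketch, assuming an assembler that knows these CSR names
(otherwise the numeric addresses 0x150/0x151 defined in cpu_bits.h can be used
instead):

/* Hypothetical guest-side helper; the selector value is whatever a
 * later patch defines for the state being accessed. */
static inline unsigned long csrind_read(unsigned long isel)
{
    unsigned long val;

    __asm__ volatile("csrw siselect, %1\n\t"
                     "csrr  %0, sireg"
                     : "=r"(val)
                     : "r"(isel));
    return val;
}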
Co-developed-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
---
target/riscv/cpu.c | 2 +
target/riscv/cpu_bits.h | 28 ++++++++-
target/riscv/cpu_cfg.h | 2 +
target/riscv/csr.c | 149 ++++++++++++++++++++++++++++++++++++++++++++++--
4 files changed, 175 insertions(+), 6 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index ebc19090b40d..ac2dce734d80 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -182,12 +182,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
+ ISA_EXT_DATA_ENTRY(smcdeleg, PRIV_VERSION_1_12_0, ext_smcdeleg),
ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
ISA_EXT_DATA_ENTRY(ssccptr, PRIV_VERSION_1_11_0, has_priv_1_11),
+ ISA_EXT_DATA_ENTRY(ssccfg, PRIV_VERSION_1_12_0, ext_ssccfg),
ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
ISA_EXT_DATA_ENTRY(sscounterenw, PRIV_VERSION_1_12_0, has_priv_1_12),
ISA_EXT_DATA_ENTRY(sscsrind, PRIV_VERSION_1_12_0, ext_sscsrind),
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index 32b068f18aa5..2a8b53a6622e 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -170,6 +170,13 @@
#define CSR_MISELECT 0x350
#define CSR_MIREG 0x351
+/* Machine Indirect Register Alias */
+#define CSR_MIREG2 0x352
+#define CSR_MIREG3 0x353
+#define CSR_MIREG4 0x355
+#define CSR_MIREG5 0x356
+#define CSR_MIREG6 0x357
+
/* Machine-Level Interrupts (AIA) */
#define CSR_MTOPEI 0x35c
#define CSR_MTOPI 0xfb0
@@ -219,6 +226,13 @@
#define CSR_SISELECT 0x150
#define CSR_SIREG 0x151
+/* Supervisor Indirect Register Alias */
+#define CSR_SIREG2 0x152
+#define CSR_SIREG3 0x153
+#define CSR_SIREG4 0x155
+#define CSR_SIREG5 0x156
+#define CSR_SIREG6 0x157
+
/* Supervisor-Level Interrupts (AIA) */
#define CSR_STOPEI 0x15c
#define CSR_STOPI 0xdb0
@@ -285,6 +299,13 @@
#define CSR_VSISELECT 0x250
#define CSR_VSIREG 0x251
+/* Virtual Supervisor Indirect Alias */
+#define CSR_VSIREG2 0x252
+#define CSR_VSIREG3 0x253
+#define CSR_VSIREG4 0x255
+#define CSR_VSIREG5 0x256
+#define CSR_VSIREG6 0x257
+
/* VS-Level Interrupts (H-extension with AIA) */
#define CSR_VSTOPEI 0x25c
#define CSR_VSTOPI 0xeb0
@@ -846,10 +867,13 @@ typedef enum RISCVException {
#define ISELECT_IMSIC_EIE63 0xff
#define ISELECT_IMSIC_FIRST ISELECT_IMSIC_EIDELIVERY
#define ISELECT_IMSIC_LAST ISELECT_IMSIC_EIE63
-#define ISELECT_MASK 0x1ff
+#define ISELECT_MASK_AIA 0x1ff
+
+/* MISELECT, SISELECT, and VSISELECT bits (AIA) */
+#define ISELECT_MASK_SXCSRIND 0xfff
/* Dummy [M|S|VS]ISELECT value for emulating [M|S|VS]TOPEI CSRs */
-#define ISELECT_IMSIC_TOPEI (ISELECT_MASK + 1)
+#define ISELECT_IMSIC_TOPEI (ISELECT_MASK_AIA + 1)
/* IMSIC bits (AIA) */
#define IMSIC_TOPEI_IID_SHIFT 16
diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
index b8a5174bc871..b2be3d47c7a3 100644
--- a/target/riscv/cpu_cfg.h
+++ b/target/riscv/cpu_cfg.h
@@ -79,6 +79,8 @@ struct RISCVCPUConfig {
bool ext_smcntrpmf;
bool ext_smcsrind;
bool ext_sscsrind;
+ bool ext_smcdeleg;
+ bool ext_ssccfg;
bool ext_svadu;
bool ext_svinval;
bool ext_svnapot;
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 18b9ae802b15..4ae3931f0ada 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -287,6 +287,17 @@ static RISCVException aia_any32(CPURISCVState *env, int csrno)
return any32(env, csrno);
}
+static RISCVException csrind_any(CPURISCVState *env, int csrno)
+{
+ RISCVCPU *cpu = env_archcpu(env);
+
+ if (!cpu->cfg.ext_smcsrind) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException csrind_or_aia_any(CPURISCVState *env, int csrno)
{
if (!riscv_cpu_cfg(env)->ext_smaia && !riscv_cpu_cfg(env)->ext_smcsrind) {
@@ -370,6 +381,15 @@ static bool csrind_or_aia_extensions_present(CPURISCVState *env)
return csrind_extensions_present(env) || aia_extensions_present(env);
}
+static RISCVException csrind_smode(CPURISCVState *env, int csrno)
+{
+ if (!csrind_extensions_present(env)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return smode(env, csrno);
+}
+
static RISCVException csrind_or_aia_smode(CPURISCVState *env, int csrno)
{
if (!csrind_or_aia_extensions_present(env)) {
@@ -398,6 +418,15 @@ static RISCVException hmode32(CPURISCVState *env, int csrno)
}
+static RISCVException csrind_hmode(CPURISCVState *env, int csrno)
+{
+ if (!csrind_extensions_present(env)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return hmode(env, csrno);
+}
+
static RISCVException csrind_or_aia_hmode(CPURISCVState *env, int csrno)
{
if (!csrind_or_aia_extensions_present(env)) {
@@ -2027,7 +2056,12 @@ static int csrind_xlate_vs_csrno(CPURISCVState *env, int csrno)
case CSR_SISELECT:
return CSR_VSISELECT;
case CSR_SIREG:
- return CSR_VSIREG;
+ case CSR_SIREG2:
+ case CSR_SIREG3:
+ case CSR_SIREG4:
+ case CSR_SIREG5:
+ case CSR_SIREG6:
+ return CSR_VSIREG + (csrno - CSR_SIREG);
default:
return csrno;
};
@@ -2067,7 +2101,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
*val = *iselect;
}
- wr_mask &= ISELECT_MASK;
+ if (riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind) {
+ wr_mask &= ISELECT_MASK_SXCSRIND;
+ } else {
+ wr_mask &= ISELECT_MASK_AIA;
+ }
+
if (wr_mask) {
*iselect = (*iselect & ~wr_mask) | (new_val & wr_mask);
}
@@ -2206,6 +2245,59 @@ done:
return RISCV_EXCP_NONE;
}
+/*
+ * rmw_xireg_csrind: Perform indirect access to xireg and xireg2-xireg6
+ *
+ * Perform indirect access to xireg and xireg2-xireg6.
+ * This is a generic interface for all xireg CSRs. Apart from AIA, all other
+ * extension using csrind should be implemented here.
+ */
+static int rmw_xireg_csrind(CPURISCVState *env, int csrno,
+ target_ulong isel, target_ulong *val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ return -EINVAL;
+}
+
+static int rmw_xiregi(CPURISCVState *env, int csrno, target_ulong *val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ bool virt = false;
+ int ret = -EINVAL;
+ target_ulong isel;
+
+ ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
+ if (ret != RISCV_EXCP_NONE) {
+ return ret;
+ }
+
+ /* Translate CSR number for VS-mode */
+ csrno = csrind_xlate_vs_csrno(env, csrno);
+
+ if (CSR_MIREG <= csrno && csrno <= CSR_MIREG6 &&
+ csrno != CSR_MIREG4 - 1) {
+ isel = env->miselect;
+ } else if (CSR_SIREG <= csrno && csrno <= CSR_SIREG6 &&
+ csrno != CSR_SIREG4 - 1) {
+ isel = env->siselect;
+ } else if (CSR_VSIREG <= csrno && csrno <= CSR_VSIREG6 &&
+ csrno != CSR_VSIREG4 - 1) {
+ isel = env->vsiselect;
+ virt = true;
+ } else {
+ goto done;
+ }
+
+ return rmw_xireg_csrind(env, csrno, isel, val, new_val, wr_mask);
+
+done:
+ if (ret) {
+ return (env->virt_enabled && virt) ?
+ RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+ }
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
target_ulong *val, target_ulong new_val,
target_ulong wr_mask)
@@ -2238,8 +2330,21 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
goto done;
};
+ /*
+ * Use the xiselect range to determine actual op on xireg.
+ *
+ * Since we only checked the existence of AIA or Indirect Access in the
+ * predicate, we should check the existence of the exact extension when
+ * we get to a specific range and return illegal instruction exception even
+ * in VS-mode.
+ */
if (xiselect_aia_range(isel)) {
return rmw_xireg_aia(env, csrno, isel, val, new_val, wr_mask);
+ } else if (riscv_cpu_cfg(env)->ext_smcsrind ||
+ riscv_cpu_cfg(env)->ext_sscsrind) {
+ return rmw_xireg_csrind(env, csrno, isel, val, new_val, wr_mask);
+ } else {
+ return RISCV_EXCP_ILLEGAL_INST;
}
done:
@@ -2671,7 +2776,7 @@ static RISCVException write_mstateen0(CPURISCVState *env, int csrno,
* TODO: Do we need to check ssaia as well ? Can we enable ssaia without
* smaia ?
*/
- if (riscv_cpu_cfg(env)->ext_smaia) {
+ if (riscv_cpu_cfg(env)->ext_smaia || riscv_cpu_cfg(env)->ext_smcsrind) {
wr_mask |= SMSTATEEN0_SVSLCT;
}
@@ -2764,7 +2869,7 @@ static RISCVException write_hstateen0(CPURISCVState *env, int csrno,
wr_mask |= SMSTATEEN0_FCSR;
}
- if (riscv_cpu_cfg(env)->ext_ssaia) {
+ if (riscv_cpu_cfg(env)->ext_ssaia || riscv_cpu_cfg(env)->ext_sscsrind) {
wr_mask |= SMSTATEEN0_SVSLCT;
}
@@ -5180,6 +5285,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_MIREG] = { "mireg", csrind_or_aia_any, NULL, NULL,
rmw_xireg },
+ /* Machine Indirect Register Alias */
+ [CSR_MIREG2] = { "mireg2", csrind_any, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_MIREG3] = { "mireg3", csrind_any, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_MIREG4] = { "mireg4", csrind_any, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_MIREG5] = { "mireg5", csrind_any, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_MIREG6] = { "mireg6", csrind_any, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+
/* Machine-Level Interrupts (AIA) */
[CSR_MTOPEI] = { "mtopei", aia_any, NULL, NULL, rmw_xtopei },
[CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
@@ -5301,6 +5418,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_SIREG] = { "sireg", csrind_or_aia_smode, NULL, NULL,
rmw_xireg },
+ /* Supervisor Indirect Register Alias */
+ [CSR_SIREG2] = { "sireg2", csrind_smode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_SIREG3] = { "sireg3", csrind_smode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_SIREG4] = { "sireg4", csrind_smode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_SIREG5] = { "sireg5", csrind_smode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_SIREG6] = { "sireg6", csrind_smode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+
/* Supervisor-Level Interrupts (AIA) */
[CSR_STOPEI] = { "stopei", aia_smode, NULL, NULL, rmw_xtopei },
[CSR_STOPI] = { "stopi", aia_smode, read_stopi },
@@ -5383,6 +5512,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
[CSR_VSIREG] = { "vsireg", csrind_or_aia_hmode, NULL, NULL,
rmw_xireg },
+ /* Virtual Supervisor Indirect Alias */
+ [CSR_VSIREG2] = { "vsireg2", csrind_hmode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_VSIREG3] = { "vsireg3", csrind_hmode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_VSIREG4] = { "vsireg4", csrind_hmode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_VSIREG5] = { "vsireg5", csrind_hmode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+ [CSR_VSIREG6] = { "vsireg6", csrind_hmode, NULL, NULL, rmw_xiregi,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+
/* VS-Level Interrupts (H-extension with AIA) */
[CSR_VSTOPEI] = { "vstopei", aia_hmode, NULL, NULL, rmw_xtopei },
[CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
--
2.34.1
* Re: [PATCH v2 04/13] target/riscv: Support generic CSR indirect access
2024-07-23 23:30 ` [PATCH v2 04/13] target/riscv: Support generic CSR indirect access Atish Patra
@ 2024-08-06 0:15 ` Alistair Francis
2024-08-07 7:50 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 0:15 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> This adds the indirect access registers required by sscsrind/smcsrind
> and the operations on them. Note that xiselect and xireg are used for
> both AIA and sxcsrind, and the behavior of accessing them depends on
> whether each extension is enabled and the value stored in xiselect.
>
> Co-developed-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> ---
> target/riscv/cpu.c | 2 +
> target/riscv/cpu_bits.h | 28 ++++++++-
> target/riscv/cpu_cfg.h | 2 +
> target/riscv/csr.c | 149 ++++++++++++++++++++++++++++++++++++++++++++++--
> 4 files changed, 175 insertions(+), 6 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index ebc19090b40d..ac2dce734d80 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -182,12 +182,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
> ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
> ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> + ISA_EXT_DATA_ENTRY(smcdeleg, PRIV_VERSION_1_12_0, ext_smcdeleg),
> ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
> ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
> ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
> ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
> ISA_EXT_DATA_ENTRY(ssccptr, PRIV_VERSION_1_11_0, has_priv_1_11),
> + ISA_EXT_DATA_ENTRY(ssccfg, PRIV_VERSION_1_12_0, ext_ssccfg),
Looks like these are in the wrong patch
> ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
> ISA_EXT_DATA_ENTRY(sscounterenw, PRIV_VERSION_1_12_0, has_priv_1_12),
> ISA_EXT_DATA_ENTRY(sscsrind, PRIV_VERSION_1_12_0, ext_sscsrind),
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 32b068f18aa5..2a8b53a6622e 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -170,6 +170,13 @@
> #define CSR_MISELECT 0x350
> #define CSR_MIREG 0x351
>
> +/* Machine Indirect Register Alias */
> +#define CSR_MIREG2 0x352
> +#define CSR_MIREG3 0x353
> +#define CSR_MIREG4 0x355
> +#define CSR_MIREG5 0x356
> +#define CSR_MIREG6 0x357
> +
> /* Machine-Level Interrupts (AIA) */
> #define CSR_MTOPEI 0x35c
> #define CSR_MTOPI 0xfb0
> @@ -219,6 +226,13 @@
> #define CSR_SISELECT 0x150
> #define CSR_SIREG 0x151
>
> +/* Supervisor Indirect Register Alias */
> +#define CSR_SIREG2 0x152
> +#define CSR_SIREG3 0x153
> +#define CSR_SIREG4 0x155
> +#define CSR_SIREG5 0x156
> +#define CSR_SIREG6 0x157
> +
> /* Supervisor-Level Interrupts (AIA) */
> #define CSR_STOPEI 0x15c
> #define CSR_STOPI 0xdb0
> @@ -285,6 +299,13 @@
> #define CSR_VSISELECT 0x250
> #define CSR_VSIREG 0x251
>
> +/* Virtual Supervisor Indirect Alias */
> +#define CSR_VSIREG2 0x252
> +#define CSR_VSIREG3 0x253
> +#define CSR_VSIREG4 0x255
> +#define CSR_VSIREG5 0x256
> +#define CSR_VSIREG6 0x257
> +
> /* VS-Level Interrupts (H-extension with AIA) */
> #define CSR_VSTOPEI 0x25c
> #define CSR_VSTOPI 0xeb0
> @@ -846,10 +867,13 @@ typedef enum RISCVException {
> #define ISELECT_IMSIC_EIE63 0xff
> #define ISELECT_IMSIC_FIRST ISELECT_IMSIC_EIDELIVERY
> #define ISELECT_IMSIC_LAST ISELECT_IMSIC_EIE63
> -#define ISELECT_MASK 0x1ff
> +#define ISELECT_MASK_AIA 0x1ff
> +
> +/* MISELECT, SISELECT, and VSISELECT bits (AIA) */
> +#define ISELECT_MASK_SXCSRIND 0xfff
>
> /* Dummy [M|S|VS]ISELECT value for emulating [M|S|VS]TOPEI CSRs */
> -#define ISELECT_IMSIC_TOPEI (ISELECT_MASK + 1)
> +#define ISELECT_IMSIC_TOPEI (ISELECT_MASK_AIA + 1)
>
> /* IMSIC bits (AIA) */
> #define IMSIC_TOPEI_IID_SHIFT 16
> diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> index b8a5174bc871..b2be3d47c7a3 100644
> --- a/target/riscv/cpu_cfg.h
> +++ b/target/riscv/cpu_cfg.h
> @@ -79,6 +79,8 @@ struct RISCVCPUConfig {
> bool ext_smcntrpmf;
> bool ext_smcsrind;
> bool ext_sscsrind;
> + bool ext_smcdeleg;
> + bool ext_ssccfg;
Same here
Alistair
> bool ext_svadu;
> bool ext_svinval;
> bool ext_svnapot;
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 18b9ae802b15..4ae3931f0ada 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -287,6 +287,17 @@ static RISCVException aia_any32(CPURISCVState *env, int csrno)
> return any32(env, csrno);
> }
>
> +static RISCVException csrind_any(CPURISCVState *env, int csrno)
> +{
> + RISCVCPU *cpu = env_archcpu(env);
> +
> + if (!cpu->cfg.ext_smcsrind) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException csrind_or_aia_any(CPURISCVState *env, int csrno)
> {
> if (!riscv_cpu_cfg(env)->ext_smaia && !riscv_cpu_cfg(env)->ext_smcsrind) {
> @@ -370,6 +381,15 @@ static bool csrind_or_aia_extensions_present(CPURISCVState *env)
> return csrind_extensions_present(env) || aia_extensions_present(env);
> }
>
> +static RISCVException csrind_smode(CPURISCVState *env, int csrno)
> +{
> + if (!csrind_extensions_present(env)) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return smode(env, csrno);
> +}
> +
> static RISCVException csrind_or_aia_smode(CPURISCVState *env, int csrno)
> {
> if (!csrind_or_aia_extensions_present(env)) {
> @@ -398,6 +418,15 @@ static RISCVException hmode32(CPURISCVState *env, int csrno)
>
> }
>
> +static RISCVException csrind_hmode(CPURISCVState *env, int csrno)
> +{
> + if (!csrind_extensions_present(env)) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return hmode(env, csrno);
> +}
> +
> static RISCVException csrind_or_aia_hmode(CPURISCVState *env, int csrno)
> {
> if (!csrind_or_aia_extensions_present(env)) {
> @@ -2027,7 +2056,12 @@ static int csrind_xlate_vs_csrno(CPURISCVState *env, int csrno)
> case CSR_SISELECT:
> return CSR_VSISELECT;
> case CSR_SIREG:
> - return CSR_VSIREG;
> + case CSR_SIREG2:
> + case CSR_SIREG3:
> + case CSR_SIREG4:
> + case CSR_SIREG5:
> + case CSR_SIREG6:
> + return CSR_VSIREG + (csrno - CSR_SIREG);
> default:
> return csrno;
> };
> @@ -2067,7 +2101,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> *val = *iselect;
> }
>
> - wr_mask &= ISELECT_MASK;
> + if (riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind) {
> + wr_mask &= ISELECT_MASK_SXCSRIND;
> + } else {
> + wr_mask &= ISELECT_MASK_AIA;
> + }
> +
> if (wr_mask) {
> *iselect = (*iselect & ~wr_mask) | (new_val & wr_mask);
> }
> @@ -2206,6 +2245,59 @@ done:
> return RISCV_EXCP_NONE;
> }
>
> +/*
> + * rmw_xireg_csrind: Perform indirect access to xireg and xireg2-xireg6
> + *
> + * Perform indirect access to xireg and xireg2-xireg6.
> + * This is a generic interface for all xireg CSRs. Apart from AIA, all other
> + * extensions using csrind should be implemented here.
> + */
> +static int rmw_xireg_csrind(CPURISCVState *env, int csrno,
> + target_ulong isel, target_ulong *val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + return -EINVAL;
> +}
> +
> +static int rmw_xiregi(CPURISCVState *env, int csrno, target_ulong *val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + bool virt = false;
> + int ret = -EINVAL;
> + target_ulong isel;
> +
> + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
> + if (ret != RISCV_EXCP_NONE) {
> + return ret;
> + }
> +
> + /* Translate CSR number for VS-mode */
> + csrno = csrind_xlate_vs_csrno(env, csrno);
> +
> + if (CSR_MIREG <= csrno && csrno <= CSR_MIREG6 &&
> + csrno != CSR_MIREG4 - 1) {
> + isel = env->miselect;
> + } else if (CSR_SIREG <= csrno && csrno <= CSR_SIREG6 &&
> + csrno != CSR_SIREG4 - 1) {
> + isel = env->siselect;
> + } else if (CSR_VSIREG <= csrno && csrno <= CSR_VSIREG6 &&
> + csrno != CSR_VSIREG4 - 1) {
> + isel = env->vsiselect;
> + virt = true;
> + } else {
> + goto done;
> + }
> +
> + return rmw_xireg_csrind(env, csrno, isel, val, new_val, wr_mask);
> +
> +done:
> + if (ret) {
> + return (env->virt_enabled && virt) ?
> + RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
> + }
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> target_ulong *val, target_ulong new_val,
> target_ulong wr_mask)
> @@ -2238,8 +2330,21 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> goto done;
> };
>
> + /*
> + * Use the xiselect range to determine actual op on xireg.
> + *
> + * Since we only checked the existence of AIA or Indirect Access in the
> + * predicate, we should check the existence of the exact extension when
> + * we get to a specific range and return illegal instruction exception even
> + * in VS-mode.
> + */
> if (xiselect_aia_range(isel)) {
> return rmw_xireg_aia(env, csrno, isel, val, new_val, wr_mask);
> + } else if (riscv_cpu_cfg(env)->ext_smcsrind ||
> + riscv_cpu_cfg(env)->ext_sscsrind) {
> + return rmw_xireg_csrind(env, csrno, isel, val, new_val, wr_mask);
> + } else {
> + return RISCV_EXCP_ILLEGAL_INST;
> }
>
> done:
> @@ -2671,7 +2776,7 @@ static RISCVException write_mstateen0(CPURISCVState *env, int csrno,
> * TODO: Do we need to check ssaia as well ? Can we enable ssaia without
> * smaia ?
> */
> - if (riscv_cpu_cfg(env)->ext_smaia) {
> + if (riscv_cpu_cfg(env)->ext_smaia || riscv_cpu_cfg(env)->ext_smcsrind) {
> wr_mask |= SMSTATEEN0_SVSLCT;
> }
>
> @@ -2764,7 +2869,7 @@ static RISCVException write_hstateen0(CPURISCVState *env, int csrno,
> wr_mask |= SMSTATEEN0_FCSR;
> }
>
> - if (riscv_cpu_cfg(env)->ext_ssaia) {
> + if (riscv_cpu_cfg(env)->ext_ssaia || riscv_cpu_cfg(env)->ext_sscsrind) {
> wr_mask |= SMSTATEEN0_SVSLCT;
> }
>
> @@ -5180,6 +5285,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_MIREG] = { "mireg", csrind_or_aia_any, NULL, NULL,
> rmw_xireg },
>
> + /* Machine Indirect Register Alias */
> + [CSR_MIREG2] = { "mireg2", csrind_any, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_MIREG3] = { "mireg3", csrind_any, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_MIREG4] = { "mireg4", csrind_any, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_MIREG5] = { "mireg5", csrind_any, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_MIREG6] = { "mireg6", csrind_any, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> +
> /* Machine-Level Interrupts (AIA) */
> [CSR_MTOPEI] = { "mtopei", aia_any, NULL, NULL, rmw_xtopei },
> [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
> @@ -5301,6 +5418,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_SIREG] = { "sireg", csrind_or_aia_smode, NULL, NULL,
> rmw_xireg },
>
> + /* Supervisor Indirect Register Alias */
> + [CSR_SIREG2] = { "sireg2", csrind_smode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_SIREG3] = { "sireg3", csrind_smode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_SIREG4] = { "sireg4", csrind_smode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_SIREG5] = { "sireg5", csrind_smode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_SIREG6] = { "sireg6", csrind_smode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> +
> /* Supervisor-Level Interrupts (AIA) */
> [CSR_STOPEI] = { "stopei", aia_smode, NULL, NULL, rmw_xtopei },
> [CSR_STOPI] = { "stopi", aia_smode, read_stopi },
> @@ -5383,6 +5512,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> [CSR_VSIREG] = { "vsireg", csrind_or_aia_hmode, NULL, NULL,
> rmw_xireg },
>
> + /* Virtual Supervisor Indirect Alias */
> + [CSR_VSIREG2] = { "vsireg2", csrind_hmode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_VSIREG3] = { "vsireg3", csrind_hmode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_VSIREG4] = { "vsireg4", csrind_hmode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_VSIREG5] = { "vsireg5", csrind_hmode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> + [CSR_VSIREG6] = { "vsireg6", csrind_hmode, NULL, NULL, rmw_xiregi,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> +
> /* VS-Level Interrupts (H-extension with AIA) */
> [CSR_VSTOPEI] = { "vstopei", aia_hmode, NULL, NULL, rmw_xtopei },
> [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread* Re: [PATCH v2 04/13] target/riscv: Support generic CSR indirect access
2024-08-06 0:15 ` Alistair Francis
@ 2024-08-07 7:50 ` Atish Kumar Patra
0 siblings, 0 replies; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-07 7:50 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Mon, Aug 5, 2024 at 5:15 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > From: Kaiwen Xue <kaiwenx@rivosinc.com>
> >
> > This adds the indirect access registers required by sscsrind/smcsrind
> > and the operations on them. Note that xiselect and xireg are used for
> > both AIA and sxcsrind, and the behavior of accessing them depends on
> > whether each extension is enabled and the value stored in xiselect.
> >
> > Co-developed-by: Atish Patra <atishp@rivosinc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> > ---
> > target/riscv/cpu.c | 2 +
> > target/riscv/cpu_bits.h | 28 ++++++++-
> > target/riscv/cpu_cfg.h | 2 +
> > target/riscv/csr.c | 149 ++++++++++++++++++++++++++++++++++++++++++++++--
> > 4 files changed, 175 insertions(+), 6 deletions(-)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index ebc19090b40d..ac2dce734d80 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -182,12 +182,14 @@ const RISCVIsaExtData isa_edata_arr[] = {
> > ISA_EXT_DATA_ENTRY(zhinx, PRIV_VERSION_1_12_0, ext_zhinx),
> > ISA_EXT_DATA_ENTRY(zhinxmin, PRIV_VERSION_1_12_0, ext_zhinxmin),
> > ISA_EXT_DATA_ENTRY(smaia, PRIV_VERSION_1_12_0, ext_smaia),
> > + ISA_EXT_DATA_ENTRY(smcdeleg, PRIV_VERSION_1_12_0, ext_smcdeleg),
> > ISA_EXT_DATA_ENTRY(smcntrpmf, PRIV_VERSION_1_12_0, ext_smcntrpmf),
> > ISA_EXT_DATA_ENTRY(smcsrind, PRIV_VERSION_1_12_0, ext_smcsrind),
> > ISA_EXT_DATA_ENTRY(smepmp, PRIV_VERSION_1_12_0, ext_smepmp),
> > ISA_EXT_DATA_ENTRY(smstateen, PRIV_VERSION_1_12_0, ext_smstateen),
> > ISA_EXT_DATA_ENTRY(ssaia, PRIV_VERSION_1_12_0, ext_ssaia),
> > ISA_EXT_DATA_ENTRY(ssccptr, PRIV_VERSION_1_11_0, has_priv_1_11),
> > + ISA_EXT_DATA_ENTRY(ssccfg, PRIV_VERSION_1_12_0, ext_ssccfg),
>
> Looks like these are in the wrong patch
>
Yes. Thanks for pointing it out. I will fix these in the next version.
> > ISA_EXT_DATA_ENTRY(sscofpmf, PRIV_VERSION_1_12_0, ext_sscofpmf),
> > ISA_EXT_DATA_ENTRY(sscounterenw, PRIV_VERSION_1_12_0, has_priv_1_12),
> > ISA_EXT_DATA_ENTRY(sscsrind, PRIV_VERSION_1_12_0, ext_sscsrind),
> > diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> > index 32b068f18aa5..2a8b53a6622e 100644
> > --- a/target/riscv/cpu_bits.h
> > +++ b/target/riscv/cpu_bits.h
> > @@ -170,6 +170,13 @@
> > #define CSR_MISELECT 0x350
> > #define CSR_MIREG 0x351
> >
> > +/* Machine Indirect Register Alias */
> > +#define CSR_MIREG2 0x352
> > +#define CSR_MIREG3 0x353
> > +#define CSR_MIREG4 0x355
> > +#define CSR_MIREG5 0x356
> > +#define CSR_MIREG6 0x357
> > +
> > /* Machine-Level Interrupts (AIA) */
> > #define CSR_MTOPEI 0x35c
> > #define CSR_MTOPI 0xfb0
> > @@ -219,6 +226,13 @@
> > #define CSR_SISELECT 0x150
> > #define CSR_SIREG 0x151
> >
> > +/* Supervisor Indirect Register Alias */
> > +#define CSR_SIREG2 0x152
> > +#define CSR_SIREG3 0x153
> > +#define CSR_SIREG4 0x155
> > +#define CSR_SIREG5 0x156
> > +#define CSR_SIREG6 0x157
> > +
> > /* Supervisor-Level Interrupts (AIA) */
> > #define CSR_STOPEI 0x15c
> > #define CSR_STOPI 0xdb0
> > @@ -285,6 +299,13 @@
> > #define CSR_VSISELECT 0x250
> > #define CSR_VSIREG 0x251
> >
> > +/* Virtual Supervisor Indirect Alias */
> > +#define CSR_VSIREG2 0x252
> > +#define CSR_VSIREG3 0x253
> > +#define CSR_VSIREG4 0x255
> > +#define CSR_VSIREG5 0x256
> > +#define CSR_VSIREG6 0x257
> > +
> > /* VS-Level Interrupts (H-extension with AIA) */
> > #define CSR_VSTOPEI 0x25c
> > #define CSR_VSTOPI 0xeb0
> > @@ -846,10 +867,13 @@ typedef enum RISCVException {
> > #define ISELECT_IMSIC_EIE63 0xff
> > #define ISELECT_IMSIC_FIRST ISELECT_IMSIC_EIDELIVERY
> > #define ISELECT_IMSIC_LAST ISELECT_IMSIC_EIE63
> > -#define ISELECT_MASK 0x1ff
> > +#define ISELECT_MASK_AIA 0x1ff
> > +
> > +/* MISELECT, SISELECT, and VSISELECT bits (AIA) */
> > +#define ISELECT_MASK_SXCSRIND 0xfff
> >
> > /* Dummy [M|S|VS]ISELECT value for emulating [M|S|VS]TOPEI CSRs */
> > -#define ISELECT_IMSIC_TOPEI (ISELECT_MASK + 1)
> > +#define ISELECT_IMSIC_TOPEI (ISELECT_MASK_AIA + 1)
> >
> > /* IMSIC bits (AIA) */
> > #define IMSIC_TOPEI_IID_SHIFT 16
> > diff --git a/target/riscv/cpu_cfg.h b/target/riscv/cpu_cfg.h
> > index b8a5174bc871..b2be3d47c7a3 100644
> > --- a/target/riscv/cpu_cfg.h
> > +++ b/target/riscv/cpu_cfg.h
> > @@ -79,6 +79,8 @@ struct RISCVCPUConfig {
> > bool ext_smcntrpmf;
> > bool ext_smcsrind;
> > bool ext_sscsrind;
> > + bool ext_smcdeleg;
> > + bool ext_ssccfg;
>
> Same here
>
> Alistair
>
> > bool ext_svadu;
> > bool ext_svinval;
> > bool ext_svnapot;
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index 18b9ae802b15..4ae3931f0ada 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -287,6 +287,17 @@ static RISCVException aia_any32(CPURISCVState *env, int csrno)
> > return any32(env, csrno);
> > }
> >
> > +static RISCVException csrind_any(CPURISCVState *env, int csrno)
> > +{
> > + RISCVCPU *cpu = env_archcpu(env);
> > +
> > + if (!cpu->cfg.ext_smcsrind) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + return RISCV_EXCP_NONE;
> > +}
> > +
> > static RISCVException csrind_or_aia_any(CPURISCVState *env, int csrno)
> > {
> > if (!riscv_cpu_cfg(env)->ext_smaia && !riscv_cpu_cfg(env)->ext_smcsrind) {
> > @@ -370,6 +381,15 @@ static bool csrind_or_aia_extensions_present(CPURISCVState *env)
> > return csrind_extensions_present(env) || aia_extensions_present(env);
> > }
> >
> > +static RISCVException csrind_smode(CPURISCVState *env, int csrno)
> > +{
> > + if (!csrind_extensions_present(env)) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + return smode(env, csrno);
> > +}
> > +
> > static RISCVException csrind_or_aia_smode(CPURISCVState *env, int csrno)
> > {
> > if (!csrind_or_aia_extensions_present(env)) {
> > @@ -398,6 +418,15 @@ static RISCVException hmode32(CPURISCVState *env, int csrno)
> >
> > }
> >
> > +static RISCVException csrind_hmode(CPURISCVState *env, int csrno)
> > +{
> > + if (!csrind_extensions_present(env)) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + return hmode(env, csrno);
> > +}
> > +
> > static RISCVException csrind_or_aia_hmode(CPURISCVState *env, int csrno)
> > {
> > if (!csrind_or_aia_extensions_present(env)) {
> > @@ -2027,7 +2056,12 @@ static int csrind_xlate_vs_csrno(CPURISCVState *env, int csrno)
> > case CSR_SISELECT:
> > return CSR_VSISELECT;
> > case CSR_SIREG:
> > - return CSR_VSIREG;
> > + case CSR_SIREG2:
> > + case CSR_SIREG3:
> > + case CSR_SIREG4:
> > + case CSR_SIREG5:
> > + case CSR_SIREG6:
> > + return CSR_VSIREG + (csrno - CSR_SIREG);
> > default:
> > return csrno;
> > };
> > @@ -2067,7 +2101,12 @@ static RISCVException rmw_xiselect(CPURISCVState *env, int csrno,
> > *val = *iselect;
> > }
> >
> > - wr_mask &= ISELECT_MASK;
> > + if (riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind) {
> > + wr_mask &= ISELECT_MASK_SXCSRIND;
> > + } else {
> > + wr_mask &= ISELECT_MASK_AIA;
> > + }
> > +
> > if (wr_mask) {
> > *iselect = (*iselect & ~wr_mask) | (new_val & wr_mask);
> > }
> > @@ -2206,6 +2245,59 @@ done:
> > return RISCV_EXCP_NONE;
> > }
> >
> > +/*
> > + * rmw_xireg_csrind: Perform indirect access to xireg and xireg2-xireg6
> > + *
> > + * Perform indirect access to xireg and xireg2-xireg6.
> > + * This is a generic interface for all xireg CSRs. Apart from AIA, all other
> > + * extensions using csrind should be implemented here.
> > + */
> > +static int rmw_xireg_csrind(CPURISCVState *env, int csrno,
> > + target_ulong isel, target_ulong *val,
> > + target_ulong new_val, target_ulong wr_mask)
> > +{
> > + return -EINVAL;
> > +}
> > +
> > +static int rmw_xiregi(CPURISCVState *env, int csrno, target_ulong *val,
> > + target_ulong new_val, target_ulong wr_mask)
> > +{
> > + bool virt = false;
> > + int ret = -EINVAL;
> > + target_ulong isel;
> > +
> > + ret = smstateen_acc_ok(env, 0, SMSTATEEN0_SVSLCT);
> > + if (ret != RISCV_EXCP_NONE) {
> > + return ret;
> > + }
> > +
> > + /* Translate CSR number for VS-mode */
> > + csrno = csrind_xlate_vs_csrno(env, csrno);
> > +
> > + if (CSR_MIREG <= csrno && csrno <= CSR_MIREG6 &&
> > + csrno != CSR_MIREG4 - 1) {
> > + isel = env->miselect;
> > + } else if (CSR_SIREG <= csrno && csrno <= CSR_SIREG6 &&
> > + csrno != CSR_SIREG4 - 1) {
> > + isel = env->siselect;
> > + } else if (CSR_VSIREG <= csrno && csrno <= CSR_VSIREG6 &&
> > + csrno != CSR_VSIREG4 - 1) {
> > + isel = env->vsiselect;
> > + virt = true;
> > + } else {
> > + goto done;
> > + }
> > +
> > + return rmw_xireg_csrind(env, csrno, isel, val, new_val, wr_mask);
> > +
> > +done:
> > + if (ret) {
> > + return (env->virt_enabled && virt) ?
> > + RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
> > + }
> > + return RISCV_EXCP_NONE;
> > +}
> > +
> > static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> > target_ulong *val, target_ulong new_val,
> > target_ulong wr_mask)
> > @@ -2238,8 +2330,21 @@ static RISCVException rmw_xireg(CPURISCVState *env, int csrno,
> > goto done;
> > };
> >
> > + /*
> > + * Use the xiselect range to determine actual op on xireg.
> > + *
> > + * Since we only checked the existence of AIA or Indirect Access in the
> > + * predicate, we should check the existence of the exact extension when
> > + * we get to a specific range and return illegal instruction exception even
> > + * in VS-mode.
> > + */
> > if (xiselect_aia_range(isel)) {
> > return rmw_xireg_aia(env, csrno, isel, val, new_val, wr_mask);
> > + } else if (riscv_cpu_cfg(env)->ext_smcsrind ||
> > + riscv_cpu_cfg(env)->ext_sscsrind) {
> > + return rmw_xireg_csrind(env, csrno, isel, val, new_val, wr_mask);
> > + } else {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > }
> >
> > done:
> > @@ -2671,7 +2776,7 @@ static RISCVException write_mstateen0(CPURISCVState *env, int csrno,
> > * TODO: Do we need to check ssaia as well ? Can we enable ssaia without
> > * smaia ?
> > */
> > - if (riscv_cpu_cfg(env)->ext_smaia) {
> > + if (riscv_cpu_cfg(env)->ext_smaia || riscv_cpu_cfg(env)->ext_smcsrind) {
> > wr_mask |= SMSTATEEN0_SVSLCT;
> > }
> >
> > @@ -2764,7 +2869,7 @@ static RISCVException write_hstateen0(CPURISCVState *env, int csrno,
> > wr_mask |= SMSTATEEN0_FCSR;
> > }
> >
> > - if (riscv_cpu_cfg(env)->ext_ssaia) {
> > + if (riscv_cpu_cfg(env)->ext_ssaia || riscv_cpu_cfg(env)->ext_sscsrind) {
> > wr_mask |= SMSTATEEN0_SVSLCT;
> > }
> >
> > @@ -5180,6 +5285,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> > [CSR_MIREG] = { "mireg", csrind_or_aia_any, NULL, NULL,
> > rmw_xireg },
> >
> > + /* Machine Indirect Register Alias */
> > + [CSR_MIREG2] = { "mireg2", csrind_any, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_MIREG3] = { "mireg3", csrind_any, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_MIREG4] = { "mireg4", csrind_any, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_MIREG5] = { "mireg5", csrind_any, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_MIREG6] = { "mireg6", csrind_any, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > +
> > /* Machine-Level Interrupts (AIA) */
> > [CSR_MTOPEI] = { "mtopei", aia_any, NULL, NULL, rmw_xtopei },
> > [CSR_MTOPI] = { "mtopi", aia_any, read_mtopi },
> > @@ -5301,6 +5418,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> > [CSR_SIREG] = { "sireg", csrind_or_aia_smode, NULL, NULL,
> > rmw_xireg },
> >
> > + /* Supervisor Indirect Register Alias */
> > + [CSR_SIREG2] = { "sireg2", csrind_smode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_SIREG3] = { "sireg3", csrind_smode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_SIREG4] = { "sireg4", csrind_smode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_SIREG5] = { "sireg5", csrind_smode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_SIREG6] = { "sireg6", csrind_smode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > +
> > /* Supervisor-Level Interrupts (AIA) */
> > [CSR_STOPEI] = { "stopei", aia_smode, NULL, NULL, rmw_xtopei },
> > [CSR_STOPI] = { "stopi", aia_smode, read_stopi },
> > @@ -5383,6 +5512,18 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> > [CSR_VSIREG] = { "vsireg", csrind_or_aia_hmode, NULL, NULL,
> > rmw_xireg },
> >
> > + /* Virtual Supervisor Indirect Alias */
> > + [CSR_VSIREG2] = { "vsireg2", csrind_hmode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_VSIREG3] = { "vsireg3", csrind_hmode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_VSIREG4] = { "vsireg4", csrind_hmode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_VSIREG5] = { "vsireg5", csrind_hmode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > + [CSR_VSIREG6] = { "vsireg6", csrind_hmode, NULL, NULL, rmw_xiregi,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > +
> > /* VS-Level Interrupts (H-extension with AIA) */
> > [CSR_VSTOPEI] = { "vstopei", aia_hmode, NULL, NULL, rmw_xtopei },
> > [CSR_VSTOPI] = { "vstopi", aia_hmode, read_vstopi },
> >
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 05/13] target/riscv: Add counter delegation definitions
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (3 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 04/13] target/riscv: Support generic CSR indirect access Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 0:36 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 06/13] target/riscv: Add select value range check for counter delegation Atish Patra
` (7 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
From: Kaiwen Xue <kaiwenx@rivosinc.com>
This adds definitions for counter delegation, including the new
scountinhibit register and the menvcfg.CDE bit.
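As a side note, the new CDE enable is an ordinary single-bit field in the 64-bit menvcfg value, so it can be exercised with plain mask arithmetic. A tiny standalone sketch -- the helper names below are invented (they are not QEMU's get_field() helper); only the (1ULL << 60) position comes from the hunk that follows:

#include <stdint.h>
#include <stdio.h>

#define MENVCFG_CDE (1ULL << 60)     /* counter-delegation enable, as below */

/* Invented helpers for illustration only */
static int cde_enabled(uint64_t menvcfg)
{
    return (menvcfg & MENVCFG_CDE) != 0;
}

static uint64_t set_cde(uint64_t menvcfg, int on)
{
    return on ? (menvcfg | MENVCFG_CDE) : (menvcfg & ~MENVCFG_CDE);
}

int main(void)
{
    uint64_t menvcfg = 0;

    menvcfg = set_cde(menvcfg, 1);
    printf("CDE=%d menvcfg=0x%016llx\n", cde_enabled(menvcfg),
           (unsigned long long)menvcfg);
    return 0;
}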
Signed-off-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
---
target/riscv/cpu.h | 1 +
target/riscv/cpu_bits.h | 8 +++++++-
target/riscv/machine.c | 1 +
3 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index 1619c3acb666..af25550a4a54 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -368,6 +368,7 @@ struct CPUArchState {
uint32_t scounteren;
uint32_t mcounteren;
+ uint32_t scountinhibit;
uint32_t mcountinhibit;
/* PMU cycle & instret privilege mode filtering */
diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
index 2a8b53a6622e..d20468412dca 100644
--- a/target/riscv/cpu_bits.h
+++ b/target/riscv/cpu_bits.h
@@ -207,6 +207,9 @@
#define CSR_SSTATEEN2 0x10E
#define CSR_SSTATEEN3 0x10F
+/* Supervisor Counter Delegation */
+#define CSR_SCOUNTINHIBIT 0x120
+
/* Supervisor Trap Handling */
#define CSR_SSCRATCH 0x140
#define CSR_SEPC 0x141
@@ -778,6 +781,7 @@ typedef enum RISCVException {
#define MENVCFG_CBIE (3UL << 4)
#define MENVCFG_CBCFE BIT(6)
#define MENVCFG_CBZE BIT(7)
+#define MENVCFG_CDE (1ULL << 60)
#define MENVCFG_ADUE (1ULL << 61)
#define MENVCFG_PBMTE (1ULL << 62)
#define MENVCFG_STCE (1ULL << 63)
@@ -869,7 +873,9 @@ typedef enum RISCVException {
#define ISELECT_IMSIC_LAST ISELECT_IMSIC_EIE63
#define ISELECT_MASK_AIA 0x1ff
-/* MISELECT, SISELECT, and VSISELECT bits (AIA) */
+/* [M|S|VS]ISELECT values for Indirect CSR Access Extension */
+#define ISELECT_CD_FIRST 0x40
+#define ISELECT_CD_LAST 0x5f
#define ISELECT_MASK_SXCSRIND 0xfff
/* Dummy [M|S|VS]ISELECT value for emulating [M|S|VS]TOPEI CSRs */
diff --git a/target/riscv/machine.c b/target/riscv/machine.c
index 492c2c6d9d79..1a2a68407852 100644
--- a/target/riscv/machine.c
+++ b/target/riscv/machine.c
@@ -398,6 +398,7 @@ const VMStateDescription vmstate_riscv_cpu = {
VMSTATE_UINTTL(env.siselect, RISCVCPU),
VMSTATE_UINT32(env.scounteren, RISCVCPU),
VMSTATE_UINT32(env.mcounteren, RISCVCPU),
+ VMSTATE_UINT32(env.scountinhibit, RISCVCPU),
VMSTATE_UINT32(env.mcountinhibit, RISCVCPU),
VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
vmstate_pmu_ctr_state, PMUCTRState),
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread* Re: [PATCH v2 05/13] target/riscv: Add counter delegation definitions
2024-07-23 23:30 ` [PATCH v2 05/13] target/riscv: Add counter delegation definitions Atish Patra
@ 2024-08-06 0:36 ` Alistair Francis
0 siblings, 0 replies; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 0:36 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:33 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> This adds definitions for counter delegation, including the new
> scountinhibit register and the menvcfg.CDE bit.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.h | 1 +
> target/riscv/cpu_bits.h | 8 +++++++-
> target/riscv/machine.c | 1 +
> 3 files changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 1619c3acb666..af25550a4a54 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -368,6 +368,7 @@ struct CPUArchState {
> uint32_t scounteren;
> uint32_t mcounteren;
>
> + uint32_t scountinhibit;
> uint32_t mcountinhibit;
>
> /* PMU cycle & instret privilege mode filtering */
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 2a8b53a6622e..d20468412dca 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -207,6 +207,9 @@
> #define CSR_SSTATEEN2 0x10E
> #define CSR_SSTATEEN3 0x10F
>
> +/* Supervisor Counter Delegation */
> +#define CSR_SCOUNTINHIBIT 0x120
> +
> /* Supervisor Trap Handling */
> #define CSR_SSCRATCH 0x140
> #define CSR_SEPC 0x141
> @@ -778,6 +781,7 @@ typedef enum RISCVException {
> #define MENVCFG_CBIE (3UL << 4)
> #define MENVCFG_CBCFE BIT(6)
> #define MENVCFG_CBZE BIT(7)
> +#define MENVCFG_CDE (1ULL << 60)
> #define MENVCFG_ADUE (1ULL << 61)
> #define MENVCFG_PBMTE (1ULL << 62)
> #define MENVCFG_STCE (1ULL << 63)
> @@ -869,7 +873,9 @@ typedef enum RISCVException {
> #define ISELECT_IMSIC_LAST ISELECT_IMSIC_EIE63
> #define ISELECT_MASK_AIA 0x1ff
>
> -/* MISELECT, SISELECT, and VSISELECT bits (AIA) */
> +/* [M|S|VS]ISELECT values for Indirect CSR Access Extension */
> +#define ISELECT_CD_FIRST 0x40
> +#define ISELECT_CD_LAST 0x5f
> #define ISELECT_MASK_SXCSRIND 0xfff
>
> /* Dummy [M|S|VS]ISELECT value for emulating [M|S|VS]TOPEI CSRs */
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index 492c2c6d9d79..1a2a68407852 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -398,6 +398,7 @@ const VMStateDescription vmstate_riscv_cpu = {
> VMSTATE_UINTTL(env.siselect, RISCVCPU),
> VMSTATE_UINT32(env.scounteren, RISCVCPU),
> VMSTATE_UINT32(env.mcounteren, RISCVCPU),
> + VMSTATE_UINT32(env.scountinhibit, RISCVCPU),
> VMSTATE_UINT32(env.mcountinhibit, RISCVCPU),
> VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
> vmstate_pmu_ctr_state, PMUCTRState),
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 06/13] target/riscv: Add select value range check for counter delegation
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (4 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 05/13] target/riscv: Add counter delegation definitions Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 0:41 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 07/13] target/riscv: Add counter delegation/configuration support Atish Patra
` (6 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
From: Kaiwen Xue <kaiwenx@rivosinc.com>
This adds checks in ops performed on xireg and xireg2-xireg6 so that the
counter delegation function will receive a valid xiselect value with the
proper extensions enabled.
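A standalone sketch of the range check being added: the bounds are the ISELECT_CD_FIRST/ISELECT_CD_LAST values from the previous patch, and the function mirrors xiselect_cd_range() below; the test harness and names are illustrative only.

#include <assert.h>
#include <stdbool.h>

#define ISELECT_CD_FIRST 0x40        /* values taken from patch 05 */
#define ISELECT_CD_LAST  0x5f

static bool in_cd_range(unsigned long isel)
{
    return ISELECT_CD_FIRST <= isel && isel <= ISELECT_CD_LAST;
}

int main(void)
{
    assert(!in_cd_range(0x3f));      /* just below the delegation window */
    assert(in_cd_range(0x40));       /* counter index 0 (cycle) */
    assert(in_cd_range(0x5f));       /* counter index 31 (hpmcounter31) */
    assert(!in_cd_range(0x60));      /* just above the window */
    return 0;
}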
Co-developed-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 36 +++++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index 4ae3931f0ada..da27ba1b7580 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -2120,6 +2120,11 @@ static bool xiselect_aia_range(target_ulong isel)
(ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST);
}
+static bool xiselect_cd_range(target_ulong isel)
+{
+ return (ISELECT_CD_FIRST <= isel && isel <= ISELECT_CD_LAST);
+}
+
static int rmw_iprio(target_ulong xlen,
target_ulong iselect, uint8_t *iprio,
target_ulong *val, target_ulong new_val,
@@ -2245,6 +2250,17 @@ done:
return RISCV_EXCP_NONE;
}
+static int rmw_xireg_cd(CPURISCVState *env, int csrno,
+ target_ulong isel, target_ulong *val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ if (!riscv_cpu_cfg(env)->ext_smcdeleg) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+ /* TODO: Implement the functionality later */
+ return RISCV_EXCP_NONE;
+}
+
/*
* rmw_xireg_csrind: Perform indirect access to xireg and xireg2-xireg6
*
@@ -2256,7 +2272,25 @@ static int rmw_xireg_csrind(CPURISCVState *env, int csrno,
target_ulong isel, target_ulong *val,
target_ulong new_val, target_ulong wr_mask)
{
- return -EINVAL;
+ int ret = -EINVAL;
+ bool virt = csrno == CSR_VSIREG ? true : false;
+
+ if (xiselect_cd_range(isel)) {
+ ret = rmw_xireg_cd(env, csrno, isel, val, new_val, wr_mask);
+ } else {
+ /*
+ * As per the specification, access to an unimplemented region is undefined,
+ * but the recommendation is to raise an illegal instruction exception.
+ */
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ if (ret) {
+ return (env->virt_enabled && virt) ?
+ RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ return RISCV_EXCP_NONE;
}
static int rmw_xiregi(CPURISCVState *env, int csrno, target_ulong *val,
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread* Re: [PATCH v2 06/13] target/riscv: Add select value range check for counter delegation
2024-07-23 23:30 ` [PATCH v2 06/13] target/riscv: Add select value range check for counter delegation Atish Patra
@ 2024-08-06 0:41 ` Alistair Francis
0 siblings, 0 replies; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 0:41 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:31 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> This adds checks in ops performed on xireg and xireg2-xireg6 so that the
> counter delegation function will receive a valid xiselect value with the
> proper extensions enabled.
>
> Co-developed-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/csr.c | 36 +++++++++++++++++++++++++++++++++++-
> 1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 4ae3931f0ada..da27ba1b7580 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -2120,6 +2120,11 @@ static bool xiselect_aia_range(target_ulong isel)
> (ISELECT_IMSIC_FIRST <= isel && isel <= ISELECT_IMSIC_LAST);
> }
>
> +static bool xiselect_cd_range(target_ulong isel)
> +{
> + return (ISELECT_CD_FIRST <= isel && isel <= ISELECT_CD_LAST);
> +}
> +
> static int rmw_iprio(target_ulong xlen,
> target_ulong iselect, uint8_t *iprio,
> target_ulong *val, target_ulong new_val,
> @@ -2245,6 +2250,17 @@ done:
> return RISCV_EXCP_NONE;
> }
>
> +static int rmw_xireg_cd(CPURISCVState *env, int csrno,
> + target_ulong isel, target_ulong *val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + if (!riscv_cpu_cfg(env)->ext_smcdeleg) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> + /* TODO: Implement the functionality later */
> + return RISCV_EXCP_NONE;
> +}
> +
> /*
> * rmw_xireg_csrind: Perform indirect access to xireg and xireg2-xireg6
> *
> @@ -2256,7 +2272,25 @@ static int rmw_xireg_csrind(CPURISCVState *env, int csrno,
> target_ulong isel, target_ulong *val,
> target_ulong new_val, target_ulong wr_mask)
> {
> - return -EINVAL;
> + int ret = -EINVAL;
> + bool virt = csrno == CSR_VSIREG ? true : false;
> +
> + if (xiselect_cd_range(isel)) {
> + ret = rmw_xireg_cd(env, csrno, isel, val, new_val, wr_mask);
> + } else {
> + /*
> + * As per the specification, access to an unimplemented region is undefined,
> + * but the recommendation is to raise an illegal instruction exception.
> + */
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + if (ret) {
> + return (env->virt_enabled && virt) ?
> + RISCV_EXCP_VIRT_INSTRUCTION_FAULT : RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + return RISCV_EXCP_NONE;
> }
>
> static int rmw_xiregi(CPURISCVState *env, int csrno, target_ulong *val,
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 07/13] target/riscv: Add counter delegation/configuration support
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (5 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 06/13] target/riscv: Add select value range check for counter delegation Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 1:21 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 08/13] target/riscv: Add configuration for S[m|s]csrind, Smcdeleg/Ssccfg Atish Patra
` (5 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
From: Kaiwen Xue <kaiwenx@rivosinc.com>
Smcdeleg/Ssccfg adds support for counter delegation via
S*indcsr and Ssccfg.
It also adds a new shadow CSR, scountinhibit, and a menvcfg enable bit (CDE)
to enable this extension and scountovf virtualization.
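The new rmw_cd_mhpmcounter*/mhpmevent* helpers below follow a simple wr_mask convention: wr_mask == 0 with a non-NULL val pointer means a pure read, an all-ones wr_mask means a full write, and partial masks are rejected with -EINVAL. A self-contained sketch of just that convention; the toy register and names are invented, not the patch's code:

#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t target_ulong;       /* stand-in for QEMU's target_ulong */

static target_ulong toy_ctr;         /* placeholder for a delegated counter */

static int rmw_toy_ctr(target_ulong *val, target_ulong new_val,
                       target_ulong wr_mask)
{
    if (wr_mask != 0 && wr_mask != (target_ulong)-1) {
        return -EINVAL;              /* only whole-register accesses allowed */
    }

    if (!wr_mask && val) {
        *val = toy_ctr;              /* read path */
    } else if (wr_mask) {
        toy_ctr = new_val;           /* write path */
    } else {
        return -EINVAL;              /* neither a read nor a write */
    }
    return 0;
}

int main(void)
{
    target_ulong v = 0;

    assert(rmw_toy_ctr(NULL, 0x1234, (target_ulong)-1) == 0);   /* write */
    assert(rmw_toy_ctr(&v, 0, 0) == 0 && v == 0x1234);          /* read back */
    assert(rmw_toy_ctr(&v, 0, 0xff) == -EINVAL);                /* partial mask */
    return 0;
}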
Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
Co-developed-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/csr.c | 302 ++++++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 290 insertions(+), 12 deletions(-)
diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index da27ba1b7580..2369a746a285 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -366,6 +366,21 @@ static RISCVException aia_smode32(CPURISCVState *env, int csrno)
return smode32(env, csrno);
}
+static RISCVException scountinhibit_pred(CPURISCVState *env, int csrno)
+{
+ RISCVCPU *cpu = env_archcpu(env);
+
+ if (!cpu->cfg.ext_ssccfg || !cpu->cfg.ext_smcdeleg) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ if (env->virt_enabled) {
+ return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+ }
+
+ return smode(env, csrno);
+}
+
static bool csrind_extensions_present(CPURISCVState *env)
{
return riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind;
@@ -1190,10 +1205,9 @@ done:
return result;
}
-static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
- target_ulong val)
+static RISCVException riscv_pmu_write_ctr(CPURISCVState *env, target_ulong val,
+ uint32_t ctr_idx)
{
- int ctr_idx = csrno - CSR_MCYCLE;
PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
uint64_t mhpmctr_val = val;
@@ -1218,10 +1232,9 @@ static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
-static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
- target_ulong val)
+static RISCVException riscv_pmu_write_ctrh(CPURISCVState *env, target_ulong val,
+ uint32_t ctr_idx)
{
- int ctr_idx = csrno - CSR_MCYCLEH;
PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
uint64_t mhpmctr_val = counter->mhpmcounter_val;
uint64_t mhpmctrh_val = val;
@@ -1243,6 +1256,20 @@ static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
+{
+ int ctr_idx = csrno - CSR_MCYCLE;
+
+ return riscv_pmu_write_ctr(env, val, ctr_idx);
+}
+
+static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
+{
+ int ctr_idx = csrno - CSR_MCYCLEH;
+
+ return riscv_pmu_write_ctrh(env, val, ctr_idx);
+}
+
RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
bool upper_half, uint32_t ctr_idx)
{
@@ -1308,6 +1335,167 @@ static RISCVException read_hpmcounterh(CPURISCVState *env, int csrno,
return riscv_pmu_read_ctr(env, val, true, ctr_index);
}
+static int rmw_cd_mhpmcounter(CPURISCVState *env, int ctr_idx,
+ target_ulong *val, target_ulong new_val,
+ target_ulong wr_mask)
+{
+ if (wr_mask != 0 && wr_mask != -1) {
+ return -EINVAL;
+ }
+
+ if (!wr_mask && val) {
+ riscv_pmu_read_ctr(env, val, false, ctr_idx);
+ } else if (wr_mask) {
+ riscv_pmu_write_ctr(env, new_val, ctr_idx);
+ } else {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int rmw_cd_mhpmcounterh(CPURISCVState *env, int ctr_idx,
+ target_ulong *val, target_ulong new_val,
+ target_ulong wr_mask)
+{
+ if (wr_mask != 0 && wr_mask != -1) {
+ return -EINVAL;
+ }
+
+ if (!wr_mask && val) {
+ riscv_pmu_read_ctr(env, val, true, ctr_idx);
+ } else if (wr_mask) {
+ riscv_pmu_write_ctrh(env, new_val, ctr_idx);
+ } else {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int rmw_cd_mhpmevent(CPURISCVState *env, int evt_index,
+ target_ulong *val, target_ulong new_val,
+ target_ulong wr_mask)
+{
+ uint64_t mhpmevt_val = new_val;
+
+ if (wr_mask != 0 && wr_mask != -1) {
+ return -EINVAL;
+ }
+
+ if (!wr_mask && val) {
+ *val = env->mhpmevent_val[evt_index];
+ if (riscv_cpu_cfg(env)->ext_sscofpmf) {
+ *val &= ~MHPMEVENT_BIT_MINH;
+ }
+ } else if (wr_mask) {
+ wr_mask &= ~MHPMEVENT_BIT_MINH;
+ mhpmevt_val = (new_val & wr_mask) |
+ (env->mhpmevent_val[evt_index] & ~wr_mask);
+ if (riscv_cpu_mxl(env) == MXL_RV32) {
+ mhpmevt_val = mhpmevt_val |
+ ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
+ }
+ env->mhpmevent_val[evt_index] = mhpmevt_val;
+ riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+ } else {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int rmw_cd_mhpmeventh(CPURISCVState *env, int evt_index,
+ target_ulong *val, target_ulong new_val,
+ target_ulong wr_mask)
+{
+ uint64_t mhpmevth_val;
+ uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
+
+ if (wr_mask != 0 && wr_mask != -1) {
+ return -EINVAL;
+ }
+
+ if (!wr_mask && val) {
+ *val = env->mhpmeventh_val[evt_index];
+ if (riscv_cpu_cfg(env)->ext_sscofpmf) {
+ *val &= ~MHPMEVENTH_BIT_MINH;
+ }
+ } else if (wr_mask) {
+ wr_mask &= ~MHPMEVENTH_BIT_MINH;
+ env->mhpmeventh_val[evt_index] =
+ (new_val & wr_mask) | (env->mhpmeventh_val[evt_index] & ~wr_mask);
+ mhpmevth_val = env->mhpmeventh_val[evt_index];
+ mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
+ riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+ } else {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int rmw_cd_ctr_cfg(CPURISCVState *env, int cfg_index, target_ulong *val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+ switch (cfg_index) {
+ case 0: /* CYCLECFG */
+ if (wr_mask) {
+ wr_mask &= ~MCYCLECFG_BIT_MINH;
+ env->mcyclecfg = (new_val & wr_mask) | (env->mcyclecfg & ~wr_mask);
+ } else {
+ *val = env->mcyclecfg &= ~MHPMEVENTH_BIT_MINH;
+ }
+ break;
+ case 2: /* INSTRETCFG */
+ if (wr_mask) {
+ wr_mask &= ~MINSTRETCFG_BIT_MINH;
+ env->minstretcfg = (new_val & wr_mask) |
+ (env->minstretcfg & ~wr_mask);
+ } else {
+ *val = env->minstretcfg &= ~MHPMEVENTH_BIT_MINH;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int rmw_cd_ctr_cfgh(CPURISCVState *env, int cfg_index, target_ulong *val,
+ target_ulong new_val, target_ulong wr_mask)
+{
+
+ if (riscv_cpu_mxl(env) != MXL_RV32) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ switch (cfg_index) {
+ case 0: /* CYCLECFGH */
+ if (wr_mask) {
+ wr_mask &= ~MCYCLECFGH_BIT_MINH;
+ env->mcyclecfgh = (new_val & wr_mask) |
+ (env->mcyclecfgh & ~wr_mask);
+ } else {
+ *val = env->mcyclecfgh;
+ }
+ break;
+ case 2: /* INSTRETCFGH */
+ if (wr_mask) {
+ wr_mask &= ~MINSTRETCFGH_BIT_MINH;
+ env->minstretcfgh = (new_val & wr_mask) |
+ (env->minstretcfgh & ~wr_mask);
+ } else {
+ *val = env->minstretcfgh;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+
static RISCVException read_scountovf(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -1317,6 +1505,14 @@ static RISCVException read_scountovf(CPURISCVState *env, int csrno,
target_ulong *mhpm_evt_val;
uint64_t of_bit_mask;
+ /* Virtualize scountovf for counter delegation */
+ if (riscv_cpu_cfg(env)->ext_sscofpmf &&
+ riscv_cpu_cfg(env)->ext_ssccfg &&
+ get_field(env->menvcfg, MENVCFG_CDE) &&
+ env->virt_enabled) {
+ return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+ }
+
if (riscv_cpu_mxl(env) == MXL_RV32) {
mhpm_evt_val = env->mhpmeventh_val;
of_bit_mask = MHPMEVENTH_BIT_OF;
@@ -2254,11 +2450,70 @@ static int rmw_xireg_cd(CPURISCVState *env, int csrno,
target_ulong isel, target_ulong *val,
target_ulong new_val, target_ulong wr_mask)
{
- if (!riscv_cpu_cfg(env)->ext_smcdeleg) {
+ int ret = -EINVAL;
+ int ctr_index = isel - ISELECT_CD_FIRST;
+ int isel_hpm_start = ISELECT_CD_FIRST + 3;
+
+ if (!riscv_cpu_cfg(env)->ext_smcdeleg || !riscv_cpu_cfg(env)->ext_ssccfg) {
return RISCV_EXCP_ILLEGAL_INST;
}
- /* TODO: Implement the functionality later */
- return RISCV_EXCP_NONE;
+
+ /* The siselect value for counter index 1 is reserved */
+ if (ctr_index == 1) {
+ goto done;
+ }
+
+ /* sireg4 and sireg5 provide access to RV32-only CSRs */
+ if (((csrno == CSR_SIREG5) || (csrno == CSR_SIREG4)) &&
+ (riscv_cpu_mxl(env) != MXL_RV32)) {
+ return RISCV_EXCP_ILLEGAL_INST;
+ }
+
+ /* Check Sscofpmf dependency */
+ if (!riscv_cpu_cfg(env)->ext_sscofpmf && csrno == CSR_SIREG5 &&
+ (isel_hpm_start <= isel && isel <= ISELECT_CD_LAST)) {
+ goto done;
+ }
+
+ /* Check smcntrpmf dependency */
+ if (!riscv_cpu_cfg(env)->ext_smcntrpmf &&
+ (csrno == CSR_SIREG2 || csrno == CSR_SIREG5) &&
+ (ISELECT_CD_FIRST <= isel && isel < isel_hpm_start)) {
+ goto done;
+ }
+
+ if (!get_field(env->mcounteren, BIT(ctr_index)) ||
+ !get_field(env->menvcfg, MENVCFG_CDE)) {
+ goto done;
+ }
+
+ switch (csrno) {
+ case CSR_SIREG:
+ ret = rmw_cd_mhpmcounter(env, ctr_index, val, new_val, wr_mask);
+ break;
+ case CSR_SIREG4:
+ ret = rmw_cd_mhpmcounterh(env, ctr_index, val, new_val, wr_mask);
+ break;
+ case CSR_SIREG2:
+ if (ctr_index <= 2) {
+ ret = rmw_cd_ctr_cfg(env, ctr_index, val, new_val, wr_mask);
+ } else {
+ ret = rmw_cd_mhpmevent(env, ctr_index, val, new_val, wr_mask);
+ }
+ break;
+ case CSR_SIREG5:
+ if (ctr_index <= 2) {
+ ret = rmw_cd_ctr_cfgh(env, ctr_index, val, new_val, wr_mask);
+ } else {
+ ret = rmw_cd_mhpmeventh(env, ctr_index, val, new_val, wr_mask);
+ }
+ break;
+ default:
+ goto done;
+ }
+
+done:
+ return ret;
}
/*
@@ -2540,6 +2795,21 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
return RISCV_EXCP_NONE;
}
+static RISCVException read_scountinhibit(CPURISCVState *env, int csrno,
+ target_ulong *val)
+{
+ /* S-mode can only access the bits delegated by M-mode */
+ *val = env->mcountinhibit & env->mcounteren;
+ return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_scountinhibit(CPURISCVState *env, int csrno,
+ target_ulong val)
+{
+ write_mcountinhibit(env, csrno, val & env->mcounteren);
+ return RISCV_EXCP_NONE;
+}
+
static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
target_ulong *val)
{
@@ -2642,12 +2912,14 @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
target_ulong val)
{
const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
- uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE | MENVCFG_CBZE;
+ uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE |
+ MENVCFG_CBZE | MENVCFG_CDE;
if (riscv_cpu_mxl(env) == MXL_RV64) {
mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
(cfg->ext_sstc ? MENVCFG_STCE : 0) |
- (cfg->ext_svadu ? MENVCFG_ADUE : 0);
+ (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
+ (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
}
env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
@@ -2667,7 +2939,8 @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
(cfg->ext_sstc ? MENVCFG_STCE : 0) |
- (cfg->ext_svadu ? MENVCFG_ADUE : 0);
+ (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
+ (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
uint64_t valh = (uint64_t)val << 32;
env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
@@ -5417,6 +5690,11 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
write_sstateen_1_3,
.min_priv_ver = PRIV_VERSION_1_12_0 },
+ /* Supervisor Counter Delegation */
+ [CSR_SCOUNTINHIBIT] = {"scountinhibit", scountinhibit_pred,
+ read_scountinhibit, write_scountinhibit,
+ .min_priv_ver = PRIV_VERSION_1_12_0 },
+
/* Supervisor Trap Setup */
[CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus,
NULL, read_sstatus_i128 },
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread* Re: [PATCH v2 07/13] target/riscv: Add counter delegation/configuration support
2024-07-23 23:30 ` [PATCH v2 07/13] target/riscv: Add counter delegation/configuration support Atish Patra
@ 2024-08-06 1:21 ` Alistair Francis
2024-08-07 7:48 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 1:21 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> From: Kaiwen Xue <kaiwenx@rivosinc.com>
>
> Smcdeleg/Ssccfg adds support for counter delegation via
> S*indcsr and Ssccfg.
>
> It also adds a new shadow CSR, scountinhibit, and a menvcfg enable bit (CDE)
> to enable this extension and scountovf virtualization.
>
> Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> Co-developed-by: Atish Patra <atishp@rivosinc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/csr.c | 302 ++++++++++++++++++++++++++++++++++++++++++++++++++---
> 1 file changed, 290 insertions(+), 12 deletions(-)
>
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index da27ba1b7580..2369a746a285 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -366,6 +366,21 @@ static RISCVException aia_smode32(CPURISCVState *env, int csrno)
> return smode32(env, csrno);
> }
>
> +static RISCVException scountinhibit_pred(CPURISCVState *env, int csrno)
> +{
> + RISCVCPU *cpu = env_archcpu(env);
> +
> + if (!cpu->cfg.ext_ssccfg || !cpu->cfg.ext_smcdeleg) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + if (env->virt_enabled) {
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> + }
> +
> + return smode(env, csrno);
> +}
> +
> static bool csrind_extensions_present(CPURISCVState *env)
> {
> return riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind;
> @@ -1190,10 +1205,9 @@ done:
> return result;
> }
>
> -static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> - target_ulong val)
> +static RISCVException riscv_pmu_write_ctr(CPURISCVState *env, target_ulong val,
> + uint32_t ctr_idx)
> {
> - int ctr_idx = csrno - CSR_MCYCLE;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> uint64_t mhpmctr_val = val;
>
> @@ -1218,10 +1232,9 @@ static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> -static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
> - target_ulong val)
> +static RISCVException riscv_pmu_write_ctrh(CPURISCVState *env, target_ulong val,
> + uint32_t ctr_idx)
> {
> - int ctr_idx = csrno - CSR_MCYCLEH;
> PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> uint64_t mhpmctr_val = counter->mhpmcounter_val;
> uint64_t mhpmctrh_val = val;
> @@ -1243,6 +1256,20 @@ static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
> +{
> + int ctr_idx = csrno - CSR_MCYCLE;
> +
> + return riscv_pmu_write_ctr(env, val, ctr_idx);
> +}
> +
> +static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> + int ctr_idx = csrno - CSR_MCYCLEH;
> +
> + return riscv_pmu_write_ctrh(env, val, ctr_idx);
> +}
> +
> RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
> bool upper_half, uint32_t ctr_idx)
> {
> @@ -1308,6 +1335,167 @@ static RISCVException read_hpmcounterh(CPURISCVState *env, int csrno,
> return riscv_pmu_read_ctr(env, val, true, ctr_index);
> }
>
> +static int rmw_cd_mhpmcounter(CPURISCVState *env, int ctr_idx,
> + target_ulong *val, target_ulong new_val,
> + target_ulong wr_mask)
> +{
> + if (wr_mask != 0 && wr_mask != -1) {
> + return -EINVAL;
> + }
> +
> + if (!wr_mask && val) {
> + riscv_pmu_read_ctr(env, val, false, ctr_idx);
> + } else if (wr_mask) {
> + riscv_pmu_write_ctr(env, new_val, ctr_idx);
> + } else {
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int rmw_cd_mhpmcounterh(CPURISCVState *env, int ctr_idx,
> + target_ulong *val, target_ulong new_val,
> + target_ulong wr_mask)
> +{
> + if (wr_mask != 0 && wr_mask != -1) {
> + return -EINVAL;
> + }
> +
> + if (!wr_mask && val) {
> + riscv_pmu_read_ctr(env, val, true, ctr_idx);
riscv_pmu_write_ctrh?
Alistair
> + } else if (wr_mask) {
> + riscv_pmu_write_ctrh(env, new_val, ctr_idx);
> + } else {
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int rmw_cd_mhpmevent(CPURISCVState *env, int evt_index,
> + target_ulong *val, target_ulong new_val,
> + target_ulong wr_mask)
> +{
> + uint64_t mhpmevt_val = new_val;
> +
> + if (wr_mask != 0 && wr_mask != -1) {
> + return -EINVAL;
> + }
> +
> + if (!wr_mask && val) {
> + *val = env->mhpmevent_val[evt_index];
> + if (riscv_cpu_cfg(env)->ext_sscofpmf) {
> + *val &= ~MHPMEVENT_BIT_MINH;
> + }
> + } else if (wr_mask) {
> + wr_mask &= ~MHPMEVENT_BIT_MINH;
> + mhpmevt_val = (new_val & wr_mask) |
> + (env->mhpmevent_val[evt_index] & ~wr_mask);
> + if (riscv_cpu_mxl(env) == MXL_RV32) {
> + mhpmevt_val = mhpmevt_val |
> + ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> + }
> + env->mhpmevent_val[evt_index] = mhpmevt_val;
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> + } else {
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int rmw_cd_mhpmeventh(CPURISCVState *env, int evt_index,
> + target_ulong *val, target_ulong new_val,
> + target_ulong wr_mask)
> +{
> + uint64_t mhpmevth_val;
> + uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> + if (wr_mask != 0 && wr_mask != -1) {
> + return -EINVAL;
> + }
> +
> + if (!wr_mask && val) {
> + *val = env->mhpmeventh_val[evt_index];
> + if (riscv_cpu_cfg(env)->ext_sscofpmf) {
> + *val &= ~MHPMEVENTH_BIT_MINH;
> + }
> + } else if (wr_mask) {
> + wr_mask &= ~MHPMEVENTH_BIT_MINH;
> + env->mhpmeventh_val[evt_index] =
> + (new_val & wr_mask) | (env->mhpmeventh_val[evt_index] & ~wr_mask);
> + mhpmevth_val = env->mhpmeventh_val[evt_index];
> + mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> + } else {
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int rmw_cd_ctr_cfg(CPURISCVState *env, int cfg_index, target_ulong *val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> + switch (cfg_index) {
> + case 0: /* CYCLECFG */
> + if (wr_mask) {
> + wr_mask &= ~MCYCLECFG_BIT_MINH;
> + env->mcyclecfg = (new_val & wr_mask) | (env->mcyclecfg & ~wr_mask);
> + } else {
> + *val = env->mcyclecfg &= ~MHPMEVENTH_BIT_MINH;
> + }
> + break;
> + case 2: /* INSTRETCFG */
> + if (wr_mask) {
> + wr_mask &= ~MINSTRETCFG_BIT_MINH;
> + env->minstretcfg = (new_val & wr_mask) |
> + (env->minstretcfg & ~wr_mask);
> + } else {
> + *val = env->minstretcfg &= ~MHPMEVENTH_BIT_MINH;
> + }
> + break;
> + default:
> + return -EINVAL;
> + }
> + return 0;
> +}
> +
> +static int rmw_cd_ctr_cfgh(CPURISCVState *env, int cfg_index, target_ulong *val,
> + target_ulong new_val, target_ulong wr_mask)
> +{
> +
> + if (riscv_cpu_mxl(env) != MXL_RV32) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + switch (cfg_index) {
> + case 0: /* CYCLECFGH */
> + if (wr_mask) {
> + wr_mask &= ~MCYCLECFGH_BIT_MINH;
> + env->mcyclecfgh = (new_val & wr_mask) |
> + (env->mcyclecfgh & ~wr_mask);
> + } else {
> + *val = env->mcyclecfgh;
> + }
> + break;
> + case 2: /* INSTRETCFGH */
> + if (wr_mask) {
> + wr_mask &= ~MINSTRETCFGH_BIT_MINH;
> + env->minstretcfgh = (new_val & wr_mask) |
> + (env->minstretcfgh & ~wr_mask);
> + } else {
> + *val = env->minstretcfgh;
> + }
> + break;
> + default:
> + return -EINVAL;
> + }
> + return 0;
> +}
> +
> +
> static RISCVException read_scountovf(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -1317,6 +1505,14 @@ static RISCVException read_scountovf(CPURISCVState *env, int csrno,
> target_ulong *mhpm_evt_val;
> uint64_t of_bit_mask;
>
> + /* Virtualize scountovf for counter delegation */
> + if (riscv_cpu_cfg(env)->ext_sscofpmf &&
> + riscv_cpu_cfg(env)->ext_ssccfg &&
> + get_field(env->menvcfg, MENVCFG_CDE) &&
> + env->virt_enabled) {
> + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> + }
> +
> if (riscv_cpu_mxl(env) == MXL_RV32) {
> mhpm_evt_val = env->mhpmeventh_val;
> of_bit_mask = MHPMEVENTH_BIT_OF;
> @@ -2254,11 +2450,70 @@ static int rmw_xireg_cd(CPURISCVState *env, int csrno,
> target_ulong isel, target_ulong *val,
> target_ulong new_val, target_ulong wr_mask)
> {
> - if (!riscv_cpu_cfg(env)->ext_smcdeleg) {
> + int ret = -EINVAL;
> + int ctr_index = isel - ISELECT_CD_FIRST;
> + int isel_hpm_start = ISELECT_CD_FIRST + 3;
> +
> + if (!riscv_cpu_cfg(env)->ext_smcdeleg || !riscv_cpu_cfg(env)->ext_ssccfg) {
> return RISCV_EXCP_ILLEGAL_INST;
> }
> - /* TODO: Implement the functionality later */
> - return RISCV_EXCP_NONE;
> +
> + /* Invalid siselect value for reserved */
> + if (ctr_index == 1) {
> + goto done;
> + }
> +
> + /* sireg4 and sireg5 provides access RV32 only CSRs */
> + if (((csrno == CSR_SIREG5) || (csrno == CSR_SIREG4)) &&
> + (riscv_cpu_mxl(env) != MXL_RV32)) {
> + return RISCV_EXCP_ILLEGAL_INST;
> + }
> +
> + /* Check Sscofpmf dependancy */
> + if (!riscv_cpu_cfg(env)->ext_sscofpmf && csrno == CSR_SIREG5 &&
> + (isel_hpm_start <= isel && isel <= ISELECT_CD_LAST)) {
> + goto done;
> + }
> +
> + /* Check smcntrpmf dependancy */
> + if (!riscv_cpu_cfg(env)->ext_smcntrpmf &&
> + (csrno == CSR_SIREG2 || csrno == CSR_SIREG5) &&
> + (ISELECT_CD_FIRST <= isel && isel < isel_hpm_start)) {
> + goto done;
> + }
> +
> + if (!get_field(env->mcounteren, BIT(ctr_index)) ||
> + !get_field(env->menvcfg, MENVCFG_CDE)) {
> + goto done;
> + }
> +
> + switch (csrno) {
> + case CSR_SIREG:
> + ret = rmw_cd_mhpmcounter(env, ctr_index, val, new_val, wr_mask);
> + break;
> + case CSR_SIREG4:
> + ret = rmw_cd_mhpmcounterh(env, ctr_index, val, new_val, wr_mask);
> + break;
> + case CSR_SIREG2:
> + if (ctr_index <= 2) {
> + ret = rmw_cd_ctr_cfg(env, ctr_index, val, new_val, wr_mask);
> + } else {
> + ret = rmw_cd_mhpmevent(env, ctr_index, val, new_val, wr_mask);
> + }
> + break;
> + case CSR_SIREG5:
> + if (ctr_index <= 2) {
> + ret = rmw_cd_ctr_cfgh(env, ctr_index, val, new_val, wr_mask);
> + } else {
> + ret = rmw_cd_mhpmeventh(env, ctr_index, val, new_val, wr_mask);
> + }
> + break;
> + default:
> + goto done;
> + }
> +
> +done:
> + return ret;
> }
>
> /*
> @@ -2540,6 +2795,21 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
> return RISCV_EXCP_NONE;
> }
>
> +static RISCVException read_scountinhibit(CPURISCVState *env, int csrno,
> + target_ulong *val)
> +{
> + /* S-mode can only access the bits delegated by M-mode */
> + *val = env->mcountinhibit & env->mcounteren;
> + return RISCV_EXCP_NONE;
> +}
> +
> +static RISCVException write_scountinhibit(CPURISCVState *env, int csrno,
> + target_ulong val)
> +{
> + write_mcountinhibit(env, csrno, val & env->mcounteren);
> + return RISCV_EXCP_NONE;
> +}
> +
> static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
> target_ulong *val)
> {
> @@ -2642,12 +2912,14 @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
> target_ulong val)
> {
> const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
> - uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE | MENVCFG_CBZE;
> + uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE |
> + MENVCFG_CBZE | MENVCFG_CDE;
>
> if (riscv_cpu_mxl(env) == MXL_RV64) {
> mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
> (cfg->ext_sstc ? MENVCFG_STCE : 0) |
> - (cfg->ext_svadu ? MENVCFG_ADUE : 0);
> + (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
> + (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
> }
> env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
>
> @@ -2667,7 +2939,8 @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
> const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
> uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
> (cfg->ext_sstc ? MENVCFG_STCE : 0) |
> - (cfg->ext_svadu ? MENVCFG_ADUE : 0);
> + (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
> + (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
> uint64_t valh = (uint64_t)val << 32;
>
> env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
> @@ -5417,6 +5690,11 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> write_sstateen_1_3,
> .min_priv_ver = PRIV_VERSION_1_12_0 },
>
> + /* Supervisor Counter Delegation */
> + [CSR_SCOUNTINHIBIT] = {"scountinhibit", scountinhibit_pred,
> + read_scountinhibit, write_scountinhibit,
> + .min_priv_ver = PRIV_VERSION_1_12_0 },
> +
> /* Supervisor Trap Setup */
> [CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus,
> NULL, read_sstatus_i128 },
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 07/13] target/riscv: Add counter delegation/configuration support
2024-08-06 1:21 ` Alistair Francis
@ 2024-08-07 7:48 ` Atish Kumar Patra
0 siblings, 0 replies; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-07 7:48 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Mon, Aug 5, 2024 at 6:21 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > From: Kaiwen Xue <kaiwenx@rivosinc.com>
> >
> > The Smcdeleg/Ssccfg adds the support for counter delegation via
> > S*indcsr and Ssccfg.
> >
> > It also adds a new shadow CSR scountinhibit and menvcfg enable bit (CDE)
> > to enable this extension and scountovf virtualization.
> >
> > Signed-off-by: Kaiwen Xue <kaiwenx@rivosinc.com>
> > Co-developed-by: Atish Patra <atishp@rivosinc.com>
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> > target/riscv/csr.c | 302 ++++++++++++++++++++++++++++++++++++++++++++++++++---
> > 1 file changed, 290 insertions(+), 12 deletions(-)
> >
> > diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> > index da27ba1b7580..2369a746a285 100644
> > --- a/target/riscv/csr.c
> > +++ b/target/riscv/csr.c
> > @@ -366,6 +366,21 @@ static RISCVException aia_smode32(CPURISCVState *env, int csrno)
> > return smode32(env, csrno);
> > }
> >
> > +static RISCVException scountinhibit_pred(CPURISCVState *env, int csrno)
> > +{
> > + RISCVCPU *cpu = env_archcpu(env);
> > +
> > + if (!cpu->cfg.ext_ssccfg || !cpu->cfg.ext_smcdeleg) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + if (env->virt_enabled) {
> > + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > + }
> > +
> > + return smode(env, csrno);
> > +}
> > +
> > static bool csrind_extensions_present(CPURISCVState *env)
> > {
> > return riscv_cpu_cfg(env)->ext_smcsrind || riscv_cpu_cfg(env)->ext_sscsrind;
> > @@ -1190,10 +1205,9 @@ done:
> > return result;
> > }
> >
> > -static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> > - target_ulong val)
> > +static RISCVException riscv_pmu_write_ctr(CPURISCVState *env, target_ulong val,
> > + uint32_t ctr_idx)
> > {
> > - int ctr_idx = csrno - CSR_MCYCLE;
> > PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > uint64_t mhpmctr_val = val;
> >
> > @@ -1218,10 +1232,9 @@ static RISCVException write_mhpmcounter(CPURISCVState *env, int csrno,
> > return RISCV_EXCP_NONE;
> > }
> >
> > -static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
> > - target_ulong val)
> > +static RISCVException riscv_pmu_write_ctrh(CPURISCVState *env, target_ulong val,
> > + uint32_t ctr_idx)
> > {
> > - int ctr_idx = csrno - CSR_MCYCLEH;
> > PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > uint64_t mhpmctr_val = counter->mhpmcounter_val;
> > uint64_t mhpmctrh_val = val;
> > @@ -1243,6 +1256,20 @@ static RISCVException write_mhpmcounterh(CPURISCVState *env, int csrno,
> > return RISCV_EXCP_NONE;
> > }
> >
> > +static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
> > +{
> > + int ctr_idx = csrno - CSR_MCYCLE;
> > +
> > + return riscv_pmu_write_ctr(env, val, ctr_idx);
> > +}
> > +
> > +static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
> > +{
> > + int ctr_idx = csrno - CSR_MCYCLEH;
> > +
> > + return riscv_pmu_write_ctrh(env, val, ctr_idx);
> > +}
> > +
> > RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
> > bool upper_half, uint32_t ctr_idx)
> > {
> > @@ -1308,6 +1335,167 @@ static RISCVException read_hpmcounterh(CPURISCVState *env, int csrno,
> > return riscv_pmu_read_ctr(env, val, true, ctr_index);
> > }
> >
> > +static int rmw_cd_mhpmcounter(CPURISCVState *env, int ctr_idx,
> > + target_ulong *val, target_ulong new_val,
> > + target_ulong wr_mask)
> > +{
> > + if (wr_mask != 0 && wr_mask != -1) {
> > + return -EINVAL;
> > + }
> > +
> > + if (!wr_mask && val) {
> > + riscv_pmu_read_ctr(env, val, false, ctr_idx);
> > + } else if (wr_mask) {
> > + riscv_pmu_write_ctr(env, new_val, ctr_idx);
> > + } else {
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int rmw_cd_mhpmcounterh(CPURISCVState *env, int ctr_idx,
> > + target_ulong *val, target_ulong new_val,
> > + target_ulong wr_mask)
> > +{
> > + if (wr_mask != 0 && wr_mask != -1) {
> > + return -EINVAL;
> > + }
> > +
> > + if (!wr_mask && val) {
> > + riscv_pmu_read_ctr(env, val, true, ctr_idx);
>
> riscv_pmu_write_ctrh?
>
My bad. Copy paste error. Thanks for pointing that out.
> Alistair
>
> > + } else if (wr_mask) {
> > + riscv_pmu_write_ctrh(env, new_val, ctr_idx);
> > + } else {
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int rmw_cd_mhpmevent(CPURISCVState *env, int evt_index,
> > + target_ulong *val, target_ulong new_val,
> > + target_ulong wr_mask)
> > +{
> > + uint64_t mhpmevt_val = new_val;
> > +
> > + if (wr_mask != 0 && wr_mask != -1) {
> > + return -EINVAL;
> > + }
> > +
> > + if (!wr_mask && val) {
> > + *val = env->mhpmevent_val[evt_index];
> > + if (riscv_cpu_cfg(env)->ext_sscofpmf) {
> > + *val &= ~MHPMEVENT_BIT_MINH;
> > + }
> > + } else if (wr_mask) {
> > + wr_mask &= ~MHPMEVENT_BIT_MINH;
> > + mhpmevt_val = (new_val & wr_mask) |
> > + (env->mhpmevent_val[evt_index] & ~wr_mask);
> > + if (riscv_cpu_mxl(env) == MXL_RV32) {
> > + mhpmevt_val = mhpmevt_val |
> > + ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> > + }
> > + env->mhpmevent_val[evt_index] = mhpmevt_val;
> > + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> > + } else {
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int rmw_cd_mhpmeventh(CPURISCVState *env, int evt_index,
> > + target_ulong *val, target_ulong new_val,
> > + target_ulong wr_mask)
> > +{
> > + uint64_t mhpmevth_val;
> > + uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> > +
> > + if (wr_mask != 0 && wr_mask != -1) {
> > + return -EINVAL;
> > + }
> > +
> > + if (!wr_mask && val) {
> > + *val = env->mhpmeventh_val[evt_index];
> > + if (riscv_cpu_cfg(env)->ext_sscofpmf) {
> > + *val &= ~MHPMEVENTH_BIT_MINH;
> > + }
> > + } else if (wr_mask) {
> > + wr_mask &= ~MHPMEVENTH_BIT_MINH;
> > + env->mhpmeventh_val[evt_index] =
> > + (new_val & wr_mask) | (env->mhpmeventh_val[evt_index] & ~wr_mask);
> > + mhpmevth_val = env->mhpmeventh_val[evt_index];
> > + mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> > + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> > + } else {
> > + return -EINVAL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int rmw_cd_ctr_cfg(CPURISCVState *env, int cfg_index, target_ulong *val,
> > + target_ulong new_val, target_ulong wr_mask)
> > +{
> > + switch (cfg_index) {
> > + case 0: /* CYCLECFG */
> > + if (wr_mask) {
> > + wr_mask &= ~MCYCLECFG_BIT_MINH;
> > + env->mcyclecfg = (new_val & wr_mask) | (env->mcyclecfg & ~wr_mask);
> > + } else {
> > + *val = env->mcyclecfg &= ~MHPMEVENTH_BIT_MINH;
> > + }
> > + break;
> > + case 2: /* INSTRETCFG */
> > + if (wr_mask) {
> > + wr_mask &= ~MINSTRETCFG_BIT_MINH;
> > + env->minstretcfg = (new_val & wr_mask) |
> > + (env->minstretcfg & ~wr_mask);
> > + } else {
> > + *val = env->minstretcfg &= ~MHPMEVENTH_BIT_MINH;
> > + }
> > + break;
> > + default:
> > + return -EINVAL;
> > + }
> > + return 0;
> > +}
> > +
> > +static int rmw_cd_ctr_cfgh(CPURISCVState *env, int cfg_index, target_ulong *val,
> > + target_ulong new_val, target_ulong wr_mask)
> > +{
> > +
> > + if (riscv_cpu_mxl(env) != MXL_RV32) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + switch (cfg_index) {
> > + case 0: /* CYCLECFGH */
> > + if (wr_mask) {
> > + wr_mask &= ~MCYCLECFGH_BIT_MINH;
> > + env->mcyclecfgh = (new_val & wr_mask) |
> > + (env->mcyclecfgh & ~wr_mask);
> > + } else {
> > + *val = env->mcyclecfgh;
> > + }
> > + break;
> > + case 2: /* INSTRETCFGH */
> > + if (wr_mask) {
> > + wr_mask &= ~MINSTRETCFGH_BIT_MINH;
> > + env->minstretcfgh = (new_val & wr_mask) |
> > + (env->minstretcfgh & ~wr_mask);
> > + } else {
> > + *val = env->minstretcfgh;
> > + }
> > + break;
> > + default:
> > + return -EINVAL;
> > + }
> > + return 0;
> > +}
> > +
> > +
> > static RISCVException read_scountovf(CPURISCVState *env, int csrno,
> > target_ulong *val)
> > {
> > @@ -1317,6 +1505,14 @@ static RISCVException read_scountovf(CPURISCVState *env, int csrno,
> > target_ulong *mhpm_evt_val;
> > uint64_t of_bit_mask;
> >
> > + /* Virtualize scountovf for counter delegation */
> > + if (riscv_cpu_cfg(env)->ext_sscofpmf &&
> > + riscv_cpu_cfg(env)->ext_ssccfg &&
> > + get_field(env->menvcfg, MENVCFG_CDE) &&
> > + env->virt_enabled) {
> > + return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> > + }
> > +
> > if (riscv_cpu_mxl(env) == MXL_RV32) {
> > mhpm_evt_val = env->mhpmeventh_val;
> > of_bit_mask = MHPMEVENTH_BIT_OF;
> > @@ -2254,11 +2450,70 @@ static int rmw_xireg_cd(CPURISCVState *env, int csrno,
> > target_ulong isel, target_ulong *val,
> > target_ulong new_val, target_ulong wr_mask)
> > {
> > - if (!riscv_cpu_cfg(env)->ext_smcdeleg) {
> > + int ret = -EINVAL;
> > + int ctr_index = isel - ISELECT_CD_FIRST;
> > + int isel_hpm_start = ISELECT_CD_FIRST + 3;
> > +
> > + if (!riscv_cpu_cfg(env)->ext_smcdeleg || !riscv_cpu_cfg(env)->ext_ssccfg) {
> > return RISCV_EXCP_ILLEGAL_INST;
> > }
> > - /* TODO: Implement the functionality later */
> > - return RISCV_EXCP_NONE;
> > +
> > + /* Invalid siselect value for reserved */
> > + if (ctr_index == 1) {
> > + goto done;
> > + }
> > +
> > + /* sireg4 and sireg5 provides access RV32 only CSRs */
> > + if (((csrno == CSR_SIREG5) || (csrno == CSR_SIREG4)) &&
> > + (riscv_cpu_mxl(env) != MXL_RV32)) {
> > + return RISCV_EXCP_ILLEGAL_INST;
> > + }
> > +
> > + /* Check Sscofpmf dependancy */
> > + if (!riscv_cpu_cfg(env)->ext_sscofpmf && csrno == CSR_SIREG5 &&
> > + (isel_hpm_start <= isel && isel <= ISELECT_CD_LAST)) {
> > + goto done;
> > + }
> > +
> > + /* Check smcntrpmf dependancy */
> > + if (!riscv_cpu_cfg(env)->ext_smcntrpmf &&
> > + (csrno == CSR_SIREG2 || csrno == CSR_SIREG5) &&
> > + (ISELECT_CD_FIRST <= isel && isel < isel_hpm_start)) {
> > + goto done;
> > + }
> > +
> > + if (!get_field(env->mcounteren, BIT(ctr_index)) ||
> > + !get_field(env->menvcfg, MENVCFG_CDE)) {
> > + goto done;
> > + }
> > +
> > + switch (csrno) {
> > + case CSR_SIREG:
> > + ret = rmw_cd_mhpmcounter(env, ctr_index, val, new_val, wr_mask);
> > + break;
> > + case CSR_SIREG4:
> > + ret = rmw_cd_mhpmcounterh(env, ctr_index, val, new_val, wr_mask);
> > + break;
> > + case CSR_SIREG2:
> > + if (ctr_index <= 2) {
> > + ret = rmw_cd_ctr_cfg(env, ctr_index, val, new_val, wr_mask);
> > + } else {
> > + ret = rmw_cd_mhpmevent(env, ctr_index, val, new_val, wr_mask);
> > + }
> > + break;
> > + case CSR_SIREG5:
> > + if (ctr_index <= 2) {
> > + ret = rmw_cd_ctr_cfgh(env, ctr_index, val, new_val, wr_mask);
> > + } else {
> > + ret = rmw_cd_mhpmeventh(env, ctr_index, val, new_val, wr_mask);
> > + }
> > + break;
> > + default:
> > + goto done;
> > + }
> > +
> > +done:
> > + return ret;
> > }
> >
> > /*
> > @@ -2540,6 +2795,21 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
> > return RISCV_EXCP_NONE;
> > }
> >
> > +static RISCVException read_scountinhibit(CPURISCVState *env, int csrno,
> > + target_ulong *val)
> > +{
> > + /* S-mode can only access the bits delegated by M-mode */
> > + *val = env->mcountinhibit & env->mcounteren;
> > + return RISCV_EXCP_NONE;
> > +}
> > +
> > +static RISCVException write_scountinhibit(CPURISCVState *env, int csrno,
> > + target_ulong val)
> > +{
> > + write_mcountinhibit(env, csrno, val & env->mcounteren);
> > + return RISCV_EXCP_NONE;
> > +}
> > +
> > static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
> > target_ulong *val)
> > {
> > @@ -2642,12 +2912,14 @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
> > target_ulong val)
> > {
> > const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
> > - uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE | MENVCFG_CBZE;
> > + uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE |
> > + MENVCFG_CBZE | MENVCFG_CDE;
> >
> > if (riscv_cpu_mxl(env) == MXL_RV64) {
> > mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
> > (cfg->ext_sstc ? MENVCFG_STCE : 0) |
> > - (cfg->ext_svadu ? MENVCFG_ADUE : 0);
> > + (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
> > + (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
> > }
> > env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
> >
> > @@ -2667,7 +2939,8 @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
> > const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
> > uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
> > (cfg->ext_sstc ? MENVCFG_STCE : 0) |
> > - (cfg->ext_svadu ? MENVCFG_ADUE : 0);
> > + (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
> > + (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
> > uint64_t valh = (uint64_t)val << 32;
> >
> > env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
> > @@ -5417,6 +5690,11 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
> > write_sstateen_1_3,
> > .min_priv_ver = PRIV_VERSION_1_12_0 },
> >
> > + /* Supervisor Counter Delegation */
> > + [CSR_SCOUNTINHIBIT] = {"scountinhibit", scountinhibit_pred,
> > + read_scountinhibit, write_scountinhibit,
> > + .min_priv_ver = PRIV_VERSION_1_12_0 },
> > +
> > /* Supervisor Trap Setup */
> > [CSR_SSTATUS] = { "sstatus", smode, read_sstatus, write_sstatus,
> > NULL, read_sstatus_i128 },
> >
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 08/13] target/riscv: Add configuration for S[m|s]csrind, Smcdeleg/Ssccfg
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (6 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 07/13] target/riscv: Add counter delegation/configuration support Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 1:22 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 09/13] target/riscv: Invoke pmu init after feature enable Atish Patra
` (4 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
Add configuration options so that they can be enabled/disabled from
the QEMU command line.
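For reference, with these options in place the extensions could be
toggled roughly like this (illustrative only; assuming the rv64 CPU
model on the virt machine):
  qemu-system-riscv64 -M virt \
    -cpu rv64,smcsrind=on,sscsrind=on,smcdeleg=on,ssccfg=on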
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index ac2dce734d80..1731dc461376 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -1477,6 +1477,10 @@ const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
/* Defaults for standard extensions */
MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),
+ MULTI_EXT_CFG_BOOL("smcsrind", ext_smcsrind, false),
+ MULTI_EXT_CFG_BOOL("smcdeleg", ext_smcdeleg, false),
+ MULTI_EXT_CFG_BOOL("sscsrind", ext_sscsrind, false),
+ MULTI_EXT_CFG_BOOL("ssccfg", ext_ssccfg, false),
MULTI_EXT_CFG_BOOL("zifencei", ext_zifencei, true),
MULTI_EXT_CFG_BOOL("zicsr", ext_zicsr, true),
MULTI_EXT_CFG_BOOL("zihintntl", ext_zihintntl, true),
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH v2 08/13] target/riscv: Add configuration for S[m|s]csrind, Smcdeleg/Ssccfg
2024-07-23 23:30 ` [PATCH v2 08/13] target/riscv: Add configuration for S[m|s]csrind, Smcdeleg/Ssccfg Atish Patra
@ 2024-08-06 1:22 ` Alistair Francis
0 siblings, 0 replies; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 1:22 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:33 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> Add configuration options so that they can be enabled/disabled from
> the QEMU command line.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/cpu.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index ac2dce734d80..1731dc461376 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -1477,6 +1477,10 @@ const RISCVCPUMultiExtConfig riscv_cpu_extensions[] = {
> /* Defaults for standard extensions */
> MULTI_EXT_CFG_BOOL("sscofpmf", ext_sscofpmf, false),
> MULTI_EXT_CFG_BOOL("smcntrpmf", ext_smcntrpmf, false),
> + MULTI_EXT_CFG_BOOL("smcsrind", ext_smcsrind, false),
> + MULTI_EXT_CFG_BOOL("smcdeleg", ext_smcdeleg, false),
> + MULTI_EXT_CFG_BOOL("sscsrind", ext_sscsrind, false),
> + MULTI_EXT_CFG_BOOL("ssccfg", ext_ssccfg, false),
> MULTI_EXT_CFG_BOOL("zifencei", ext_zifencei, true),
> MULTI_EXT_CFG_BOOL("zicsr", ext_zicsr, true),
> MULTI_EXT_CFG_BOOL("zihintntl", ext_zihintntl, true),
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 09/13] target/riscv: Invoke pmu init after feature enable
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (7 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 08/13] target/riscv: Add configuration for S[m|s]csrind, Smcdeleg/Ssccfg Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 1:43 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 10/13] target/riscv: Enable sscofpmf for bare cpu by default Atish Patra
` (3 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
The dependent ISA features are enabled at the end of cpu_realize
in finalize_features. Thus, PMU init should only be invoked after
that. Move the init invocation to riscv_tcg_cpu_finalize_features.
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/tcg/tcg-cpu.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index b8814ab753bd..d78d5960cf30 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -898,6 +898,20 @@ void riscv_tcg_cpu_finalize_features(RISCVCPU *cpu, Error **errp)
error_propagate(errp, local_err);
return;
}
+#ifndef CONFIG_USER_ONLY
+ if (cpu->cfg.pmu_mask) {
+ riscv_pmu_init(cpu, &local_err);
+ if (local_err != NULL) {
+ error_propagate(errp, local_err);
+ return;
+ }
+
+ if (cpu->cfg.ext_sscofpmf) {
+ cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
+ riscv_pmu_timer_cb, cpu);
+ }
+ }
+#endif
}
void riscv_tcg_cpu_finalize_dynamic_decoder(RISCVCPU *cpu)
@@ -945,7 +959,6 @@ static bool riscv_tcg_cpu_realize(CPUState *cs, Error **errp)
#ifndef CONFIG_USER_ONLY
CPURISCVState *env = &cpu->env;
- Error *local_err = NULL;
tcg_cflags_set(CPU(cs), CF_PCREL);
@@ -953,19 +966,6 @@ static bool riscv_tcg_cpu_realize(CPUState *cs, Error **errp)
riscv_timer_init(cpu);
}
- if (cpu->cfg.pmu_mask) {
- riscv_pmu_init(cpu, &local_err);
- if (local_err != NULL) {
- error_propagate(errp, local_err);
- return false;
- }
-
- if (cpu->cfg.ext_sscofpmf) {
- cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
- riscv_pmu_timer_cb, cpu);
- }
- }
-
/* With H-Ext, VSSIP, VSTIP, VSEIP and SGEIP are hardwired to one. */
if (riscv_has_ext(env, RVH)) {
env->mideleg = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP | MIP_SGEIP;
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH v2 09/13] target/riscv: Invoke pmu init after feature enable
2024-07-23 23:30 ` [PATCH v2 09/13] target/riscv: Invoke pmu init after feature enable Atish Patra
@ 2024-08-06 1:43 ` Alistair Francis
0 siblings, 0 replies; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 1:43 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> The dependent ISA features are enabled at the end of cpu_realize
> in finalize_features. Thus, PMU init should only be invoked after
> that. Move the init invocation to riscv_tcg_cpu_finalize_features.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> target/riscv/tcg/tcg-cpu.c | 28 ++++++++++++++--------------
> 1 file changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
> index b8814ab753bd..d78d5960cf30 100644
> --- a/target/riscv/tcg/tcg-cpu.c
> +++ b/target/riscv/tcg/tcg-cpu.c
> @@ -898,6 +898,20 @@ void riscv_tcg_cpu_finalize_features(RISCVCPU *cpu, Error **errp)
> error_propagate(errp, local_err);
> return;
> }
> +#ifndef CONFIG_USER_ONLY
> + if (cpu->cfg.pmu_mask) {
> + riscv_pmu_init(cpu, &local_err);
> + if (local_err != NULL) {
> + error_propagate(errp, local_err);
> + return;
> + }
> +
> + if (cpu->cfg.ext_sscofpmf) {
> + cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> + riscv_pmu_timer_cb, cpu);
> + }
> + }
> +#endif
> }
>
> void riscv_tcg_cpu_finalize_dynamic_decoder(RISCVCPU *cpu)
> @@ -945,7 +959,6 @@ static bool riscv_tcg_cpu_realize(CPUState *cs, Error **errp)
>
> #ifndef CONFIG_USER_ONLY
> CPURISCVState *env = &cpu->env;
> - Error *local_err = NULL;
>
> tcg_cflags_set(CPU(cs), CF_PCREL);
>
> @@ -953,19 +966,6 @@ static bool riscv_tcg_cpu_realize(CPUState *cs, Error **errp)
> riscv_timer_init(cpu);
> }
>
> - if (cpu->cfg.pmu_mask) {
> - riscv_pmu_init(cpu, &local_err);
> - if (local_err != NULL) {
> - error_propagate(errp, local_err);
> - return false;
> - }
> -
> - if (cpu->cfg.ext_sscofpmf) {
> - cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> - riscv_pmu_timer_cb, cpu);
> - }
> - }
> -
> /* With H-Ext, VSSIP, VSTIP, VSEIP and SGEIP are hardwired to one. */
> if (riscv_has_ext(env, RVH)) {
> env->mideleg = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP | MIP_SGEIP;
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 10/13] target/riscv: Enable sscofpmf for bare cpu by default
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (8 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 09/13] target/riscv: Invoke pmu init after feature enable Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 1:51 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 11/13] target/riscv: Repurpose the implied rule strategy Atish Patra
` (2 subsequent siblings)
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
Sscofpmf has been supported on virt machine for a long time. It is
required to enable profiling on virt machines. Let's enable it
by default for ease of usage.
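Without this change, profiling requires opting in explicitly, along the
lines of (illustrative, assuming the rv64 CPU model):
  qemu-system-riscv64 -M virt -cpu rv64,sscofpmf=on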
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 1731dc461376..393d1d67120e 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -494,6 +494,7 @@ static void rv64_base_cpu_init(Object *obj)
env->priv_ver = PRIV_VERSION_LATEST;
#ifndef CONFIG_USER_ONLY
set_satp_mode_max_supported(RISCV_CPU(obj), VM_1_10_SV57);
+ cpu->cfg.ext_sscofpmf = true;
#endif
}
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH v2 10/13] target/riscv: Enable sscofpmf for bare cpu by default
2024-07-23 23:30 ` [PATCH v2 10/13] target/riscv: Enable sscofpmf for bare cpu by default Atish Patra
@ 2024-08-06 1:51 ` Alistair Francis
2024-08-06 8:27 ` Andrew Jones
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 1:51 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:33 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> Sscofpmf has been supported on virt machine for a long time. It is
> required to enable profiling on virt machines. Let's enable it
> by default for ease of usage.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1731dc461376..393d1d67120e 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -494,6 +494,7 @@ static void rv64_base_cpu_init(Object *obj)
> env->priv_ver = PRIV_VERSION_LATEST;
> #ifndef CONFIG_USER_ONLY
> set_satp_mode_max_supported(RISCV_CPU(obj), VM_1_10_SV57);
> + cpu->cfg.ext_sscofpmf = true;
Unfortunately we don't want to do this, the base CPU should be bare
bones and then users can enable extensions.
Alistair
> #endif
> }
>
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 10/13] target/riscv: Enable sscofpmf for bare cpu by default
2024-08-06 1:51 ` Alistair Francis
@ 2024-08-06 8:27 ` Andrew Jones
0 siblings, 0 replies; 41+ messages in thread
From: Andrew Jones @ 2024-08-06 8:27 UTC (permalink / raw)
To: Alistair Francis
Cc: Atish Patra, qemu-riscv, qemu-devel, palmer, liwei1518,
zhiwei_liu, bin.meng, dbarboza, alistair.francis
On Tue, Aug 06, 2024 at 11:51:20AM GMT, Alistair Francis wrote:
> On Wed, Jul 24, 2024 at 9:33 AM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > Sscofpmf has been supported on virt machine for a long time. It is
> > required to enable profiling on virt machines. Let's enable it
> > by default for ease of usage.
> >
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> > target/riscv/cpu.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index 1731dc461376..393d1d67120e 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -494,6 +494,7 @@ static void rv64_base_cpu_init(Object *obj)
> > env->priv_ver = PRIV_VERSION_LATEST;
> > #ifndef CONFIG_USER_ONLY
> > set_satp_mode_max_supported(RISCV_CPU(obj), VM_1_10_SV57);
> > + cpu->cfg.ext_sscofpmf = true;
>
> Unfortunately we don't want to do this, the base CPU should be bare
> bones and then users can enable extensions.
But we do want to enable stuff by default in the 'max' cpu type.
Thanks,
drew
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 11/13] target/riscv: Repurpose the implied rule strategy
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (9 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 10/13] target/riscv: Enable sscofpmf for bare cpu by default Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-07-23 23:30 ` [PATCH v2 12/13] target/riscv: Add a preferred ISA extension rule Atish Patra
2024-07-23 23:30 ` [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule Atish Patra
12 siblings, 0 replies; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
The current infrastructure for implied ISA extension enabling can
be used for other cases where a particular ISA extension is dependent on
multiple other ISA extensions to enable all of its features.
Rename the implied rule functions/data structures to accommodate that.
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/tcg/tcg-cpu.c | 32 ++++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index d78d5960cf30..1c9a87029423 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -40,7 +40,7 @@
static GHashTable *multi_ext_user_opts;
static GHashTable *misa_ext_user_opts;
-static GHashTable *multi_ext_implied_rules;
+static GHashTable *multi_ext_enabling_rules;
static GHashTable *misa_ext_implied_rules;
static bool cpu_cfg_ext_is_user_set(uint32_t ext_offset)
@@ -730,7 +730,7 @@ static void riscv_cpu_validate_profiles(RISCVCPU *cpu)
}
}
-static void riscv_cpu_init_implied_exts_rules(void)
+static void riscv_cpu_init_ext_rules(void)
{
RISCVCPUImpliedExtsRule *rule;
#ifndef CONFIG_USER_ONLY
@@ -756,14 +756,14 @@ static void riscv_cpu_init_implied_exts_rules(void)
#ifndef CONFIG_USER_ONLY
rule->enabled = bitmap_new(ms->smp.cpus);
#endif
- g_hash_table_insert(multi_ext_implied_rules,
+ g_hash_table_insert(multi_ext_enabling_rules,
GUINT_TO_POINTER(rule->ext), (gpointer)rule);
}
initialized = true;
}
-static void cpu_enable_implied_rule(RISCVCPU *cpu,
+static void cpu_enable_ext_rule(RISCVCPU *cpu,
RISCVCPUImpliedExtsRule *rule)
{
CPURISCVState *env = &cpu->env;
@@ -787,7 +787,7 @@ static void cpu_enable_implied_rule(RISCVCPU *cpu,
GUINT_TO_POINTER(misa_bits[i]));
if (ir) {
- cpu_enable_implied_rule(cpu, ir);
+ cpu_enable_ext_rule(cpu, ir);
}
}
}
@@ -798,12 +798,12 @@ static void cpu_enable_implied_rule(RISCVCPU *cpu,
rule->implied_multi_exts[i] != RISCV_IMPLIED_EXTS_RULE_END; i++) {
cpu_cfg_ext_auto_update(cpu, rule->implied_multi_exts[i], true);
- ir = g_hash_table_lookup(multi_ext_implied_rules,
- GUINT_TO_POINTER(
- rule->implied_multi_exts[i]));
+ ir = g_hash_table_lookup(multi_ext_enabling_rules,
+ GUINT_TO_POINTER(
+ rule->implied_multi_exts[i]));
if (ir) {
- cpu_enable_implied_rule(cpu, ir);
+ cpu_enable_ext_rule(cpu, ir);
}
}
@@ -844,7 +844,7 @@ static void cpu_enable_zc_implied_rules(RISCVCPU *cpu)
}
}
-static void riscv_cpu_enable_implied_rules(RISCVCPU *cpu)
+static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
{
RISCVCPUImpliedExtsRule *rule;
int i;
@@ -855,14 +855,14 @@ static void riscv_cpu_enable_implied_rules(RISCVCPU *cpu)
/* Enable the implied MISAs. */
for (i = 0; (rule = riscv_misa_ext_implied_rules[i]); i++) {
if (riscv_has_ext(&cpu->env, rule->ext)) {
- cpu_enable_implied_rule(cpu, rule);
+ cpu_enable_ext_rule(cpu, rule);
}
}
/* Enable the implied extensions. */
for (i = 0; (rule = riscv_multi_ext_implied_rules[i]); i++) {
if (isa_ext_is_enabled(cpu, rule->ext)) {
- cpu_enable_implied_rule(cpu, rule);
+ cpu_enable_ext_rule(cpu, rule);
}
}
}
@@ -872,8 +872,8 @@ void riscv_tcg_cpu_finalize_features(RISCVCPU *cpu, Error **errp)
CPURISCVState *env = &cpu->env;
Error *local_err = NULL;
- riscv_cpu_init_implied_exts_rules();
- riscv_cpu_enable_implied_rules(cpu);
+ riscv_cpu_init_ext_rules();
+ riscv_cpu_enable_ext_rules(cpu);
riscv_cpu_validate_misa_priv(env, &local_err);
if (local_err != NULL) {
@@ -1385,8 +1385,8 @@ static void riscv_tcg_cpu_instance_init(CPUState *cs)
misa_ext_implied_rules = g_hash_table_new(NULL, g_direct_equal);
}
- if (!multi_ext_implied_rules) {
- multi_ext_implied_rules = g_hash_table_new(NULL, g_direct_equal);
+ if (!multi_ext_enabling_rules) {
+ multi_ext_enabling_rules = g_hash_table_new(NULL, g_direct_equal);
}
riscv_cpu_add_user_properties(obj);
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH v2 12/13] target/riscv: Add a preferred ISA extension rule
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (10 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 11/13] target/riscv: Repurpose the implied rule strategy Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 1:48 ` Alistair Francis
2024-07-23 23:30 ` [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule Atish Patra
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
In addition to the implied rule, a preferred rule will be useful
where an ISA extension may require a list of ISA extensions to be
enabled to use all the features defined in that extension. Not all
of these extensions may be implied by the ISA.
This patch just introduces a new preferred rule, which allows
multiple extensions to be enabled together without burdening the
QEMU command-line user.
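As a rough illustration of how such a rule is meant to be declared (the
Ssccfg rule added later in this series is the real instance; the
extension list here is shortened and purely illustrative):
  /* Extensions commonly wanted together with Ssccfg (shortened). */
  static RISCVCPUPreferredExtsRule SSCCFG_PREFERRED = {
      .ext = CPU_CFG_OFFSET(ext_ssccfg),
      .preferred_multi_exts = {
          CPU_CFG_OFFSET(ext_sscofpmf), CPU_CFG_OFFSET(ext_smcdeleg),
          RISCV_PREFRRED_EXTS_RULE_END
      },
  };
  RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
      &SSCCFG_PREFERRED, NULL
  };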
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 4 ++++
target/riscv/cpu.h | 17 ++++++++++++++
target/riscv/tcg/tcg-cpu.c | 57 ++++++++++++++++++++++++++++++++++++++++------
3 files changed, 71 insertions(+), 7 deletions(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 393d1d67120e..22ba43c7ff2a 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -2665,6 +2665,10 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
NULL
};
+RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
+ NULL
+};
+
static Property riscv_cpu_properties[] = {
DEFINE_PROP_BOOL("debug", RISCVCPU, cfg.debug, true),
diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
index af25550a4a54..d775866344f5 100644
--- a/target/riscv/cpu.h
+++ b/target/riscv/cpu.h
@@ -142,10 +142,27 @@ typedef struct riscv_cpu_implied_exts_rule {
const uint32_t implied_multi_exts[];
} RISCVCPUImpliedExtsRule;
+typedef struct riscv_cpu_preferred_exts_rule {
+#ifndef CONFIG_USER_ONLY
+ /*
+ * Bitmask indicates the rule enabled status for the harts.
+ * This enhancement is only available in system-mode QEMU,
+ * as we don't have a good way (e.g. mhartid) to distinguish
+ * the SMP cores in user-mode QEMU.
+ */
+ unsigned long *enabled;
+#endif
+ /* multi extension offset. */
+ const uint32_t ext;
+ const uint32_t preferred_multi_exts[];
+} RISCVCPUPreferredExtsRule;
+
extern RISCVCPUImpliedExtsRule *riscv_misa_ext_implied_rules[];
extern RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[];
+extern RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[];
#define RISCV_IMPLIED_EXTS_RULE_END -1
+#define RISCV_PREFRRED_EXTS_RULE_END RISCV_IMPLIED_EXTS_RULE_END
#define MMU_USER_IDX 3
diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
index 1c9a87029423..d8f74720815a 100644
--- a/target/riscv/tcg/tcg-cpu.c
+++ b/target/riscv/tcg/tcg-cpu.c
@@ -32,6 +32,7 @@
#include "hw/core/accel-cpu.h"
#include "hw/core/tcg-cpu-ops.h"
#include "tcg/tcg.h"
+#include <stdio.h>
#ifndef CONFIG_USER_ONLY
#include "hw/boards.h"
#endif
@@ -733,6 +734,7 @@ static void riscv_cpu_validate_profiles(RISCVCPU *cpu)
static void riscv_cpu_init_ext_rules(void)
{
RISCVCPUImpliedExtsRule *rule;
+ RISCVCPUPreferredExtsRule *prule;
#ifndef CONFIG_USER_ONLY
MachineState *ms = MACHINE(qdev_get_machine());
#endif
@@ -760,22 +762,40 @@ static void riscv_cpu_init_ext_rules(void)
GUINT_TO_POINTER(rule->ext), (gpointer)rule);
}
+ for (i = 0; (prule = riscv_multi_ext_preferred_rules[i]); i++) {
+#ifndef CONFIG_USER_ONLY
+ prule->enabled = bitmap_new(ms->smp.cpus);
+#endif
+ g_hash_table_insert(multi_ext_enabling_rules,
+ GUINT_TO_POINTER(prule->ext), (gpointer)prule);
+ }
+
initialized = true;
}
static void cpu_enable_ext_rule(RISCVCPU *cpu,
- RISCVCPUImpliedExtsRule *rule)
+ RISCVCPUImpliedExtsRule *rule,
+ RISCVCPUPreferredExtsRule *prule)
{
CPURISCVState *env = &cpu->env;
RISCVCPUImpliedExtsRule *ir;
+ RISCVCPUPreferredExtsRule *pr;
bool enabled = false;
int i;
#ifndef CONFIG_USER_ONLY
- enabled = test_bit(cpu->env.mhartid, rule->enabled);
+ if (rule) {
+ enabled = test_bit(cpu->env.mhartid, rule->enabled);
+ } else if (prule) {
+ enabled = test_bit(cpu->env.mhartid, prule->enabled);
+ } else {
+ return;
+ }
#endif
+ if (enabled)
+ return;
- if (!enabled) {
+ if (rule) {
/* Enable the implied MISAs. */
if (rule->implied_misa_exts) {
riscv_cpu_set_misa_ext(env,
@@ -787,7 +807,7 @@ static void cpu_enable_ext_rule(RISCVCPU *cpu,
GUINT_TO_POINTER(misa_bits[i]));
if (ir) {
- cpu_enable_ext_rule(cpu, ir);
+ cpu_enable_ext_rule(cpu, ir, NULL);
}
}
}
@@ -803,12 +823,27 @@ static void cpu_enable_ext_rule(RISCVCPU *cpu,
rule->implied_multi_exts[i]));
if (ir) {
- cpu_enable_ext_rule(cpu, ir);
+ cpu_enable_ext_rule(cpu, ir, NULL);
}
}
#ifndef CONFIG_USER_ONLY
bitmap_set(rule->enabled, cpu->env.mhartid, 1);
+#endif
+ } else if (prule) {
+ /* Enable the preferred extensions. */
+ for (i = 0;
+ prule->preferred_multi_exts[i] != RISCV_PREFRRED_EXTS_RULE_END; i++) {
+ cpu_cfg_ext_auto_update(cpu, prule->preferred_multi_exts[i], true);
+ pr = g_hash_table_lookup(multi_ext_enabling_rules,
+ GUINT_TO_POINTER(
+ prule->preferred_multi_exts[i]));
+ if (pr) {
+ cpu_enable_ext_rule(cpu, NULL, prule);
+ }
+ }
+#ifndef CONFIG_USER_ONLY
+ bitmap_set(prule->enabled, cpu->env.mhartid, 1);
#endif
}
}
@@ -847,6 +882,7 @@ static void cpu_enable_zc_implied_rules(RISCVCPU *cpu)
static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
{
RISCVCPUImpliedExtsRule *rule;
+ RISCVCPUPreferredExtsRule *prule;
int i;
/* Enable the implied extensions for Zc. */
@@ -855,14 +891,21 @@ static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
/* Enable the implied MISAs. */
for (i = 0; (rule = riscv_misa_ext_implied_rules[i]); i++) {
if (riscv_has_ext(&cpu->env, rule->ext)) {
- cpu_enable_ext_rule(cpu, rule);
+ cpu_enable_ext_rule(cpu, rule, NULL);
}
}
/* Enable the implied extensions. */
for (i = 0; (rule = riscv_multi_ext_implied_rules[i]); i++) {
if (isa_ext_is_enabled(cpu, rule->ext)) {
- cpu_enable_ext_rule(cpu, rule);
+ cpu_enable_ext_rule(cpu, rule, NULL);
+ }
+ }
+
+ /* Enable the preferred extensions. */
+ for (i = 0; (prule = riscv_multi_ext_preferred_rules[i]); i++) {
+ if (isa_ext_is_enabled(cpu, prule->ext)) {
+ cpu_enable_ext_rule(cpu, NULL, prule);
}
}
}
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread
* Re: [PATCH v2 12/13] target/riscv: Add a preferred ISA extension rule
2024-07-23 23:30 ` [PATCH v2 12/13] target/riscv: Add a preferred ISA extension rule Atish Patra
@ 2024-08-06 1:48 ` Alistair Francis
2024-08-07 7:46 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-06 1:48 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
>
> In addition to the implied rule, a preferred rule will be useful
> where an ISA extension may require a list of ISA extensions to be
> enabled to use all the features defined in that extension. Not all
> of these extensions may be implied by the ISA.
This seems practically the same as an implied rule, I'm not sure we
need a separate list of rules
Alistair
>
> This patch just introduces a new preferred rule, which allows
> multiple extensions to be enabled together without burdening the
> QEMU command-line user.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 4 ++++
> target/riscv/cpu.h | 17 ++++++++++++++
> target/riscv/tcg/tcg-cpu.c | 57 ++++++++++++++++++++++++++++++++++++++++------
> 3 files changed, 71 insertions(+), 7 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 393d1d67120e..22ba43c7ff2a 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -2665,6 +2665,10 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
> NULL
> };
>
> +RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
> + NULL
> +};
> +
> static Property riscv_cpu_properties[] = {
> DEFINE_PROP_BOOL("debug", RISCVCPU, cfg.debug, true),
>
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index af25550a4a54..d775866344f5 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -142,10 +142,27 @@ typedef struct riscv_cpu_implied_exts_rule {
> const uint32_t implied_multi_exts[];
> } RISCVCPUImpliedExtsRule;
>
> +typedef struct riscv_cpu_preferred_exts_rule {
> +#ifndef CONFIG_USER_ONLY
> + /*
> + * Bitmask indicates the rule enabled status for the harts.
> + * This enhancement is only available in system-mode QEMU,
> + * as we don't have a good way (e.g. mhartid) to distinguish
> + * the SMP cores in user-mode QEMU.
> + */
> + unsigned long *enabled;
> +#endif
> + /* multi extension offset. */
> + const uint32_t ext;
> + const uint32_t preferred_multi_exts[];
> +} RISCVCPUPreferredExtsRule;
> +
> extern RISCVCPUImpliedExtsRule *riscv_misa_ext_implied_rules[];
> extern RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[];
> +extern RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[];
>
> #define RISCV_IMPLIED_EXTS_RULE_END -1
> +#define RISCV_PREFRRED_EXTS_RULE_END RISCV_IMPLIED_EXTS_RULE_END
>
> #define MMU_USER_IDX 3
>
> diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
> index 1c9a87029423..d8f74720815a 100644
> --- a/target/riscv/tcg/tcg-cpu.c
> +++ b/target/riscv/tcg/tcg-cpu.c
> @@ -32,6 +32,7 @@
> #include "hw/core/accel-cpu.h"
> #include "hw/core/tcg-cpu-ops.h"
> #include "tcg/tcg.h"
> +#include <stdio.h>
> #ifndef CONFIG_USER_ONLY
> #include "hw/boards.h"
> #endif
> @@ -733,6 +734,7 @@ static void riscv_cpu_validate_profiles(RISCVCPU *cpu)
> static void riscv_cpu_init_ext_rules(void)
> {
> RISCVCPUImpliedExtsRule *rule;
> + RISCVCPUPreferredExtsRule *prule;
> #ifndef CONFIG_USER_ONLY
> MachineState *ms = MACHINE(qdev_get_machine());
> #endif
> @@ -760,22 +762,40 @@ static void riscv_cpu_init_ext_rules(void)
> GUINT_TO_POINTER(rule->ext), (gpointer)rule);
> }
>
> + for (i = 0; (prule = riscv_multi_ext_preferred_rules[i]); i++) {
> +#ifndef CONFIG_USER_ONLY
> + prule->enabled = bitmap_new(ms->smp.cpus);
> +#endif
> + g_hash_table_insert(multi_ext_enabling_rules,
> + GUINT_TO_POINTER(prule->ext), (gpointer)prule);
> + }
> +
> initialized = true;
> }
>
> static void cpu_enable_ext_rule(RISCVCPU *cpu,
> - RISCVCPUImpliedExtsRule *rule)
> + RISCVCPUImpliedExtsRule *rule,
> + RISCVCPUPreferredExtsRule *prule)
> {
> CPURISCVState *env = &cpu->env;
> RISCVCPUImpliedExtsRule *ir;
> + RISCVCPUPreferredExtsRule *pr;
> bool enabled = false;
> int i;
>
> #ifndef CONFIG_USER_ONLY
> - enabled = test_bit(cpu->env.mhartid, rule->enabled);
> + if (rule) {
> + enabled = test_bit(cpu->env.mhartid, rule->enabled);
> + } else if (prule) {
> + enabled = test_bit(cpu->env.mhartid, prule->enabled);
> + } else {
> + return;
> + }
> #endif
> + if (enabled)
> + return;
>
> - if (!enabled) {
> + if (rule) {
> /* Enable the implied MISAs. */
> if (rule->implied_misa_exts) {
> riscv_cpu_set_misa_ext(env,
> @@ -787,7 +807,7 @@ static void cpu_enable_ext_rule(RISCVCPU *cpu,
> GUINT_TO_POINTER(misa_bits[i]));
>
> if (ir) {
> - cpu_enable_ext_rule(cpu, ir);
> + cpu_enable_ext_rule(cpu, ir, NULL);
> }
> }
> }
> @@ -803,12 +823,27 @@ static void cpu_enable_ext_rule(RISCVCPU *cpu,
> rule->implied_multi_exts[i]));
>
> if (ir) {
> - cpu_enable_ext_rule(cpu, ir);
> + cpu_enable_ext_rule(cpu, ir, NULL);
> }
> }
>
> #ifndef CONFIG_USER_ONLY
> bitmap_set(rule->enabled, cpu->env.mhartid, 1);
> +#endif
> + } else if (prule) {
> + /* Enable the preferred extensions. */
> + for (i = 0;
> + prule->preferred_multi_exts[i] != RISCV_PREFRRED_EXTS_RULE_END; i++) {
> + cpu_cfg_ext_auto_update(cpu, prule->preferred_multi_exts[i], true);
> + pr = g_hash_table_lookup(multi_ext_enabling_rules,
> + GUINT_TO_POINTER(
> + prule->preferred_multi_exts[i]));
> + if (pr) {
> + cpu_enable_ext_rule(cpu, NULL, prule);
> + }
> + }
> +#ifndef CONFIG_USER_ONLY
> + bitmap_set(prule->enabled, cpu->env.mhartid, 1);
> #endif
> }
> }
> @@ -847,6 +882,7 @@ static void cpu_enable_zc_implied_rules(RISCVCPU *cpu)
> static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
> {
> RISCVCPUImpliedExtsRule *rule;
> + RISCVCPUPreferredExtsRule *prule;
> int i;
>
> /* Enable the implied extensions for Zc. */
> @@ -855,14 +891,21 @@ static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
> /* Enable the implied MISAs. */
> for (i = 0; (rule = riscv_misa_ext_implied_rules[i]); i++) {
> if (riscv_has_ext(&cpu->env, rule->ext)) {
> - cpu_enable_ext_rule(cpu, rule);
> + cpu_enable_ext_rule(cpu, rule, NULL);
> }
> }
>
> /* Enable the implied extensions. */
> for (i = 0; (rule = riscv_multi_ext_implied_rules[i]); i++) {
> if (isa_ext_is_enabled(cpu, rule->ext)) {
> - cpu_enable_ext_rule(cpu, rule);
> + cpu_enable_ext_rule(cpu, rule, NULL);
> + }
> + }
> +
> + /* Enable the preferred extensions. */
> + for (i = 0; (prule = riscv_multi_ext_preferred_rules[i]); i++) {
> + if (isa_ext_is_enabled(cpu, prule->ext)) {
> + cpu_enable_ext_rule(cpu, NULL, prule);
> }
> }
> }
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread* Re: [PATCH v2 12/13] target/riscv: Add a preferred ISA extension rule
2024-08-06 1:48 ` Alistair Francis
@ 2024-08-07 7:46 ` Atish Kumar Patra
0 siblings, 0 replies; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-07 7:46 UTC (permalink / raw)
To: Alistair Francis
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Mon, Aug 5, 2024 at 6:48 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Jul 24, 2024 at 9:32 AM Atish Patra <atishp@rivosinc.com> wrote:
> >
> > In addition to the implied rule, a preferred rule will be useful
> > where an ISA extension may require a list of ISA extensions to be
> > enabled to use all the features defined in that extension. Not all
> > of these extensions may be implied by the ISA.
>
> This seems practically the same as an implied rule, I'm not sure we
> need a separate list of rules
>
As per my understanding, the implied rule defines the mandatory dependent
extensions specified by the ISA. The preferred rule allows you to enable
more related extensions which the user will commonly enable but is not
required to.
These preferred extensions can also be individually disabled if a user
wants. This feature needs to be verified though, as I just wanted to get
feedback on the preferred rule approach.
Let me know if I misunderstood the intention of the implied rule.
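For example, with the Ssccfg rule in the last patch the intended
behaviour would roughly be (command lines illustrative; the override
handling still needs to be verified, as noted above):
  -cpu rv64,ssccfg=on                # also turns on sscofpmf, smcdeleg, ...
  -cpu rv64,ssccfg=on,sscofpmf=off   # a preferred extension opted back out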
> Alistair
>
> >
> > This patch just introduces a new preferred rule, which allows
> > multiple extensions to be enabled together without burdening the
> > QEMU command-line user.
> >
> > Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > ---
> > target/riscv/cpu.c | 4 ++++
> > target/riscv/cpu.h | 17 ++++++++++++++
> > target/riscv/tcg/tcg-cpu.c | 57 ++++++++++++++++++++++++++++++++++++++++------
> > 3 files changed, 71 insertions(+), 7 deletions(-)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index 393d1d67120e..22ba43c7ff2a 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -2665,6 +2665,10 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
> > NULL
> > };
> >
> > +RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
> > + NULL
> > +};
> > +
> > static Property riscv_cpu_properties[] = {
> > DEFINE_PROP_BOOL("debug", RISCVCPU, cfg.debug, true),
> >
> > diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> > index af25550a4a54..d775866344f5 100644
> > --- a/target/riscv/cpu.h
> > +++ b/target/riscv/cpu.h
> > @@ -142,10 +142,27 @@ typedef struct riscv_cpu_implied_exts_rule {
> > const uint32_t implied_multi_exts[];
> > } RISCVCPUImpliedExtsRule;
> >
> > +typedef struct riscv_cpu_preferred_exts_rule {
> > +#ifndef CONFIG_USER_ONLY
> > + /*
> > + * Bitmask indicates the rule enabled status for the harts.
> > + * This enhancement is only available in system-mode QEMU,
> > + * as we don't have a good way (e.g. mhartid) to distinguish
> > + * the SMP cores in user-mode QEMU.
> > + */
> > + unsigned long *enabled;
> > +#endif
> > + /* multi extension offset. */
> > + const uint32_t ext;
> > + const uint32_t preferred_multi_exts[];
> > +} RISCVCPUPreferredExtsRule;
> > +
> > extern RISCVCPUImpliedExtsRule *riscv_misa_ext_implied_rules[];
> > extern RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[];
> > +extern RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[];
> >
> > #define RISCV_IMPLIED_EXTS_RULE_END -1
> > +#define RISCV_PREFRRED_EXTS_RULE_END RISCV_IMPLIED_EXTS_RULE_END
> >
> > #define MMU_USER_IDX 3
> >
> > diff --git a/target/riscv/tcg/tcg-cpu.c b/target/riscv/tcg/tcg-cpu.c
> > index 1c9a87029423..d8f74720815a 100644
> > --- a/target/riscv/tcg/tcg-cpu.c
> > +++ b/target/riscv/tcg/tcg-cpu.c
> > @@ -32,6 +32,7 @@
> > #include "hw/core/accel-cpu.h"
> > #include "hw/core/tcg-cpu-ops.h"
> > #include "tcg/tcg.h"
> > +#include <stdio.h>
> > #ifndef CONFIG_USER_ONLY
> > #include "hw/boards.h"
> > #endif
> > @@ -733,6 +734,7 @@ static void riscv_cpu_validate_profiles(RISCVCPU *cpu)
> > static void riscv_cpu_init_ext_rules(void)
> > {
> > RISCVCPUImpliedExtsRule *rule;
> > + RISCVCPUPreferredExtsRule *prule;
> > #ifndef CONFIG_USER_ONLY
> > MachineState *ms = MACHINE(qdev_get_machine());
> > #endif
> > @@ -760,22 +762,40 @@ static void riscv_cpu_init_ext_rules(void)
> > GUINT_TO_POINTER(rule->ext), (gpointer)rule);
> > }
> >
> > + for (i = 0; (prule = riscv_multi_ext_preferred_rules[i]); i++) {
> > +#ifndef CONFIG_USER_ONLY
> > + prule->enabled = bitmap_new(ms->smp.cpus);
> > +#endif
> > + g_hash_table_insert(multi_ext_enabling_rules,
> > + GUINT_TO_POINTER(prule->ext), (gpointer)prule);
> > + }
> > +
> > initialized = true;
> > }
> >
> > static void cpu_enable_ext_rule(RISCVCPU *cpu,
> > - RISCVCPUImpliedExtsRule *rule)
> > + RISCVCPUImpliedExtsRule *rule,
> > + RISCVCPUPreferredExtsRule *prule)
> > {
> > CPURISCVState *env = &cpu->env;
> > RISCVCPUImpliedExtsRule *ir;
> > + RISCVCPUPreferredExtsRule *pr;
> > bool enabled = false;
> > int i;
> >
> > #ifndef CONFIG_USER_ONLY
> > - enabled = test_bit(cpu->env.mhartid, rule->enabled);
> > + if (rule) {
> > + enabled = test_bit(cpu->env.mhartid, rule->enabled);
> > + } else if (prule) {
> > + enabled = test_bit(cpu->env.mhartid, prule->enabled);
> > + } else {
> > + return;
> > + }
> > #endif
> > + if (enabled)
> > + return;
> >
> > - if (!enabled) {
> > + if (rule) {
> > /* Enable the implied MISAs. */
> > if (rule->implied_misa_exts) {
> > riscv_cpu_set_misa_ext(env,
> > @@ -787,7 +807,7 @@ static void cpu_enable_ext_rule(RISCVCPU *cpu,
> > GUINT_TO_POINTER(misa_bits[i]));
> >
> > if (ir) {
> > - cpu_enable_ext_rule(cpu, ir);
> > + cpu_enable_ext_rule(cpu, ir, NULL);
> > }
> > }
> > }
> > @@ -803,12 +823,27 @@ static void cpu_enable_ext_rule(RISCVCPU *cpu,
> > rule->implied_multi_exts[i]));
> >
> > if (ir) {
> > - cpu_enable_ext_rule(cpu, ir);
> > + cpu_enable_ext_rule(cpu, ir, NULL);
> > }
> > }
> >
> > #ifndef CONFIG_USER_ONLY
> > bitmap_set(rule->enabled, cpu->env.mhartid, 1);
> > +#endif
> > + } else if (prule) {
> > + /* Enable the preferred extensions. */
> > + for (i = 0;
> > + prule->preferred_multi_exts[i] != RISCV_PREFRRED_EXTS_RULE_END; i++) {
> > + cpu_cfg_ext_auto_update(cpu, prule->preferred_multi_exts[i], true);
> > + pr = g_hash_table_lookup(multi_ext_enabling_rules,
> > + GUINT_TO_POINTER(
> > + prule->preferred_multi_exts[i]));
> > + if (pr) {
> > + cpu_enable_ext_rule(cpu, NULL, prule);
> > + }
> > + }
> > +#ifndef CONFIG_USER_ONLY
> > + bitmap_set(prule->enabled, cpu->env.mhartid, 1);
> > #endif
> > }
> > }
> > @@ -847,6 +882,7 @@ static void cpu_enable_zc_implied_rules(RISCVCPU *cpu)
> > static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
> > {
> > RISCVCPUImpliedExtsRule *rule;
> > + RISCVCPUPreferredExtsRule *prule;
> > int i;
> >
> > /* Enable the implied extensions for Zc. */
> > @@ -855,14 +891,21 @@ static void riscv_cpu_enable_ext_rules(RISCVCPU *cpu)
> > /* Enable the implied MISAs. */
> > for (i = 0; (rule = riscv_misa_ext_implied_rules[i]); i++) {
> > if (riscv_has_ext(&cpu->env, rule->ext)) {
> > - cpu_enable_ext_rule(cpu, rule);
> > + cpu_enable_ext_rule(cpu, rule, NULL);
> > }
> > }
> >
> > /* Enable the implied extensions. */
> > for (i = 0; (rule = riscv_multi_ext_implied_rules[i]); i++) {
> > if (isa_ext_is_enabled(cpu, rule->ext)) {
> > - cpu_enable_ext_rule(cpu, rule);
> > + cpu_enable_ext_rule(cpu, rule, NULL);
> > + }
> > + }
> > +
> > + /* Enable the preferred extensions. */
> > + for (i = 0; (prule = riscv_multi_ext_preferred_rules[i]); i++) {
> > + if (isa_ext_is_enabled(cpu, prule->ext)) {
> > + cpu_enable_ext_rule(cpu, NULL, prule);
> > }
> > }
> > }
> >
> > --
> > 2.34.1
> >
> >
^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-07-23 23:29 [PATCH v2 00/13] Add RISC-V Counter delegation ISA extension support Atish Patra
` (11 preceding siblings ...)
2024-07-23 23:30 ` [PATCH v2 12/13] target/riscv: Add a preferred ISA extension rule Atish Patra
@ 2024-07-23 23:30 ` Atish Patra
2024-08-06 8:46 ` Andrew Jones
12 siblings, 1 reply; 41+ messages in thread
From: Atish Patra @ 2024-07-23 23:30 UTC (permalink / raw)
To: qemu-riscv, qemu-devel
Cc: palmer, liwei1518, zhiwei_liu, bin.meng, dbarboza,
alistair.francis
Counter delegation/configuration extension requires the following
extensions to be enabled.
1. Smcdeleg - To enable counter delegation from M to S
2. S[m|s]csrind - To enable indirect access CSRs
3. Smstateen - Indirect CSR extensions depend on it.
4. Sscofpmf - To enable counter overflow feature
5. S[m|s]aia - To enable counter overflow feature in virtualization
6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
While the first 3 are mandatory to enable counter delegation, the next 3
are preferred so that all the PMU related features are available. That's
why all of these are enabled when the Ssccfg extension is enabled on the
command line.
Signed-off-by: Atish Patra <atishp@rivosinc.com>
---
target/riscv/cpu.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
index 22ba43c7ff2a..abebfcc46dea 100644
--- a/target/riscv/cpu.c
+++ b/target/riscv/cpu.c
@@ -2665,8 +2665,20 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
NULL
};
+static RISCVCPUPreferredExtsRule SSCCFG_PREFERRED = {
+ .ext = CPU_CFG_OFFSET(ext_ssccfg),
+ .preferred_multi_exts = {
+ CPU_CFG_OFFSET(ext_smcsrind), CPU_CFG_OFFSET(ext_sscsrind),
+ CPU_CFG_OFFSET(ext_ssaia), CPU_CFG_OFFSET(ext_smaia),
+ CPU_CFG_OFFSET(ext_smstateen), CPU_CFG_OFFSET(ext_sscofpmf),
+ CPU_CFG_OFFSET(ext_smcntrpmf), CPU_CFG_OFFSET(ext_smcdeleg),
+
+ RISCV_PREFRRED_EXTS_RULE_END
+ },
+};
+
RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
- NULL
+ &SSCCFG_PREFERRED, NULL
};
static Property riscv_cpu_properties[] = {
--
2.34.1
^ permalink raw reply related [flat|nested] 41+ messages in thread* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-07-23 23:30 ` [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule Atish Patra
@ 2024-08-06 8:46 ` Andrew Jones
2024-08-06 16:05 ` Daniel Henrique Barboza
0 siblings, 1 reply; 41+ messages in thread
From: Andrew Jones @ 2024-08-06 8:46 UTC (permalink / raw)
To: Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
dbarboza, alistair.francis
On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> Counter delegation/configuration extension requires the following
> extensions to be enabled.
>
> 1. Smcdeleg - To enable counter delegation from M to S
> 2. S[m|s]csrind - To enable indirect access CSRs
> 3. Smstateen - Indirect CSR extensions depend on it.
> 4. Sscofpmf - To enable counter overflow feature
> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
>
> While first 3 are mandatory to enable the counter delegation,
> next 3 set of extension are preferred to enable all the PMU related
> features.
Just my 2 cents, but I think for the first three we can apply the concept
of extension bundles, which we need for other extensions as well. In those
cases we just auto enable all the dependencies. For the three preferred
extensions I think we can just leave them off for 'base', but we should
enable them by default for 'max' along with Ssccfg.
Thanks,
drew
> That's why, enable all of these if Ssccfg extension is
> enabled from the commandline.
>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
> target/riscv/cpu.c | 14 +++++++++++++-
> 1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 22ba43c7ff2a..abebfcc46dea 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -2665,8 +2665,20 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
> NULL
> };
>
> +static RISCVCPUPreferredExtsRule SSCCFG_PREFERRED = {
> + .ext = CPU_CFG_OFFSET(ext_ssccfg),
> + .preferred_multi_exts = {
> + CPU_CFG_OFFSET(ext_smcsrind), CPU_CFG_OFFSET(ext_sscsrind),
> + CPU_CFG_OFFSET(ext_ssaia), CPU_CFG_OFFSET(ext_smaia),
> + CPU_CFG_OFFSET(ext_smstateen), CPU_CFG_OFFSET(ext_sscofpmf),
> + CPU_CFG_OFFSET(ext_smcntrpmf), CPU_CFG_OFFSET(ext_smcdeleg),
> +
> + RISCV_PREFRRED_EXTS_RULE_END
> + },
> +};
> +
> RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
> - NULL
> + &SSCCFG_PREFERRED, NULL
> };
>
> static Property riscv_cpu_properties[] = {
>
> --
> 2.34.1
>
>
^ permalink raw reply [flat|nested] 41+ messages in thread* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-06 8:46 ` Andrew Jones
@ 2024-08-06 16:05 ` Daniel Henrique Barboza
2024-08-07 2:01 ` Alistair Francis
0 siblings, 1 reply; 41+ messages in thread
From: Daniel Henrique Barboza @ 2024-08-06 16:05 UTC (permalink / raw)
To: Andrew Jones, Atish Patra
Cc: qemu-riscv, qemu-devel, palmer, liwei1518, zhiwei_liu, bin.meng,
alistair.francis
On 8/6/24 5:46 AM, Andrew Jones wrote:
> On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
>> Counter delegation/configuration extension requires the following
>> extensions to be enabled.
>>
>> 1. Smcdeleg - To enable counter delegation from M to S
>> 2. S[m|s]csrind - To enable indirect access CSRs
>> 3. Smstateen - Indirect CSR extensions depend on it.
>> 4. Sscofpmf - To enable counter overflow feature
>> 5. S[m|s]aia - To enable counter overflow feature in virtualization
>> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
>>
>> While first 3 are mandatory to enable the counter delegation,
>> next 3 set of extension are preferred to enable all the PMU related
>> features.
>
> Just my 2 cents, but I think for the first three we can apply the concept
> of extension bundles, which we need for other extensions as well. In those
> cases we just auto enable all the dependencies. For the three preferred
> extensions I think we can just leave them off for 'base', but we should
> enable them by default for 'max' along with Ssccfg.
I like this idea. I would throw all these 6 extensions into a
'pmu_advanced_ops' flag (or any better fitting name for the bundle), and
then 'pmu_advanced_ops=true' would enable all of those.
'pmu_advanced_ops=true,smcntrpmf=false' enables all but 'smcntrpmf', and
so on.
As long as we document what the flag is enabling I don't see any problems with it.
This is how profiles are implemented after all.
With this bundle we can also use the implied rule only when an extension
really needs a dependency to be enabled (i.e. it breaks without it),
instead of overloading it with extensions that 'would be nice to have
together', as seems to be the case for the last 3 extensions in that list.
I believe users would benefit more from a single flag to enable everything and
be done with it.
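To make that concrete, here is a rough sketch of the kind of thing I have
in mind. The 'pmu_advanced_ops' name, the cfg field and the helper below
are all hypothetical, and it assumes cpu_cfg_ext_auto_update() skips
extensions the user set explicitly on the command line:
    /* Hypothetical bundle flag next to the other bools in RISCVCPUConfig. */
    bool pmu_advanced_ops;
    /* Hypothetical entry for riscv_cpu_properties[]. */
    DEFINE_PROP_BOOL("pmu_advanced_ops", RISCVCPU, cfg.pmu_advanced_ops, false),
    /* Hypothetical helper, run at realize time after user options are applied. */
    static void riscv_cpu_enable_pmu_bundle(RISCVCPU *cpu)
    {
        if (!cpu->cfg.pmu_advanced_ops) {
            return;
        }
        /* Mandatory for counter delegation. */
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_ssccfg), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_smcdeleg), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_smcsrind), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_sscsrind), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_smstateen), true);
        /* Nice to have for the full PMU feature set. */
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_sscofpmf), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_smaia), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_ssaia), true);
        cpu_cfg_ext_auto_update(cpu, CPU_CFG_OFFSET(ext_smcntrpmf), true);
    }
If that assumption about the auto update holds,
'pmu_advanced_ops=true,smcntrpmf=false' would still leave smcntrpmf off.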
Thanks,
Daniel
>
> Thanks,
> drew
>
>> That's why, enable all of these if Ssccfg extension is
>> enabled from the commandline.
>>
>> Signed-off-by: Atish Patra <atishp@rivosinc.com>
>> ---
>> target/riscv/cpu.c | 14 +++++++++++++-
>> 1 file changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
>> index 22ba43c7ff2a..abebfcc46dea 100644
>> --- a/target/riscv/cpu.c
>> +++ b/target/riscv/cpu.c
>> @@ -2665,8 +2665,20 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
>> NULL
>> };
>>
>> +static RISCVCPUPreferredExtsRule SSCCFG_PREFERRED = {
>> + .ext = CPU_CFG_OFFSET(ext_ssccfg),
>> + .preferred_multi_exts = {
>> + CPU_CFG_OFFSET(ext_smcsrind), CPU_CFG_OFFSET(ext_sscsrind),
>> + CPU_CFG_OFFSET(ext_ssaia), CPU_CFG_OFFSET(ext_smaia),
>> + CPU_CFG_OFFSET(ext_smstateen), CPU_CFG_OFFSET(ext_sscofpmf),
>> + CPU_CFG_OFFSET(ext_smcntrpmf), CPU_CFG_OFFSET(ext_smcdeleg),
>> +
>> + RISCV_PREFRRED_EXTS_RULE_END
>> + },
>> +};
>> +
>> RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
>> - NULL
>> + &SSCCFG_PREFERRED, NULL
>> };
>>
>> static Property riscv_cpu_properties[] = {
>>
>> --
>> 2.34.1
>>
>>
^ permalink raw reply [flat|nested] 41+ messages in thread* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-06 16:05 ` Daniel Henrique Barboza
@ 2024-08-07 2:01 ` Alistair Francis
2024-08-07 7:43 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-07 2:01 UTC (permalink / raw)
To: Daniel Henrique Barboza
Cc: Andrew Jones, Atish Patra, qemu-riscv, qemu-devel, palmer,
liwei1518, zhiwei_liu, bin.meng, alistair.francis
On Wed, Aug 7, 2024 at 2:06 AM Daniel Henrique Barboza
<dbarboza@ventanamicro.com> wrote:
>
>
>
> On 8/6/24 5:46 AM, Andrew Jones wrote:
> > On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> >> Counter delegation/configuration extension requires the following
> >> extensions to be enabled.
> >>
> >> 1. Smcdeleg - To enable counter delegation from M to S
> >> 2. S[m|s]csrind - To enable indirect access CSRs
> >> 3. Smstateen - Indirect CSR extensions depend on it.
> >> 4. Sscofpmf - To enable counter overflow feature
> >> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> >> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
> >>
> >> While first 3 are mandatory to enable the counter delegation,
> >> next 3 set of extension are preferred to enable all the PMU related
> >> features.
> >
> > Just my 2 cents, but I think for the first three we can apply the concept
> > of extension bundles, which we need for other extensions as well. In those
> > cases we just auto enable all the dependencies. For the three preferred
> > extensions I think we can just leave them off for 'base', but we should
> > enable them by default for 'max' along with Ssccfg.
Agreed
>
> I like this idea. I would throw in all these 6 extensions in a 'pmu_advanced_ops'
> (or any other better fitting name for the bundle) flag and then 'pmu_advanced_ops=true'
> would enable all of those. 'pmu_advanced_ops=true,smcntrpmf=false' enables all but
> 'smcntrpmf' and so on.
>
> As long as we document what the flag is enabling I don't see any problems with it.
> This is how profiles are implemented after all.
I only worry that we end up with a huge collection of flags that users
need to decipher.
I guess with some good documentation this wouldn't be too confusing though.
Alistair
>
> With this bundle we can also use implied rule only if an extension really needs
> (i.e. it breaks without) a dependency being enabled, instead of overloading it
> with extensions that 'would be nice to have together' like it seems to be the
> case for the last 3 extensions in that list.
>
> I believe users would benefit more from a single flag to enable everything and
> be done with it.
>
>
> Thanks,
>
> Daniel
>
>
>
> >
> > Thanks,
> > drew
> >
> >> That's why, enable all of these if Ssccfg extension is
> >> enabled from the commandline.
> >>
> >> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> >> ---
> >> target/riscv/cpu.c | 14 +++++++++++++-
> >> 1 file changed, 13 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> >> index 22ba43c7ff2a..abebfcc46dea 100644
> >> --- a/target/riscv/cpu.c
> >> +++ b/target/riscv/cpu.c
> >> @@ -2665,8 +2665,20 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
> >> NULL
> >> };
> >>
> >> +static RISCVCPUPreferredExtsRule SSCCFG_PREFERRED = {
> >> + .ext = CPU_CFG_OFFSET(ext_ssccfg),
> >> + .preferred_multi_exts = {
> >> + CPU_CFG_OFFSET(ext_smcsrind), CPU_CFG_OFFSET(ext_sscsrind),
> >> + CPU_CFG_OFFSET(ext_ssaia), CPU_CFG_OFFSET(ext_smaia),
> >> + CPU_CFG_OFFSET(ext_smstateen), CPU_CFG_OFFSET(ext_sscofpmf),
> >> + CPU_CFG_OFFSET(ext_smcntrpmf), CPU_CFG_OFFSET(ext_smcdeleg),
> >> +
> >> + RISCV_PREFRRED_EXTS_RULE_END
> >> + },
> >> +};
> >> +
> >> RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
> >> - NULL
> >> + &SSCCFG_PREFERRED, NULL
> >> };
> >>
> >> static Property riscv_cpu_properties[] = {
> >>
> >> --
> >> 2.34.1
> >>
> >>
>
^ permalink raw reply [flat|nested] 41+ messages in thread* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-07 2:01 ` Alistair Francis
@ 2024-08-07 7:43 ` Atish Kumar Patra
2024-08-08 0:27 ` Alistair Francis
0 siblings, 1 reply; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-07 7:43 UTC (permalink / raw)
To: Alistair Francis
Cc: Daniel Henrique Barboza, Andrew Jones, qemu-riscv, qemu-devel,
palmer, liwei1518, zhiwei_liu, bin.meng, alistair.francis
On Tue, Aug 6, 2024 at 7:01 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Aug 7, 2024 at 2:06 AM Daniel Henrique Barboza
> <dbarboza@ventanamicro.com> wrote:
> >
> >
> >
> > On 8/6/24 5:46 AM, Andrew Jones wrote:
> > > On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> > >> Counter delegation/configuration extension requires the following
> > >> extensions to be enabled.
> > >>
> > >> 1. Smcdeleg - To enable counter delegation from M to S
> > >> 2. S[m|s]csrind - To enable indirect access CSRs
> > >> 3. Smstateen - Indirect CSR extensions depend on it.
> > >> 4. Sscofpmf - To enable counter overflow feature
> > >> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> > >> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
> > >>
> > >> While first 3 are mandatory to enable the counter delegation,
> > >> next 3 set of extension are preferred to enable all the PMU related
> > >> features.
> > >
> > > Just my 2 cents, but I think for the first three we can apply the concept
> > > of extension bundles, which we need for other extensions as well. In those
> > > cases we just auto enable all the dependencies. For the three preferred
> > > extensions I think we can just leave them off for 'base', but we should
> > > enable them by default for 'max' along with Ssccfg.
>
Max cpu will have everything enabled by default. The problem with max cpu
is that you may not want to run all the available ISA extensions while
testing perf.
> Agreed
>
> >
> > I like this idea. I would throw in all these 6 extensions in a 'pmu_advanced_ops'
> > (or any other better fitting name for the bundle) flag and then 'pmu_advanced_ops=true'
> > would enable all of those. 'pmu_advanced_ops=true,smcntrpmf=false' enables all but
> > 'smcntrpmf' and so on.
> >
I thought distinguishing preferred vs implied would be useful because it
would allow the user to clearly understand which extensions are mandated
by the ISA vs which are just good to have. The good-to-have extensions can
be disabled as described above, but not the mandatory ones.
> > As long as we document what the flag is enabling I don't see any problems with it.
> > This is how profiles are implemented after all.
>
> I only worry that we end up with a huge collection of flags that users
> need to decipher.
>
My initial idea was a separate flag as well, but I was not sure that was a
good fit, for the above reason: this additional custom PMU-related option
would be lost in that huge collection.
> I guess with some good documentation this wouldn't be too confusing though.
>
Sure. It won't be confusing, but most users may not even know about it
without digging. That's why I chose to key this off a standard extension
which covers basic PMU access directly in S-mode.
Future extensions such as CTR or sampling events would also benefit by
just adding Ssccfg to their preferred rule, which in turn will enable the
other preferred/mandatory extensions.
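For example, a future series could hook in with nothing more than the
following (purely illustrative; ext_smctr is not a cfg field that exists
today):
    static RISCVCPUPreferredExtsRule SMCTR_PREFERRED = {
        .ext = CPU_CFG_OFFSET(ext_smctr),   /* hypothetical future field */
        .preferred_multi_exts = {
            CPU_CFG_OFFSET(ext_ssccfg),
            RISCV_PREFRRED_EXTS_RULE_END
        },
    };
    /* ...plus adding &SMCTR_PREFERRED to riscv_multi_ext_preferred_rules[]. */
The intent is that Ssccfg's own preferred rule then pulls in the rest.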
> Alistair
>
> >
> > With this bundle we can also use implied rule only if an extension really needs
> > (i.e. it breaks without) a dependency being enabled, instead of overloading it
> > with extensions that 'would be nice to have together' like it seems to be the
> > case for the last 3 extensions in that list.
> >
> > I believe users would benefit more from a single flag to enable everything and
> > be done with it.
> >
> >
> > Thanks,
> >
> > Daniel
> >
> >
> >
> > >
> > > Thanks,
> > > drew
> > >
> > >> That's why, enable all of these if Ssccfg extension is
> > >> enabled from the commandline.
> > >>
> > >> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> > >> ---
> > >> target/riscv/cpu.c | 14 +++++++++++++-
> > >> 1 file changed, 13 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > >> index 22ba43c7ff2a..abebfcc46dea 100644
> > >> --- a/target/riscv/cpu.c
> > >> +++ b/target/riscv/cpu.c
> > >> @@ -2665,8 +2665,20 @@ RISCVCPUImpliedExtsRule *riscv_multi_ext_implied_rules[] = {
> > >> NULL
> > >> };
> > >>
> > >> +static RISCVCPUPreferredExtsRule SSCCFG_PREFERRED = {
> > >> + .ext = CPU_CFG_OFFSET(ext_ssccfg),
> > >> + .preferred_multi_exts = {
> > >> + CPU_CFG_OFFSET(ext_smcsrind), CPU_CFG_OFFSET(ext_sscsrind),
> > >> + CPU_CFG_OFFSET(ext_ssaia), CPU_CFG_OFFSET(ext_smaia),
> > >> + CPU_CFG_OFFSET(ext_smstateen), CPU_CFG_OFFSET(ext_sscofpmf),
> > >> + CPU_CFG_OFFSET(ext_smcntrpmf), CPU_CFG_OFFSET(ext_smcdeleg),
> > >> +
> > >> + RISCV_PREFRRED_EXTS_RULE_END
> > >> + },
> > >> +};
> > >> +
> > >> RISCVCPUPreferredExtsRule *riscv_multi_ext_preferred_rules[] = {
> > >> - NULL
> > >> + &SSCCFG_PREFERRED, NULL
> > >> };
> > >>
> > >> static Property riscv_cpu_properties[] = {
> > >>
> > >> --
> > >> 2.34.1
> > >>
> > >>
> >
^ permalink raw reply [flat|nested] 41+ messages in thread* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-07 7:43 ` Atish Kumar Patra
@ 2024-08-08 0:27 ` Alistair Francis
2024-08-08 8:12 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-08 0:27 UTC (permalink / raw)
To: Atish Kumar Patra
Cc: Daniel Henrique Barboza, Andrew Jones, qemu-riscv, qemu-devel,
palmer, liwei1518, zhiwei_liu, bin.meng, alistair.francis
On Wed, Aug 7, 2024 at 5:44 PM Atish Kumar Patra <atishp@rivosinc.com> wrote:
>
> On Tue, Aug 6, 2024 at 7:01 PM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Wed, Aug 7, 2024 at 2:06 AM Daniel Henrique Barboza
> > <dbarboza@ventanamicro.com> wrote:
> > >
> > >
> > >
> > > On 8/6/24 5:46 AM, Andrew Jones wrote:
> > > > On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> > > >> Counter delegation/configuration extension requires the following
> > > >> extensions to be enabled.
> > > >>
> > > >> 1. Smcdeleg - To enable counter delegation from M to S
> > > >> 2. S[m|s]csrind - To enable indirect access CSRs
> > > >> 3. Smstateen - Indirect CSR extensions depend on it.
> > > >> 4. Sscofpmf - To enable counter overflow feature
> > > >> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> > > >> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
> > > >>
> > > >> While first 3 are mandatory to enable the counter delegation,
> > > >> next 3 set of extension are preferred to enable all the PMU related
> > > >> features.
> > > >
> > > > Just my 2 cents, but I think for the first three we can apply the concept
> > > > of extension bundles, which we need for other extensions as well. In those
> > > > cases we just auto enable all the dependencies. For the three preferred
> > > > extensions I think we can just leave them off for 'base', but we should
> > > > enable them by default for 'max' along with Ssccfg.
> >
>
> Max cpu will have everything enabled by default. The problem with max
> cpu is that you
> may not want to run all the available ISA extensions while testing perf.
>
> > Agreed
> >
> > >
> > > I like this idea. I would throw in all these 6 extensions in a 'pmu_advanced_ops'
> > > (or any other better fitting name for the bundle) flag and then 'pmu_advanced_ops=true'
> > > would enable all of those. 'pmu_advanced_ops=true,smcntrpmf=false' enables all but
> > > 'smcntrpmf' and so on.
> > >
>
> I thought distinguishing preferred vs implied would be useful because
> it would allow the user
> to clearly understand which is mandated by ISA vs which would be good to have.
It's not really clear though what extensions are good to have. Other
people might think differently about the extensions. It also then
means we end up with complex combinations of extensions to manage.
>
> The good to have extensions can be disabled similar to above but not
> the mandatory ones.
>
> > > As long as we document what the flag is enabling I don't see any problems with it.
> > > This is how profiles are implemented after all.
> >
> > I only worry that we end up with a huge collection of flags that users
> > need to decipher.
> >
>
> My initial idea was a separate flag as well. But I was not sure if
> that was good for the
> above reason. This additional custom pmu related option would be lost
> in that huge collection.
I do feel a separate flag is better than trying to guess what extra
extensions the user wants enabled.
I don't love either though; isn't this what profiles are supposed to fix?!
>
> > I guess with some good documentation this wouldn't be too confusing though.
> >
>
> Sure. It won't be confusing but most users may not even know about it
> without digging.
At that point they can use the max CPU or just manually enable the
extensions though.
Alistair
> That's why I chose to use a standard extension which covers the basic
> PMU access directly in S-mode.
>
> The future extensions such as CTR or Sampling events would also
> benefit by just adding the Ssccfg in the preferred rule
> which in turn will enable other preferred/mandatory extensions.
>
> > Alistair
> >
> > >
> > > With this bundle we can also use implied rule only if an extension really needs
> > > (i.e. it breaks without) a dependency being enabled, instead of overloading it
> > > with extensions that 'would be nice to have together' like it seems to be the
> > > case for the last 3 extensions in that list.
> > >
> > > I believe users would benefit more from a single flag to enable everything and
> > > be done with it.
> > >
> > >
> > > Thanks,
> > >
> > > Daniel
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-08 0:27 ` Alistair Francis
@ 2024-08-08 8:12 ` Atish Kumar Patra
2024-08-08 8:24 ` Alistair Francis
0 siblings, 1 reply; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-08 8:12 UTC (permalink / raw)
To: Alistair Francis
Cc: Daniel Henrique Barboza, Andrew Jones, qemu-riscv, qemu-devel,
palmer, liwei1518, zhiwei_liu, bin.meng, alistair.francis
On Wed, Aug 7, 2024 at 5:27 PM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Wed, Aug 7, 2024 at 5:44 PM Atish Kumar Patra <atishp@rivosinc.com> wrote:
> >
> > On Tue, Aug 6, 2024 at 7:01 PM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Wed, Aug 7, 2024 at 2:06 AM Daniel Henrique Barboza
> > > <dbarboza@ventanamicro.com> wrote:
> > > >
> > > >
> > > >
> > > > On 8/6/24 5:46 AM, Andrew Jones wrote:
> > > > > On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> > > > >> Counter delegation/configuration extension requires the following
> > > > >> extensions to be enabled.
> > > > >>
> > > > >> 1. Smcdeleg - To enable counter delegation from M to S
> > > > >> 2. S[m|s]csrind - To enable indirect access CSRs
> > > > >> 3. Smstateen - Indirect CSR extensions depend on it.
> > > > >> 4. Sscofpmf - To enable counter overflow feature
> > > > >> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> > > > >> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
> > > > >>
> > > > >> While first 3 are mandatory to enable the counter delegation,
> > > > >> next 3 set of extension are preferred to enable all the PMU related
> > > > >> features.
> > > > >
> > > > > Just my 2 cents, but I think for the first three we can apply the concept
> > > > > of extension bundles, which we need for other extensions as well. In those
> > > > > cases we just auto enable all the dependencies. For the three preferred
> > > > > extensions I think we can just leave them off for 'base', but we should
> > > > > enable them by default for 'max' along with Ssccfg.
> > >
> >
> > Max cpu will have everything enabled by default. The problem with max
> > cpu is that you
> > may not want to run all the available ISA extensions while testing perf.
> >
> > > Agreed
> > >
> > > >
> > > > I like this idea. I would throw in all these 6 extensions in a 'pmu_advanced_ops'
> > > > (or any other better fitting name for the bundle) flag and then 'pmu_advanced_ops=true'
> > > > would enable all of those. 'pmu_advanced_ops=true,smcntrpmf=false' enables all but
> > > > 'smcntrpmf' and so on.
> > > >
> >
> > I thought distinguishing preferred vs implied would be useful because
> > it would allow the user
> > to clearly understand which is mandated by ISA vs which would be good to have.
>
> It's not really clear though what extensions are good to have. Other
> people might think differently about the extensions. It also then
> means we end up with complex combinations of extensions to manage.
>
> >
> > The good to have extensions can be disabled similar to above but not
> > the mandatory ones.
> >
> > > > As long as we document what the flag is enabling I don't see any problems with it.
> > > > This is how profiles are implemented after all.
> > >
> > > I only worry that we end up with a huge collection of flags that users
> > > need to decipher.
> > >
> >
> > My initial idea was a separate flag as well. But I was not sure if
> > that was good for the
> > above reason. This additional custom pmu related option would be lost
> > in that huge collection.
>
> I do feel a separate flag is better than trying to guess what extra
> extensions the user wants enabled.
>
Sure. A separate pmu flag that enables all available PMU-related
extensions, correct?
Do you prefer to have those enabled via a separate preferred rule, or to
just reuse the implied rule? I can drop the preferred rule patches in the
latter case.
> I don't love either though, isn't this what profiles is supposed to fix!
>
Yeah. But given the optionality in profiles, I am not sure it will fix the
ever-growing extension dependency graph problem ;)
> >
> > > I guess with some good documentation this wouldn't be too confusing though.
> > >
> >
> > Sure. It won't be confusing but most users may not even know about it
> > without digging.
>
> At that point they can use the max CPU or just manually enable the
> extensions though.
>
If everybody thinks the max CPU is going to be used more frequently in the
future, I am okay with that as well. The implied rule will then only
specify the mandatory extensions defined by the ISA.
It's up to the user to figure out the extension names and enable them
individually if the max CPU is not used.
FYI: there are at least 6 more PMU-related extensions that this series did
not cover. ~4 are being discussed in the RVI TG (precise event sampling,
events) and 2 are frozen (Smctr/Ssctr).
> Alistair
>
> > That's why I chose to use a standard extension which covers the basic
> > PMU access directly in S-mode.
> >
> > The future extensions such as CTR or Sampling events would also
> > benefit by just adding the Ssccfg in the preferred rule
> > which in turn will enable other preferred/mandatory extensions.
> >
> > > Alistair
> > >
> > > >
> > > > With this bundle we can also use implied rule only if an extension really needs
> > > > (i.e. it breaks without) a dependency being enabled, instead of overloading it
> > > > with extensions that 'would be nice to have together' like it seems to be the
> > > > case for the last 3 extensions in that list.
> > > >
> > > > I believe users would benefit more from a single flag to enable everything and
> > > > be done with it.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Daniel
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-08 8:12 ` Atish Kumar Patra
@ 2024-08-08 8:24 ` Alistair Francis
2024-08-09 21:40 ` Atish Kumar Patra
0 siblings, 1 reply; 41+ messages in thread
From: Alistair Francis @ 2024-08-08 8:24 UTC (permalink / raw)
To: Atish Kumar Patra
Cc: Daniel Henrique Barboza, Andrew Jones, qemu-riscv, qemu-devel,
palmer, liwei1518, zhiwei_liu, bin.meng, alistair.francis
On Thu, Aug 8, 2024 at 6:12 PM Atish Kumar Patra <atishp@rivosinc.com> wrote:
>
> On Wed, Aug 7, 2024 at 5:27 PM Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Wed, Aug 7, 2024 at 5:44 PM Atish Kumar Patra <atishp@rivosinc.com> wrote:
> > >
> > > On Tue, Aug 6, 2024 at 7:01 PM Alistair Francis <alistair23@gmail.com> wrote:
> > > >
> > > > On Wed, Aug 7, 2024 at 2:06 AM Daniel Henrique Barboza
> > > > <dbarboza@ventanamicro.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > On 8/6/24 5:46 AM, Andrew Jones wrote:
> > > > > > On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> > > > > >> Counter delegation/configuration extension requires the following
> > > > > >> extensions to be enabled.
> > > > > >>
> > > > > >> 1. Smcdeleg - To enable counter delegation from M to S
> > > > > >> 2. S[m|s]csrind - To enable indirect access CSRs
> > > > > >> 3. Smstateen - Indirect CSR extensions depend on it.
> > > > > >> 4. Sscofpmf - To enable counter overflow feature
> > > > > >> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> > > > > >> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
> > > > > >>
> > > > > >> While first 3 are mandatory to enable the counter delegation,
> > > > > >> next 3 set of extension are preferred to enable all the PMU related
> > > > > >> features.
> > > > > >
> > > > > > Just my 2 cents, but I think for the first three we can apply the concept
> > > > > > of extension bundles, which we need for other extensions as well. In those
> > > > > > cases we just auto enable all the dependencies. For the three preferred
> > > > > > extensions I think we can just leave them off for 'base', but we should
> > > > > > enable them by default for 'max' along with Ssccfg.
> > > >
> > >
> > > Max cpu will have everything enabled by default. The problem with max
> > > cpu is that you
> > > may not want to run all the available ISA extensions while testing perf.
> > >
> > > > Agreed
> > > >
> > > > >
> > > > > I like this idea. I would throw in all these 6 extensions in a 'pmu_advanced_ops'
> > > > > (or any other better fitting name for the bundle) flag and then 'pmu_advanced_ops=true'
> > > > > would enable all of those. 'pmu_advanced_ops=true,smcntrpmf=false' enables all but
> > > > > 'smcntrpmf' and so on.
> > > > >
> > >
> > > I thought distinguishing preferred vs implied would be useful because
> > > it would allow the user
> > > to clearly understand which is mandated by ISA vs which would be good to have.
> >
> > It's not really clear though what extensions are good to have. Other
> > people might think differently about the extensions. It also then
> > means we end up with complex combinations of extensions to manage.
> >
> > >
> > > The good to have extensions can be disabled similar to above but not
> > > the mandatory ones.
> > >
> > > > > As long as we document what the flag is enabling I don't see any problems with it.
> > > > > This is how profiles are implemented after all.
> > > >
> > > > I only worry that we end up with a huge collection of flags that users
> > > > need to decipher.
> > > >
> > >
> > > My initial idea was a separate flag as well. But I was not sure if
> > > that was good for the
> > > above reason. This additional custom pmu related option would be lost
> > > in that huge collection.
> >
> > I do feel a separate flag is better than trying to guess what extra
> > extensions the user wants enabled.
> >
>
> Sure. A separate pmu flag that enables all available pmu related
> extensions - Correct ?
That seems like the best option. Although just using the max CPU is
even better :)
> Do you prefer to have those enabled via a separate preferred rule or
> just reuse the implied
> rule ? I can drop the preferred rule patches for the later case.
As this is now a custom flag, a separate rule is probably the way to go,
something similar to the existing `RISCVCPUImpliedExtsRule`. Keep an
implied rule for what is required by the spec, but maybe add a "helper"
rule for the special flag?
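Something roughly like the below is what I am picturing. The struct, the
flag field and the terminator are only a sketch mirroring
RISCVCPUImpliedExtsRule, not code that exists today:
    /* Hypothetical "helper" rule: keyed on a user-facing bundle flag rather
     * than on an ISA extension, otherwise shaped like the implied rules. */
    typedef struct {
        uint32_t flag_offset;            /* e.g. a pmu bundle bool in cfg */
        uint32_t bundled_multi_exts[16];
    } RISCVCPUBundleExtsRule;
    static RISCVCPUBundleExtsRule PMU_BUNDLE = {
        .flag_offset = CPU_CFG_OFFSET(pmu_advanced_ops),  /* hypothetical flag */
        .bundled_multi_exts = {
            /* Required by the spec (also covered by the implied rule). */
            CPU_CFG_OFFSET(ext_smcdeleg), CPU_CFG_OFFSET(ext_smcsrind),
            CPU_CFG_OFFSET(ext_sscsrind), CPU_CFG_OFFSET(ext_smstateen),
            /* Extras the special flag pulls in on top. */
            CPU_CFG_OFFSET(ext_sscofpmf), CPU_CFG_OFFSET(ext_smcntrpmf),
            RISCV_BUNDLE_EXTS_RULE_END   /* hypothetical terminator */
        },
    };
The realize code would then walk these the same way it walks the implied
rules, but only when the corresponding flag was set by the user.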
>
>
> > I don't love either though, isn't this what profiles is supposed to fix!
> >
>
> Yeah. But given the optionality in profiles, I am sure if it will fix
> the ever growing
> extension dependency graph problem ;)
>
> > >
> > > > I guess with some good documentation this wouldn't be too confusing though.
> > > >
> > >
> > > Sure. It won't be confusing but most users may not even know about it
> > > without digging.
> >
> > At that point they can use the max CPU or just manually enable the
> > extensions though.
> >
>
> If everybody thinks max CPU is going to be used more frequently in the
> future, I am okay with
> that as well. Implied rule will only specify mandatory extensions
> defined by ISA.
>
> It's up to the user to figure out the extensions names and enable them
> individually if max CPU
> is not used.
A user can just specify max. It's just as much work as specifying this new flag.
Is the issue just the defaults? We can think about max CPU being the default.
> FYI: There are at least 6 more PMU related extensions that this series
> did not specify.
> ~4 are being discussed in the RVI TG(precise event sampling, events)
> and 2 are frozen (Smctr/Ssctr)
Urgh!
Alistair
^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH v2 13/13] target/riscv: Enable PMU related extensions to preferred rule
2024-08-08 8:24 ` Alistair Francis
@ 2024-08-09 21:40 ` Atish Kumar Patra
0 siblings, 0 replies; 41+ messages in thread
From: Atish Kumar Patra @ 2024-08-09 21:40 UTC (permalink / raw)
To: Alistair Francis
Cc: Daniel Henrique Barboza, Andrew Jones, qemu-riscv, qemu-devel,
palmer, liwei1518, zhiwei_liu, bin.meng, alistair.francis
On Thu, Aug 8, 2024 at 1:24 AM Alistair Francis <alistair23@gmail.com> wrote:
>
> On Thu, Aug 8, 2024 at 6:12 PM Atish Kumar Patra <atishp@rivosinc.com> wrote:
> >
> > On Wed, Aug 7, 2024 at 5:27 PM Alistair Francis <alistair23@gmail.com> wrote:
> > >
> > > On Wed, Aug 7, 2024 at 5:44 PM Atish Kumar Patra <atishp@rivosinc.com> wrote:
> > > >
> > > > On Tue, Aug 6, 2024 at 7:01 PM Alistair Francis <alistair23@gmail.com> wrote:
> > > > >
> > > > > On Wed, Aug 7, 2024 at 2:06 AM Daniel Henrique Barboza
> > > > > <dbarboza@ventanamicro.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > On 8/6/24 5:46 AM, Andrew Jones wrote:
> > > > > > > On Tue, Jul 23, 2024 at 04:30:10PM GMT, Atish Patra wrote:
> > > > > > >> Counter delegation/configuration extension requires the following
> > > > > > >> extensions to be enabled.
> > > > > > >>
> > > > > > >> 1. Smcdeleg - To enable counter delegation from M to S
> > > > > > >> 2. S[m|s]csrind - To enable indirect access CSRs
> > > > > > >> 3. Smstateen - Indirect CSR extensions depend on it.
> > > > > > >> 4. Sscofpmf - To enable counter overflow feature
> > > > > > >> 5. S[m|s]aia - To enable counter overflow feature in virtualization
> > > > > > >> 6. Smcntrpmf - To enable privilege mode filtering for cycle/instret
> > > > > > >>
> > > > > > >> While first 3 are mandatory to enable the counter delegation,
> > > > > > >> next 3 set of extension are preferred to enable all the PMU related
> > > > > > >> features.
> > > > > > >
> > > > > > > Just my 2 cents, but I think for the first three we can apply the concept
> > > > > > > of extension bundles, which we need for other extensions as well. In those
> > > > > > > cases we just auto enable all the dependencies. For the three preferred
> > > > > > > extensions I think we can just leave them off for 'base', but we should
> > > > > > > enable them by default for 'max' along with Ssccfg.
> > > > >
> > > >
> > > > Max cpu will have everything enabled by default. The problem with max
> > > > cpu is that you
> > > > may not want to run all the available ISA extensions while testing perf.
> > > >
> > > > > Agreed
> > > > >
> > > > > >
> > > > > > I like this idea. I would throw in all these 6 extensions in a 'pmu_advanced_ops'
> > > > > > (or any other better fitting name for the bundle) flag and then 'pmu_advanced_ops=true'
> > > > > > would enable all of those. 'pmu_advanced_ops=true,smcntrpmf=false' enables all but
> > > > > > 'smcntrpmf' and so on.
> > > > > >
> > > >
> > > > I thought distinguishing preferred vs implied would be useful because
> > > > it would allow the user
> > > > to clearly understand which is mandated by ISA vs which would be good to have.
> > >
> > > It's not really clear though what extensions are good to have. Other
> > > people might think differently about the extensions. It also then
> > > means we end up with complex combinations of extensions to manage.
> > >
> > > >
> > > > The good to have extensions can be disabled similar to above but not
> > > > the mandatory ones.
> > > >
> > > > > > As long as we document what the flag is enabling I don't see any problems with it.
> > > > > > This is how profiles are implemented after all.
> > > > >
> > > > > I only worry that we end up with a huge collection of flags that users
> > > > > need to decipher.
> > > > >
> > > >
> > > > My initial idea was a separate flag as well. But I was not sure if
> > > > that was good for the
> > > > above reason. This additional custom pmu related option would be lost
> > > > in that huge collection.
> > >
> > > I do feel a separate flag is better than trying to guess what extra
> > > extensions the user wants enabled.
> > >
> >
> > Sure. A separate pmu flag that enables all available pmu related
> > extensions - Correct ?
>
> That seems like the best option. Although just using the max CPU is
> even better :)
>
> > Do you prefer to have those enabled via a separate preferred rule or
> > just reuse the implied
> > rule ? I can drop the preferred rule patches for the later case.
>
> As this is now a custom flag a separate rule is probably the way to
> go. Something similar to the existing `RISCVCPUImpliedExtsRule` is
> probably the way to go. Keep an implied rule for what is required by
> the spec, but maybe a "helper" rule for the special flag?
>
> >
> >
> > > I don't love either though, isn't this what profiles is supposed to fix!
> > >
> >
> > Yeah. But given the optionality in profiles, I am sure if it will fix
> > the ever growing
> > extension dependency graph problem ;)
> >
> > > >
> > > > > I guess with some good documentation this wouldn't be too confusing though.
> > > > >
> > > >
> > > > Sure. It won't be confusing but most users may not even know about it
> > > > without digging.
> > >
> > > At that point they can use the max CPU or just manually enable the
> > > extensions though.
> > >
> >
> > If everybody thinks max CPU is going to be used more frequently in the
> > future, I am okay with
> > that as well. Implied rule will only specify mandatory extensions
> > defined by ISA.
> >
> > It's up to the user to figure out the extensions names and enable them
> > individually if max CPU
> > is not used.
>
> A user can just specify max. It's just as much work as specifying this new flag.
>
> Is the issue just the defaults? We can think about max CPU being the default.
>
Yes, that would be great. I will drop the preferred rule for now and use
the implied rule for the extensions mandated by the ISA in v3.
> > FYI: There are at least 6 more PMU related extensions that this series
> > did not specify.
> > ~4 are being discussed in the RVI TG(precise event sampling, events)
> > and 2 are frozen (Smctr/Ssctr)
>
> Urgh!
>
> Alistair
^ permalink raw reply [flat|nested] 41+ messages in thread