From: Atish Kumar Patra <atishp@rivosinc.com>
Date: Wed, 27 Jul 2022 14:32:38 -0700
Subject: Re: [PATCH v11 1/6] target/riscv: Add sscofpmf extension support
To: Weiwei Li
Cc: "qemu-devel@nongnu.org Developers", Heiko Stuebner, Atish Patra, Alistair Francis, Bin Meng, Palmer Dabbelt, "open list:RISC-V"
In-Reply-To: <2080cb1c-873b-c43f-e2c3-1d65412f0a32@iscas.ac.cn>
References: <20220727064913.1041427-1-atishp@rivosinc.com> <20220727064913.1041427-2-atishp@rivosinc.com> <2080cb1c-873b-c43f-e2c3-1d65412f0a32@iscas.ac.cn>
On Wed, Jul 27, 2022 at 1:11 AM Weiwei Li wrote:
>
> On 2022/7/27 2:49 PM, Atish Patra wrote:
> > The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> > and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> > extension allows perf to handle overflow interrupts and filtering
> > support. This patch provides a framework for programmable
> > counters to leverage the extension. As the extension doesn't have any
> > provision for an overflow bit for the fixed counters, the fixed events
> > can also be monitored using programmable counters. The underlying
> > counters for cycle and instruction counting are always running. Thus,
> > a separate timer device is programmed to handle the overflow.
> >
> > Tested-by: Heiko Stuebner
> > Signed-off-by: Atish Patra
> > Signed-off-by: Atish Patra
> > ---
> >  target/riscv/cpu.c      |  11 ++
> >  target/riscv/cpu.h      |  25 +++
> >  target/riscv/cpu_bits.h |  55 +++++++
> >  target/riscv/csr.c      | 166 ++++++++++++++++++-
> >  target/riscv/machine.c  |   1 +
> >  target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
> >  target/riscv/pmu.h      |   7 +
> >  7 files changed, 611 insertions(+), 11 deletions(-)
> >
> > diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> > index 1bb3973806d2..c1d62b81a725 100644
> > --- a/target/riscv/cpu.c
> > +++ b/target/riscv/cpu.c
> > @@ -22,6 +22,7 @@
> >  #include "qemu/ctype.h"
> >  #include "qemu/log.h"
> >  #include "cpu.h"
> > +#include "pmu.h"
> >  #include "internals.h"
> >  #include "exec/exec-all.h"
> >  #include "qapi/error.h"
> > @@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
> >          set_misa(env, env->misa_mxl, ext);
> >      }
> >
> > +#ifndef CONFIG_USER_ONLY
> > +    if (cpu->cfg.pmu_num) {
> > +        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> > +            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> > +                                          riscv_pmu_timer_cb, cpu);
> > +        }
> > +    }
> > +#endif
> > +
> >      riscv_cpu_register_gdb_regs_for_features(cs);
> >
> >      qemu_init_vcpu(cs);
> > @@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
> >      DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
> >      DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
> >      DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> > +    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
> >      DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
> >      DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
> >      DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> > diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> > index 5c7acc055ac9..2222db193c3d 100644
> > --- a/target/riscv/cpu.h
> > +++ b/target/riscv/cpu.h
> > @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
> >      /* Snapshot value of a counter in RV32 */
> >      target_ulong mhpmcounterh_prev;
> >      bool started;
> > +    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> > +    target_ulong irq_overflow_left;
> >  } PMUCTRState;
> >
> >  struct CPUArchState {
> > @@ -297,6 +299,9 @@ struct CPUArchState {
> >      /* PMU event selector configured values. First three are unused */
> >      target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
> >
> > +    /* PMU event selector configured values for RV32 */
> > +    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> > +
> >      target_ulong sscratch;
> >      target_ulong mscratch;
> >
> > @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
> >      bool ext_zve32f;
> >      bool ext_zve64f;
> >      bool ext_zmmul;
> > +    bool ext_sscofpmf;
> >      bool rvv_ta_all_1s;
> >
> >      uint32_t mvendorid;
> > @@ -479,6 +485,12 @@ struct ArchCPU {
> >
> >      /* Configuration Settings */
> >      RISCVCPUConfig cfg;
> > +
> > +    QEMUTimer *pmu_timer;
> > +    /* A bitmask of available programmable counters */
> > +    uint32_t pmu_avail_ctrs;
> > +    /* Mapping of events to counters */
> > +    GHashTable *pmu_event_ctr_map;
> >  };
> >
> >  static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> > @@ -738,6 +750,19 @@ enum {
> >      CSR_TABLE_SIZE = 0x1000
> >  };
> >
> > +/**
> > + * The event ids are encoded based on the encoding specified in the
> > + * SBI specification v0.3
> > + */
> > +
> > +enum riscv_pmu_event_idx {
> > +    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> > +    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> > +    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> > +    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> > +    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> > +};
> > +
> >  /* CSR function table */
> >  extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
> >
> > diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> > index 6be5a9e9f046..b63c586be563 100644
> > --- a/target/riscv/cpu_bits.h
> > +++ b/target/riscv/cpu_bits.h
> > @@ -382,6 +382,37 @@
> >  #define CSR_MHPMEVENT29     0x33d
> >  #define CSR_MHPMEVENT30     0x33e
> >  #define CSR_MHPMEVENT31     0x33f
> > +
> > +#define CSR_MHPMEVENT3H     0x723
> > +#define CSR_MHPMEVENT4H     0x724
> > +#define CSR_MHPMEVENT5H     0x725
> > +#define CSR_MHPMEVENT6H     0x726
> > +#define CSR_MHPMEVENT7H     0x727
> > +#define CSR_MHPMEVENT8H     0x728
> > +#define CSR_MHPMEVENT9H     0x729
> > +#define CSR_MHPMEVENT10H    0x72a
> > +#define CSR_MHPMEVENT11H    0x72b
> > +#define CSR_MHPMEVENT12H    0x72c
> > +#define CSR_MHPMEVENT13H    0x72d
> > +#define CSR_MHPMEVENT14H    0x72e
> > +#define CSR_MHPMEVENT15H    0x72f
> > +#define CSR_MHPMEVENT16H    0x730
> > +#define CSR_MHPMEVENT17H    0x731
> > +#define CSR_MHPMEVENT18H    0x732
> > +#define CSR_MHPMEVENT19H    0x733
> > +#define CSR_MHPMEVENT20H    0x734
> > +#define CSR_MHPMEVENT21H    0x735
> > +#define CSR_MHPMEVENT22H    0x736
> > +#define CSR_MHPMEVENT23H    0x737
> > +#define CSR_MHPMEVENT24H    0x738
> > +#define CSR_MHPMEVENT25H    0x739
> > +#define CSR_MHPMEVENT26H    0x73a
> > +#define CSR_MHPMEVENT27H    0x73b
> > +#define CSR_MHPMEVENT28H    0x73c
> > +#define CSR_MHPMEVENT29H    0x73d
> > +#define CSR_MHPMEVENT30H    0x73e
> > +#define CSR_MHPMEVENT31H    0x73f
> > +
> >  #define CSR_MHPMCOUNTER3H   0xb83
> >  #define CSR_MHPMCOUNTER4H   0xb84
> >  #define CSR_MHPMCOUNTER5H   0xb85
> > @@ -443,6 +474,7 @@
> >  #define CSR_VSMTE           0x2c0
> >  #define CSR_VSPMMASK        0x2c1
> >  #define CSR_VSPMBASE        0x2c2
> > +#define CSR_SCOUNTOVF       0xda0
> >
> >  /* Crypto Extension */
> >  #define CSR_SEED            0x015
> > @@ -620,6 +652,7 @@ typedef enum RISCVException {
> >  #define IRQ_VS_EXT          10
> >  #define IRQ_M_EXT           11
> >  #define IRQ_S_GEXT          12
> > +#define IRQ_PMU_OVF         13
> >  #define IRQ_LOCAL_MAX       16
> >  #define IRQ_LOCAL_GUEST_MAX (TARGET_LONG_BITS - 1)
> >
> > @@ -637,11 +670,13 @@ typedef enum RISCVException {
> >  #define MIP_VSEIP           (1 << IRQ_VS_EXT)
> >  #define MIP_MEIP            (1 << IRQ_M_EXT)
> >  #define MIP_SGEIP           (1 << IRQ_S_GEXT)
> > +#define MIP_LCOFIP          (1 << IRQ_PMU_OVF)
> >
> >  /* sip masks */
> >  #define SIP_SSIP            MIP_SSIP
> > #define SIP_STIP MIP_STIP > > #define SIP_SEIP MIP_SEIP > > +#define SIP_LCOFIP MIP_LCOFIP > > > > /* MIE masks */ > > #define MIE_SEIE (1 << IRQ_S_EXT) > > @@ -795,4 +830,24 @@ typedef enum RISCVException { > > #define SEED_OPST_WAIT (0b01 << 30) > > #define SEED_OPST_ES16 (0b10 << 30) > > #define SEED_OPST_DEAD (0b11 << 30) > > +/* PMU related bits */ > > +#define MIE_LCOFIE (1 << IRQ_PMU_OVF) > > + > > +#define MHPMEVENT_BIT_OF BIT_ULL(63) > > +#define MHPMEVENTH_BIT_OF BIT(31) > > +#define MHPMEVENT_BIT_MINH BIT_ULL(62) > > +#define MHPMEVENTH_BIT_MINH BIT(30) > > +#define MHPMEVENT_BIT_SINH BIT_ULL(61) > > +#define MHPMEVENTH_BIT_SINH BIT(29) > > +#define MHPMEVENT_BIT_UINH BIT_ULL(60) > > +#define MHPMEVENTH_BIT_UINH BIT(28) > > +#define MHPMEVENT_BIT_VSINH BIT_ULL(59) > > +#define MHPMEVENTH_BIT_VSINH BIT(27) > > +#define MHPMEVENT_BIT_VUINH BIT_ULL(58) > > +#define MHPMEVENTH_BIT_VUINH BIT(26) > > + > > +#define MHPMEVENT_SSCOF_MASK _ULL(0xFFFF000000000000) > > +#define MHPMEVENT_IDX_MASK 0xFFFFF > > +#define MHPMEVENT_SSCOF_RESVD 16 > > + > > #endif > > diff --git a/target/riscv/csr.c b/target/riscv/csr.c > > index 235f2a011e70..1233bfa0a726 100644 > > --- a/target/riscv/csr.c > > +++ b/target/riscv/csr.c > > @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int > csrno) > > CPUState *cs =3D env_cpu(env); > > RISCVCPU *cpu =3D RISCV_CPU(cs); > > int ctr_index; > > - int base_csrno =3D CSR_HPMCOUNTER3; > > + int base_csrno =3D CSR_CYCLE; > > bool rv32 =3D riscv_cpu_mxl(env) =3D=3D MXL_RV32 ? 
true : false; > > > > if (rv32 && csrno >=3D CSR_CYCLEH) { > > @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int > csrno) > > } > > ctr_index =3D csrno - base_csrno; > > > > - if (!cpu->cfg.pmu_num || ctr_index >=3D (cpu->cfg.pmu_num)) { > > + if ((csrno >=3D CSR_CYCLE && csrno <=3D CSR_INSTRET) || > > + (csrno >=3D CSR_CYCLEH && csrno <=3D CSR_INSTRETH)) { > > + goto skip_ext_pmu_check; > > + } > > + > > + if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))= ) > { > > /* No counter is enabled in PMU or the counter is out of rang= e > */ > > return RISCV_EXCP_ILLEGAL_INST; > > } > > > Maybe it's better to remove !cpu->cfg.pmu_num here, not in later commit. > > +skip_ext_pmu_check: > > + > > if (env->priv =3D=3D PRV_S) { > > switch (csrno) { > > case CSR_CYCLE: > > @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int > csrno) > > } > > break; > > case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31: > > - ctr_index =3D csrno - CSR_CYCLE; > > if (!get_field(env->mcounteren, 1 << ctr_index)) { > > return RISCV_EXCP_ILLEGAL_INST; > > } > > @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int > csrno) > > } > > break; > > case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H: > > - ctr_index =3D csrno - CSR_CYCLEH; > > if (!get_field(env->mcounteren, 1 << ctr_index)) { > > return RISCV_EXCP_ILLEGAL_INST; > > } > > @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int > csrno) > > } > > break; > > case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31: > > - ctr_index =3D csrno - CSR_CYCLE; > > if (!get_field(env->hcounteren, 1 << ctr_index) && > > get_field(env->mcounteren, 1 << ctr_index)) { > > return RISCV_EXCP_VIRT_INSTRUCTION_FAULT; > > @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int > csrno) > > } > > break; > > case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H: > > - ctr_index =3D csrno - CSR_CYCLEH; > > if (!get_field(env->hcounteren, 1 << ctr_index) && > > get_field(env->mcounteren, 1 << ctr_index)) { > 
> return RISCV_EXCP_VIRT_INSTRUCTION_FAULT; > > @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, > int csrno) > > return mctr(env, csrno); > > } > > > > +static RISCVException sscofpmf(CPURISCVState *env, int csrno) > > +{ > > + CPUState *cs =3D env_cpu(env); > > + RISCVCPU *cpu =3D RISCV_CPU(cs); > > + > > + if (!cpu->cfg.ext_sscofpmf) { > > + return RISCV_EXCP_ILLEGAL_INST; > > + } > > + > > + return RISCV_EXCP_NONE; > > +} > > + > > static RISCVException any(CPURISCVState *env, int csrno) > > { > > return RISCV_EXCP_NONE; > > @@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int > csrno, target_ulong *val) > > static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulon= g > val) > > { > > int evt_index =3D csrno - CSR_MCOUNTINHIBIT; > > + uint64_t mhpmevt_val =3D val; > > > > env->mhpmevent_val[evt_index] =3D val; > > > > + if (riscv_cpu_mxl(env) =3D=3D MXL_RV32) { > > + mhpmevt_val =3D mhpmevt_val | > > + ((uint64_t)env->mhpmeventh_val[evt_index] << 32)= ; > > + } > > + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index); > > + > > + return RISCV_EXCP_NONE; > > +} > > + > > +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong > *val) > > +{ > > + int evt_index =3D csrno - CSR_MHPMEVENT3H + 3; > > + > > + *val =3D env->mhpmeventh_val[evt_index]; > > + > > + return RISCV_EXCP_NONE; > > +} > > + > > +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulon= g > val) > > +{ > > + int evt_index =3D csrno - CSR_MHPMEVENT3H + 3; > > + uint64_t mhpmevth_val =3D val; > > + uint64_t mhpmevt_val =3D env->mhpmevent_val[evt_index]; > > + > > + mhpmevt_val =3D mhpmevt_val | (mhpmevth_val << 32); > > + env->mhpmeventh_val[evt_index] =3D val; > > + > > + riscv_pmu_update_event_map(env, mhpmevt_val, evt_index); > > + > > return RISCV_EXCP_NONE; > > } > > > > @@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env, > int csrno, target_ulong val) > > { > > int ctr_idx =3D csrno - 
CSR_MCYCLE; > > PMUCTRState *counter =3D &env->pmu_ctrs[ctr_idx]; > > + uint64_t mhpmctr_val =3D val; > > > > counter->mhpmcounter_val =3D val; > > if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) || > > riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) { > > counter->mhpmcounter_prev =3D get_ticks(false); > > - } else { > > + if (ctr_idx > 2) { > > + if (riscv_cpu_mxl(env) =3D=3D MXL_RV32) { > > + mhpmctr_val =3D mhpmctr_val | > > + ((uint64_t)counter->mhpmcounterh_val << > 32); > > + } > > + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx); > > + } > > + } else { > > /* Other counters can keep incrementing from the given value = */ > > counter->mhpmcounter_prev =3D val; > > } > > @@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env, > int csrno, target_ulong val) > > { > > int ctr_idx =3D csrno - CSR_MCYCLEH; > > PMUCTRState *counter =3D &env->pmu_ctrs[ctr_idx]; > > + uint64_t mhpmctr_val =3D counter->mhpmcounter_val; > > + uint64_t mhpmctrh_val =3D val; > > > > counter->mhpmcounterh_val =3D val; > > + mhpmctr_val =3D mhpmctr_val | (mhpmctrh_val << 32); > > if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) || > > riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) { > > counter->mhpmcounterh_prev =3D get_ticks(true); > > + if (ctr_idx > 2) { > > + riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx); > > + } > > } else { > > counter->mhpmcounterh_prev =3D val; > > } > > @@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, in= t > csrno, target_ulong *val) > > return riscv_pmu_read_ctr(env, val, true, ctr_index); > > } > > > > +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong > *val) > > +{ > > + int mhpmevt_start =3D CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT; > > + int i; > > + *val =3D 0; > > + target_ulong *mhpm_evt_val; > > + uint64_t of_bit_mask; > > + > > + if (riscv_cpu_mxl(env) =3D=3D MXL_RV32) { > > + mhpm_evt_val =3D env->mhpmeventh_val; > > + of_bit_mask =3D MHPMEVENTH_BIT_OF; > > + } else { > > + mhpm_evt_val =3D 
env->mhpmevent_val; > > + of_bit_mask =3D MHPMEVENT_BIT_OF; > > + } > > + > > + for (i =3D mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) { > > + if ((get_field(env->mcounteren, BIT(i))) && > > + (mhpm_evt_val[i] & of_bit_mask)) { > > + *val |=3D BIT(i); > > + } > > + } > > + > > + return RISCV_EXCP_NONE; > > +} > > + > > static RISCVException read_time(CPURISCVState *env, int csrno, > > target_ulong *val) > > { > > @@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env= , > int csrno, > > /* Machine constants */ > > > > #define M_MODE_INTERRUPTS ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP= )) > > -#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP)= ) > > +#define S_MODE_INTERRUPTS ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP = | > \ > > + MIP_LCOFIP)) > > It's better to align with MIP_SSIP here. > > > #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | > MIP_VSEIP)) > > #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS= )) > > > > @@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps =3D > DELEGABLE_EXCPS & > > static const target_ulong sstatus_v1_10_mask =3D SSTATUS_SIE | > SSTATUS_SPIE | > > SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_X= S > | > > SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS; > > -static const target_ulong sip_writable_mask =3D SIP_SSIP | MIP_USIP | > MIP_UEIP; > > +static const target_ulong sip_writable_mask =3D SIP_SSIP | MIP_USIP | > MIP_UEIP | > > + SIP_LCOFIP; > > static const target_ulong hip_writable_mask =3D MIP_VSSIP; > > static const target_ulong hvip_writable_mask =3D MIP_VSSIP | MIP_VSTI= P | > MIP_VSEIP; > > static const target_ulong vsip_writable_mask =3D MIP_VSSIP; > > @@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] =3D= { > > [CSR_MHPMEVENT31] =3D { "mhpmevent31", any, read_mhpmeve= nt, > > write_mhpmeven= t > }, > > > > + [CSR_MHPMEVENT3H] =3D { "mhpmevent3h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > The 
new lines have been updated to align with the last line in my > previous patchset(accepted). > > you can see it if rebase to riscv-to-apply.next. So it's better to make > write_mhpmeventh align with > > ' " '. The same to following new lines. > > Got it. Rebased it. But I aligned them '{' with CSR_HPMCOUNTER3H onwards. We probably should align CSR_MHPMEVENT3..CSR_MHPMEVENT31 accordingly as well. > Regards, > > Weiwei Li > > > + [CSR_MHPMEVENT4H] =3D { "mhpmevent4h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT5H] =3D { "mhpmevent5h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT6H] =3D { "mhpmevent6h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT7H] =3D { "mhpmevent7h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT8H] =3D { "mhpmevent8h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT9H] =3D { "mhpmevent9h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT10H] =3D { "mhpmevent10h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT11H] =3D { "mhpmevent11h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT12H] =3D { "mhpmevent12h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT13H] =3D { "mhpmevent13h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT14H] =3D { "mhpmevent14h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT15H] =3D { "mhpmevent15h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT16H] =3D { "mhpmevent16h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT17H] =3D { "mhpmevent17h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT18H] =3D { "mhpmevent18h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT19H] =3D { 
"mhpmevent19h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT20H] =3D { "mhpmevent20h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT21H] =3D { "mhpmevent21h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT22H] =3D { "mhpmevent22h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT23H] =3D { "mhpmevent23h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT24H] =3D { "mhpmevent24h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT25H] =3D { "mhpmevent25h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT26H] =3D { "mhpmevent26h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT27H] =3D { "mhpmevent27h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT28H] =3D { "mhpmevent28h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT29H] =3D { "mhpmevent29h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT30H] =3D { "mhpmevent30h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + [CSR_MHPMEVENT31H] =3D { "mhpmevent31h", sscofpmf, > read_mhpmeventh, > > + > write_mhpmeventh}, > > + > > [CSR_HPMCOUNTER3H] =3D { "hpmcounter3h", ctr32, > read_hpmcounterh }, > > [CSR_HPMCOUNTER4H] =3D { "hpmcounter4h", ctr32, > read_hpmcounterh }, > > [CSR_HPMCOUNTER5H] =3D { "hpmcounter5h", ctr32, > read_hpmcounterh }, > > @@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] =3D = { > > > write_mhpmcounterh }, > > [CSR_MHPMCOUNTER31H] =3D { "mhpmcounter31h", mctr32, > read_hpmcounterh, > > > write_mhpmcounterh }, > > + [CSR_SCOUNTOVF] =3D { "scountovf", sscofpmf, read_scountovf = }, > > + > > #endif /* !CONFIG_USER_ONLY */ > > }; > > diff --git a/target/riscv/machine.c b/target/riscv/machine.c > > index dc182ca81119..33ef9b8e9908 100644 > > --- 
a/target/riscv/machine.c > > +++ b/target/riscv/machine.c > > @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu =3D { > > VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, > RV_MAX_MHPMCOUNTERS, 0, > > vmstate_pmu_ctr_state, PMUCTRState), > > VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, > RV_MAX_MHPMEVENTS), > > + VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, > RV_MAX_MHPMEVENTS), > > VMSTATE_UINTTL(env.sscratch, RISCVCPU), > > VMSTATE_UINTTL(env.mscratch, RISCVCPU), > > VMSTATE_UINT64(env.mfromhost, RISCVCPU), > > diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c > > index 000fe8da45ef..34096941c0ce 100644 > > --- a/target/riscv/pmu.c > > +++ b/target/riscv/pmu.c > > @@ -19,14 +19,367 @@ > > #include "qemu/osdep.h" > > #include "cpu.h" > > #include "pmu.h" > > +#include "sysemu/cpu-timers.h" > > + > > +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */ > > +#define MAKE_32BIT_MASK(shift, length) \ > > + (((uint32_t)(~0UL) >> (32 - (length))) << (shift)) > > + > > +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx) > > +{ > > + if (ctr_idx < 3 || ctr_idx >=3D RV_MAX_MHPMCOUNTERS || > > + !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) { > > + return false; > > + } else { > > + return true; > > + } > > +} > > + > > +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx) > > +{ > > + CPURISCVState *env =3D &cpu->env; > > + > > + if (riscv_pmu_counter_valid(cpu, ctr_idx) && > > + !get_field(env->mcountinhibit, BIT(ctr_idx))) { > > + return true; > > + } else { > > + return false; > > + } > > +} > > + > > +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx) > > +{ > > + CPURISCVState *env =3D &cpu->env; > > + target_ulong max_val =3D UINT32_MAX; > > + PMUCTRState *counter =3D &env->pmu_ctrs[ctr_idx]; > > + bool virt_on =3D riscv_cpu_virt_enabled(env); > > + > > + /* Privilege mode filtering */ > > + if ((env->priv =3D=3D PRV_M && > > + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) || > > + (env->priv 
=3D=3D PRV_S && virt_on && > > + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) || > > + (env->priv =3D=3D PRV_U && virt_on && > > + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) || > > + (env->priv =3D=3D PRV_S && !virt_on && > > + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) || > > + (env->priv =3D=3D PRV_U && !virt_on && > > + (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) { > > + return 0; > > + } > > + > > + /* Handle the overflow scenario */ > > + if (counter->mhpmcounter_val =3D=3D max_val) { > > + if (counter->mhpmcounterh_val =3D=3D max_val) { > > + counter->mhpmcounter_val =3D 0; > > + counter->mhpmcounterh_val =3D 0; > > + /* Generate interrupt only if OF bit is clear */ > > + if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) { > > + env->mhpmeventh_val[ctr_idx] |=3D MHPMEVENTH_BIT_OF; > > + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1))= ; > > + } > > + } else { > > + counter->mhpmcounterh_val++; > > + } > > + } else { > > + counter->mhpmcounter_val++; > > + } > > + > > + return 0; > > +} > > + > > +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx) > > +{ > > + CPURISCVState *env =3D &cpu->env; > > + PMUCTRState *counter =3D &env->pmu_ctrs[ctr_idx]; > > + uint64_t max_val =3D UINT64_MAX; > > + bool virt_on =3D riscv_cpu_virt_enabled(env); > > + > > + /* Privilege mode filtering */ > > + if ((env->priv =3D=3D PRV_M && > > + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) || > > + (env->priv =3D=3D PRV_S && virt_on && > > + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) || > > + (env->priv =3D=3D PRV_U && virt_on && > > + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) || > > + (env->priv =3D=3D PRV_S && !virt_on && > > + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) || > > + (env->priv =3D=3D PRV_U && !virt_on && > > + (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) { > > + return 0; > > + } > > + > > + /* Handle the overflow scenario */ > > + if 
(counter->mhpmcounter_val =3D=3D max_val) { > > + counter->mhpmcounter_val =3D 0; > > + /* Generate interrupt only if OF bit is clear */ > > + if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) { > > + env->mhpmevent_val[ctr_idx] |=3D MHPMEVENT_BIT_OF; > > + riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1)); > > + } > > + } else { > > + counter->mhpmcounter_val++; > > + } > > + return 0; > > +} > > + > > +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx > event_idx) > > +{ > > + uint32_t ctr_idx; > > + int ret; > > + CPURISCVState *env =3D &cpu->env; > > + gpointer value; > > + > > + value =3D g_hash_table_lookup(cpu->pmu_event_ctr_map, > > + GUINT_TO_POINTER(event_idx)); > > + if (!value) { > > + return -1; > > + } > > + > > + ctr_idx =3D GPOINTER_TO_UINT(value); > > + if (!riscv_pmu_counter_enabled(cpu, ctr_idx) || > > + get_field(env->mcountinhibit, BIT(ctr_idx))) { > > + return -1; > > + } > > + > > + if (riscv_cpu_mxl(env) =3D=3D MXL_RV32) { > > + ret =3D riscv_pmu_incr_ctr_rv32(cpu, ctr_idx); > > + } else { > > + ret =3D riscv_pmu_incr_ctr_rv64(cpu, ctr_idx); > > + } > > + > > + return ret; > > +} > > > > bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env, > > uint32_t target_ctr) > > { > > - return (target_ctr =3D=3D 0) ? true : false; > > + RISCVCPU *cpu; > > + uint32_t event_idx; > > + uint32_t ctr_idx; > > + > > + /* Fixed instret counter */ > > + if (target_ctr =3D=3D 2) { > > + return true; > > + } > > + > > + cpu =3D RISCV_CPU(env_cpu(env)); > > + event_idx =3D RISCV_PMU_EVENT_HW_INSTRUCTIONS; > > + ctr_idx =3D > GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map, > > + GUINT_TO_POINTER(event_idx))); > > + if (!ctr_idx) { > > + return false; > > + } > > + > > + return target_ctr =3D=3D ctr_idx ? true : false; > > } > > > > bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t > target_ctr) > > { > > - return (target_ctr =3D=3D 2) ? 
true : false;
> > +    RISCVCPU *cpu;
> > +    uint32_t event_idx;
> > +    uint32_t ctr_idx;
> > +
> > +    /* Fixed mcycle counter */
> > +    if (target_ctr == 0) {
> > +        return true;
> > +    }
> > +
> > +    cpu = RISCV_CPU(env_cpu(env));
> > +    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> > +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(event_idx)));
> > +
> > +    /* Counter zero is not used for event_ctr_map */
> > +    if (!ctr_idx) {
> > +        return false;
> > +    }
> > +
> > +    return (target_ctr == ctr_idx) ? true : false;
> > +}
> > +
> > +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> > +                                     gpointer udata)
> > +{
> > +    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> > +}
> > +
> > +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> > +{
> > +    int64_t ret = 0;
> > +
> > +    if (icount_enabled()) {
> > +        ret = icount_to_ns(value);
> > +    } else {
> > +        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> > +    }
> > +
> > +    return ret;
> > +}
> > +
> > +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > +                               uint32_t ctr_idx)
> > +{
> > +    uint32_t event_idx;
> > +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> > +
> > +    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> > +        return -1;
> > +    }
> > +
> > +    /**
> > +     * Expected mhpmevent value is zero for reset case. Remove the current
> > +     * mapping.
> > +     */
> > +    if (!value) {
> > +        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> > +                                    pmu_remove_event_map,
> > +                                    GUINT_TO_POINTER(ctr_idx));
> > +        return 0;
> > +    }
> > +
> > +    event_idx = value & MHPMEVENT_IDX_MASK;
> > +    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                            GUINT_TO_POINTER(event_idx))) {
> > +        return 0;
> > +    }
> > +
> > +    switch (event_idx) {
> > +    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> > +    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> > +    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> > +    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> > +    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> > +        break;
> > +    default:
> > +        /* We don't support any raw events right now */
> > +        return -1;
> > +    }
> > +    g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> > +                        GUINT_TO_POINTER(ctr_idx));
> > +
> > +    return 0;
> > +}
> > +
> > +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> > +                                  enum riscv_pmu_event_idx evt_idx)
> > +{
> > +    uint32_t ctr_idx;
> > +    CPURISCVState *env = &cpu->env;
> > +    PMUCTRState *counter;
> > +    target_ulong *mhpmevent_val;
> > +    uint64_t of_bit_mask;
> > +    int64_t irq_trigger_at;
> > +
> > +    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> > +        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> > +        return;
> > +    }
> > +
> > +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> > +                               GUINT_TO_POINTER(evt_idx)));
> > +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> > +        return;
> > +    }
> > +
> > +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> > +        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> > +        of_bit_mask = MHPMEVENTH_BIT_OF;
> > +    } else {
> > +        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> > +        of_bit_mask = MHPMEVENT_BIT_OF;
> > +    }
> > +
> > +    counter = &env->pmu_ctrs[ctr_idx];
> > +    if (counter->irq_overflow_left > 0) {
> > +        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> > +                         counter->irq_overflow_left;
> > +        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> > +        counter->irq_overflow_left = 0;
> > +        return;
> > +    }
> > +
> > +    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> > +        /* Generate interrupt only if OF bit is clear */
> > +        if (!(*mhpmevent_val & of_bit_mask)) {
> > +            *mhpmevent_val |= of_bit_mask;
> > +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> > +        }
> > +    }
> > +}
> > +
> > +/* Timer callback for instret and cycle counter overflow */
> > +void riscv_pmu_timer_cb(void *priv)
> > +{
> > +    RISCVCPU *cpu = priv;
> > +
> > +    /* Timer event was triggered only for these events */
> > +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> > +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> > +}
> > +
> > +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> > +{
> > +    uint64_t overflow_delta, overflow_at;
> > +    int64_t overflow_ns, overflow_left = 0;
> > +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> > +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> > +
> > +    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> > +        return -1;
> > +    }
> > +
> > +    if (value) {
> > +        overflow_delta = UINT64_MAX - value + 1;
> > +    } else {
> > +        overflow_delta = UINT64_MAX;
> > +    }
> > +
> > +    /**
> > +     * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> > +     * Compute the leftover and save it so that it can be reprogrammed again
> > +     * when timer expires.
> > +     */
> > +    if (overflow_delta > INT64_MAX) {
> > +        overflow_left = overflow_delta - INT64_MAX;
> > +    }
> > +
> > +    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> > +        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> > +        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> > +        overflow_left = pmu_icount_ticks_to_ns(overflow_left);
> > +    } else {
> > +        return -1;
> > +    }
> > +    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> > +
> > +    if (overflow_at > INT64_MAX) {
> > +        overflow_left += overflow_at - INT64_MAX;
> > +        counter->irq_overflow_left = overflow_left;
> > +        overflow_at = INT64_MAX;
> > +    }
> > +    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> > +
> > +    return 0;
> > +}
> > +
> > +
> > +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> > +{
> > +    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> > +        return -1;
> > +    }
> > +
> > +    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> > +    if (!cpu->pmu_event_ctr_map) {
> > +        /* PMU support can not be enabled */
> > +        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> > +        cpu->cfg.pmu_num = 0;
> > +        return -1;
> > +    }
> > +
> > +    /* Create a bitmask of available programmable counters */
> > +    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> > +
> > +    return 0;
> > }
> > diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> > index 58a5bc3a4089..036653627f78 100644
> > --- a/target/riscv/pmu.h
> > +++ b/target/riscv/pmu.h
> > @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
> >                                          uint32_t target_ctr);
> > bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
> >                                   uint32_t target_ctr);
> > +void riscv_pmu_timer_cb(void *priv);
> > +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> > +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> > +                               uint32_t ctr_idx);
> > +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> > +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> > +                          uint32_t ctr_idx);
On Wed, Jul 27, 2022 at 1:11 AM Weiwei Li <liweiwei@iscas.ac.cn> wrote:

On 2022/7/27 2:49 PM, Atish Patra wrote:
> The Sscofpmf ('Ss' for Privileged arch and Supervisor-level extensions,
> and 'cofpmf' for Count OverFlow and Privilege Mode Filtering)
> extension allows perf to handle overflow interrupts and filtering
> support. This patch provides a framework for programmable
> counters to leverage the extension. As the extension doesn't have any
> provision for an overflow bit for fixed counters, the fixed events
> can also be monitored using programmable counters. The underlying
> counters for cycle and instruction counters are always running. Thus,
> a separate timer device is programmed to handle the overflow.
>
> Tested-by: Heiko Stuebner <heiko@sntech.de>
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> Signed-off-by: Atish Patra <atishp@rivosinc.com>
> ---
>  target/riscv/cpu.c      |  11 ++
>  target/riscv/cpu.h      |  25 +++
>  target/riscv/cpu_bits.h |  55 +++++++
>  target/riscv/csr.c      | 166 +++++++++++++++++-
>  target/riscv/machine.c  |   1 +
>  target/riscv/pmu.c      | 357 +++++++++++++++++++++++++++++++++++++++-
>  target/riscv/pmu.h      |   7 +
>  7 files changed, 611 insertions(+), 11 deletions(-)
>
> diff --git a/target/riscv/cpu.c b/target/riscv/cpu.c
> index 1bb3973806d2..c1d62b81a725 100644
> --- a/target/riscv/cpu.c
> +++ b/target/riscv/cpu.c
> @@ -22,6 +22,7 @@
>  #include "qemu/ctype.h"
>  #include "qemu/log.h"
>  #include "cpu.h"
> +#include "pmu.h"
>  #include "internals.h"
>  #include "exec/exec-all.h"
>  #include "qapi/error.h"
> @@ -779,6 +780,15 @@ static void riscv_cpu_realize(DeviceState *dev, Error **errp)
>          set_misa(env, env->misa_mxl, ext);
>      }
>
> +#ifndef CONFIG_USER_ONLY
> +    if (cpu->cfg.pmu_num) {
> +        if (!riscv_pmu_init(cpu, cpu->cfg.pmu_num) && cpu->cfg.ext_sscofpmf) {
> +            cpu->pmu_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL,
> +                                          riscv_pmu_timer_cb, cpu);
> +        }
> +    }
> +#endif
> +
>      riscv_cpu_register_gdb_regs_for_features(cs);
>
>      qemu_init_vcpu(cs);
> @@ -883,6 +893,7 @@ static Property riscv_cpu_extensions[] = {
>      DEFINE_PROP_BOOL("v", RISCVCPU, cfg.ext_v, false),
>      DEFINE_PROP_BOOL("h", RISCVCPU, cfg.ext_h, true),
>      DEFINE_PROP_UINT8("pmu-num", RISCVCPU, cfg.pmu_num, 16),
> +    DEFINE_PROP_BOOL("sscofpmf", RISCVCPU, cfg.ext_sscofpmf, false),
>      DEFINE_PROP_BOOL("Zifencei", RISCVCPU, cfg.ext_ifencei, true),
>      DEFINE_PROP_BOOL("Zicsr", RISCVCPU, cfg.ext_icsr, true),
>      DEFINE_PROP_BOOL("Zfh", RISCVCPU, cfg.ext_zfh, false),
> diff --git a/target/riscv/cpu.h b/target/riscv/cpu.h
> index 5c7acc055ac9..2222db193c3d 100644
> --- a/target/riscv/cpu.h
> +++ b/target/riscv/cpu.h
> @@ -137,6 +137,8 @@ typedef struct PMUCTRState {
>      /* Snapshot value of a counter in RV32 */
>      target_ulong mhpmcounterh_prev;
>      bool started;
> +    /* Value beyond UINT32_MAX/UINT64_MAX before overflow interrupt trigger */
> +    target_ulong irq_overflow_left;
>  } PMUCTRState;
>
>  struct CPUArchState {
> @@ -297,6 +299,9 @@ struct CPUArchState {
>      /* PMU event selector configured values. First three are unused */
>      target_ulong mhpmevent_val[RV_MAX_MHPMEVENTS];
>
> +    /* PMU event selector configured values for RV32 */
> +    target_ulong mhpmeventh_val[RV_MAX_MHPMEVENTS];
> +
>      target_ulong sscratch;
>      target_ulong mscratch;
>
> @@ -433,6 +438,7 @@ struct RISCVCPUConfig {
>      bool ext_zve32f;
>      bool ext_zve64f;
>      bool ext_zmmul;
> +    bool ext_sscofpmf;
>      bool rvv_ta_all_1s;
>
>      uint32_t mvendorid;
> @@ -479,6 +485,12 @@ struct ArchCPU {
>
>      /* Configuration Settings */
>      RISCVCPUConfig cfg;
> +
> +    QEMUTimer *pmu_timer;
> +    /* A bitmask of available programmable counters */
> +    uint32_t pmu_avail_ctrs;
> +    /* Mapping of events to counters */
> +    GHashTable *pmu_event_ctr_map;
>  };
>
>  static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
> @@ -738,6 +750,19 @@ enum {
>      CSR_TABLE_SIZE = 0x1000
>  };
>
> +/**
> + * The event ids are encoded based on the encoding specified in the
> + * SBI specification v0.3
> + */
> +
> +enum riscv_pmu_event_idx {
> +    RISCV_PMU_EVENT_HW_CPU_CYCLES = 0x01,
> +    RISCV_PMU_EVENT_HW_INSTRUCTIONS = 0x02,
> +    RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS = 0x10019,
> +    RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS = 0x1001B,
> +    RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS = 0x10021,
> +};
> +
>  /* CSR function table */
>  extern riscv_csr_operations csr_ops[CSR_TABLE_SIZE];
>
> diff --git a/target/riscv/cpu_bits.h b/target/riscv/cpu_bits.h
> index 6be5a9e9f046..b63c586be563 100644
> --- a/target/riscv/cpu_bits.h
> +++ b/target/riscv/cpu_bits.h
> @@ -382,6 +382,37 @@
>  #define CSR_MHPMEVENT29     0x33d
>  #define CSR_MHPMEVENT30     0x33e
>  #define CSR_MHPMEVENT31     0x33f
> +
> +#define CSR_MHPMEVENT3H     0x723
> +#define CSR_MHPMEVENT4H     0x724
> +#define CSR_MHPMEVENT5H     0x725
> +#define CSR_MHPMEVENT6H     0x726
> +#define CSR_MHPMEVENT7H     0x727
> +#define CSR_MHPMEVENT8H     0x728
> +#define CSR_MHPMEVENT9H     0x729
> +#define CSR_MHPMEVENT10H    0x72a
> +#define CSR_MHPMEVENT11H    0x72b
> +#define CSR_MHPMEVENT12H    0x72c
> +#define CSR_MHPMEVENT13H    0x72d
> +#define CSR_MHPMEVENT14H    0x72e
> +#define CSR_MHPMEVENT15H    0x72f
> +#define CSR_MHPMEVENT16H    0x730
> +#define CSR_MHPMEVENT17H    0x731
> +#define CSR_MHPMEVENT18H    0x732
> +#define CSR_MHPMEVENT19H    0x733
> +#define CSR_MHPMEVENT20H    0x734
> +#define CSR_MHPMEVENT21H    0x735
> +#define CSR_MHPMEVENT22H    0x736
> +#define CSR_MHPMEVENT23H    0x737
> +#define CSR_MHPMEVENT24H    0x738
> +#define CSR_MHPMEVENT25H    0x739
> +#define CSR_MHPMEVENT26H    0x73a
> +#define CSR_MHPMEVENT27H    0x73b
> +#define CSR_MHPMEVENT28H    0x73c
> +#define CSR_MHPMEVENT29H    0x73d
> +#define CSR_MHPMEVENT30H    0x73e
> +#define CSR_MHPMEVENT31H    0x73f
> +
>  #define CSR_MHPMCOUNTER3H   0xb83
>  #define CSR_MHPMCOUNTER4H   0xb84
>  #define CSR_MHPMCOUNTER5H   0xb85
> @@ -443,6 +474,7 @@
>  #define CSR_VSMTE           0x2c0
>  #define CSR_VSPMMASK        0x2c1
>  #define CSR_VSPMBASE        0x2c2
> +#define CSR_SCOUNTOVF       0xda0
>
>  /* Crypto Extension */
>  #define CSR_SEED            0x015
> @@ -620,6 +652,7 @@ typedef enum RISCVException {
>  #define IRQ_VS_EXT                         10
>  #define IRQ_M_EXT                          11
>  #define IRQ_S_GEXT                         12
> +#define IRQ_PMU_OVF                        13
>  #define IRQ_LOCAL_MAX                      16
>  #define IRQ_LOCAL_GUEST_MAX                (TARGET_LONG_BITS - 1)
>
> @@ -637,11 +670,13 @@ typedef enum RISCVException {
>  #define MIP_VSEIP                          (1 << IRQ_VS_EXT)
>  #define MIP_MEIP                           (1 << IRQ_M_EXT)
>  #define MIP_SGEIP                          (1 << IRQ_S_GEXT)
> +#define MIP_LCOFIP                         (1 << IRQ_PMU_OVF)
>
>  /* sip masks */
>  #define SIP_SSIP                           MIP_SSIP
>  #define SIP_STIP                           MIP_STIP
>  #define SIP_SEIP                           MIP_SEIP
> +#define SIP_LCOFIP                         MIP_LCOFIP
>
>  /* MIE masks */
>  #define MIE_SEIE                           (1 << IRQ_S_EXT)
> @@ -795,4 +830,24 @@ typedef enum RISCVException {
>  #define SEED_OPST_WAIT                     (0b01 << 30)
>  #define SEED_OPST_ES16                     (0b10 << 30)
>  #define SEED_OPST_DEAD                     (0b11 << 30)
> +/* PMU related bits */
> +#define MIE_LCOFIE                         (1 << IRQ_PMU_OVF)
> +
> +#define MHPMEVENT_BIT_OF                   BIT_ULL(63)
> +#define MHPMEVENTH_BIT_OF                  BIT(31)
> +#define MHPMEVENT_BIT_MINH                 BIT_ULL(62)
> +#define MHPMEVENTH_BIT_MINH                BIT(30)
> +#define MHPMEVENT_BIT_SINH                 BIT_ULL(61)
> +#define MHPMEVENTH_BIT_SINH                BIT(29)
> +#define MHPMEVENT_BIT_UINH                 BIT_ULL(60)
> +#define MHPMEVENTH_BIT_UINH                BIT(28)
> +#define MHPMEVENT_BIT_VSINH                BIT_ULL(59)
> +#define MHPMEVENTH_BIT_VSINH               BIT(27)
> +#define MHPMEVENT_BIT_VUINH                BIT_ULL(58)
> +#define MHPMEVENTH_BIT_VUINH               BIT(26)
> +
> +#define MHPMEVENT_SSCOF_MASK               _ULL(0xFFFF000000000000)
> +#define MHPMEVENT_IDX_MASK                 0xFFFFF
> +#define MHPMEVENT_SSCOF_RESVD              16
> +
>  #endif
> diff --git a/target/riscv/csr.c b/target/riscv/csr.c
> index 235f2a011e70..1233bfa0a726 100644
> --- a/target/riscv/csr.c
> +++ b/target/riscv/csr.c
> @@ -74,7 +74,7 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>      CPUState *cs = env_cpu(env);
>      RISCVCPU *cpu = RISCV_CPU(cs);
>      int ctr_index;
> -    int base_csrno = CSR_HPMCOUNTER3;
> +    int base_csrno = CSR_CYCLE;
>      bool rv32 = riscv_cpu_mxl(env) == MXL_RV32 ? true : false;
>
>      if (rv32 && csrno >= CSR_CYCLEH) {
> @@ -83,11 +83,18 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>      }
>      ctr_index = csrno - base_csrno;
>
> -    if (!cpu->cfg.pmu_num || ctr_index >= (cpu->cfg.pmu_num)) {
> +    if ((csrno >= CSR_CYCLE && csrno <= CSR_INSTRET) ||
> +        (csrno >= CSR_CYCLEH && csrno <= CSR_INSTRETH)) {
> +        goto skip_ext_pmu_check;
> +    }
> +
> +    if ((!cpu->cfg.pmu_num || !(cpu->pmu_avail_ctrs & BIT(ctr_index)))) {
>          /* No counter is enabled in PMU or the counter is out of range */
>          return RISCV_EXCP_ILLEGAL_INST;
>      }
>
Maybe it's better to remove !cpu->cfg.pmu_num here, rather than in a later commit.
> +skip_ext_pmu_check:
> +
>      if (env->priv == PRV_S) {
>          switch (csrno) {
>          case CSR_CYCLE:
> @@ -106,7 +113,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>              }
>              break;
>          case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            ctr_index = csrno - CSR_CYCLE;
>              if (!get_field(env->mcounteren, 1 << ctr_index)) {
>                  return RISCV_EXCP_ILLEGAL_INST;
>              }
> @@ -130,7 +136,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>              }
>              break;
>          case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -            ctr_index = csrno - CSR_CYCLEH;
>              if (!get_field(env->mcounteren, 1 << ctr_index)) {
>                  return RISCV_EXCP_ILLEGAL_INST;
>              }
> @@ -160,7 +165,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>              }
>              break;
>          case CSR_HPMCOUNTER3...CSR_HPMCOUNTER31:
> -            ctr_index = csrno - CSR_CYCLE;
>              if (!get_field(env->hcounteren, 1 << ctr_index) &&
>                  get_field(env->mcounteren, 1 << ctr_index)) {
>                  return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -188,7 +192,6 @@ static RISCVException ctr(CPURISCVState *env, int csrno)
>              }
>              break;
>          case CSR_HPMCOUNTER3H...CSR_HPMCOUNTER31H:
> -            ctr_index = csrno - CSR_CYCLEH;
>              if (!get_field(env->hcounteren, 1 << ctr_index) &&
>                  get_field(env->mcounteren, 1 << ctr_index)) {
>                  return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
> @@ -240,6 +243,18 @@ static RISCVException mctr32(CPURISCVState *env, int csrno)
>      return mctr(env, csrno);
>  }
>
> +static RISCVException sscofpmf(CPURISCVState *env, int csrno)
> +{
> +    CPUState *cs = env_cpu(env);
> +    RISCVCPU *cpu = RISCV_CPU(cs);
> +
> +    if (!cpu->cfg.ext_sscofpmf) {
> +        return RISCV_EXCP_ILLEGAL_INST;
> +    }
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
>  static RISCVException any(CPURISCVState *env, int csrno)
>  {
>      return RISCV_EXCP_NONE;
> @@ -663,9 +678,39 @@ static int read_mhpmevent(CPURISCVState *env, int csrno, target_ulong *val)
>  static int write_mhpmevent(CPURISCVState *env, int csrno, target_ulong val)
>  {
>      int evt_index = csrno - CSR_MCOUNTINHIBIT;
> +    uint64_t mhpmevt_val = val;
>
>      env->mhpmevent_val[evt_index] = val;
>
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpmevt_val = mhpmevt_val |
> +                      ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
> +    }
> +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
> +static int read_mhpmeventh(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +
> +    *val = env->mhpmeventh_val[evt_index];
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
> +static int write_mhpmeventh(CPURISCVState *env, int csrno, target_ulong val)
> +{
> +    int evt_index = csrno - CSR_MHPMEVENT3H + 3;
> +    uint64_t mhpmevth_val = val;
> +    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
> +
> +    mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
> +    env->mhpmeventh_val[evt_index] = val;
> +
> +    riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
> +
>      return RISCV_EXCP_NONE;
>  }
>
> @@ -673,12 +718,20 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
>  {
>      int ctr_idx = csrno - CSR_MCYCLE;
>      PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t mhpmctr_val = val;
>
>      counter->mhpmcounter_val = val;
>      if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>          riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>          counter->mhpmcounter_prev = get_ticks(false);
> +        if (ctr_idx > 2) {
> +            if (riscv_cpu_mxl(env) == MXL_RV32) {
> +                mhpmctr_val = mhpmctr_val |
> +                              ((uint64_t)counter->mhpmcounterh_val << 32);
> +            }
> +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> +        }
> +    } else {
>          /* Other counters can keep incrementing from the given value */
>          counter->mhpmcounter_prev = val;
>      }
> @@ -690,11 +743,17 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
>  {
>      int ctr_idx = csrno - CSR_MCYCLEH;
>      PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t mhpmctr_val = counter->mhpmcounter_val;
> +    uint64_t mhpmctrh_val = val;
>
>      counter->mhpmcounterh_val = val;
> +    mhpmctr_val = mhpmctr_val | (mhpmctrh_val << 32);
>      if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
>          riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
>          counter->mhpmcounterh_prev = get_ticks(true);
> +        if (ctr_idx > 2) {
> +            riscv_pmu_setup_timer(env, mhpmctr_val, ctr_idx);
> +        }
>      } else {
>          counter->mhpmcounterh_prev = val;
>      }
> @@ -770,6 +829,32 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
>      return riscv_pmu_read_ctr(env, val, true, ctr_index);
>  }
>
> +static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
> +{
> +    int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
> +    int i;
> +    *val = 0;
> +    target_ulong *mhpm_evt_val;
> +    uint64_t of_bit_mask;
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpm_evt_val = env->mhpmeventh_val;
> +        of_bit_mask = MHPMEVENTH_BIT_OF;
> +    } else {
> +        mhpm_evt_val = env->mhpmevent_val;
> +        of_bit_mask = MHPMEVENT_BIT_OF;
> +    }
> +
> +    for (i = mhpmevt_start; i < RV_MAX_MHPMEVENTS; i++) {
> +        if ((get_field(env->mcounteren, BIT(i))) &&
> +            (mhpm_evt_val[i] & of_bit_mask)) {
> +                *val |= BIT(i);
> +        }
> +    }
> +
> +    return RISCV_EXCP_NONE;
> +}
> +
>  static RISCVException read_time(CPURISCVState *env, int csrno,
>                                  target_ulong *val)
>  {
> @@ -799,7 +884,8 @@ static RISCVException read_timeh(CPURISCVState *env, int csrno,
>  /* Machine constants */
>
>  #define M_MODE_INTERRUPTS  ((uint64_t)(MIP_MSIP | MIP_MTIP | MIP_MEIP))
> -#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP))
> +#define S_MODE_INTERRUPTS  ((uint64_t)(MIP_SSIP | MIP_STIP | MIP_SEIP | \
> +                                       MIP_LCOFIP))

It's better to align with MIP_SSIP here.

>  #define VS_MODE_INTERRUPTS ((uint64_t)(MIP_VSSIP | MIP_VSTIP | MIP_VSEIP))
>  #define HS_MODE_INTERRUPTS ((uint64_t)(MIP_SGEIP | VS_MODE_INTERRUPTS))
>
> @@ -840,7 +926,8 @@ static const target_ulong vs_delegable_excps = DELEGABLE_EXCPS &
>  static const target_ulong sstatus_v1_10_mask = SSTATUS_SIE | SSTATUS_SPIE |
>      SSTATUS_UIE | SSTATUS_UPIE | SSTATUS_SPP | SSTATUS_FS | SSTATUS_XS |
>      SSTATUS_SUM | SSTATUS_MXR | SSTATUS_VS;
> -static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP;
> +static const target_ulong sip_writable_mask = SIP_SSIP | MIP_USIP | MIP_UEIP |
> +                                              SIP_LCOFIP;
>  static const target_ulong hip_writable_mask = MIP_VSSIP;
>  static const target_ulong hvip_writable_mask = MIP_VSSIP | MIP_VSTIP | MIP_VSEIP;
>  static const target_ulong vsip_writable_mask = MIP_VSSIP;
> @@ -3861,6 +3948,65 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>      [CSR_MHPMEVENT31]    = { "mhpmevent31",    any,    read_mhpmevent,
>                                                         write_mhpmevent    },
>
> +    [CSR_MHPMEVENT3H]    = { "mhpmevent3h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},

The new lines have been updated to align with the last line in my
previous (accepted) patchset.

You can see it if you rebase onto riscv-to-apply.next. So it's better to
align write_mhpmeventh with the ' " '. The same goes for the following
new lines.


Got it. Rebased it. But I aligned the '{' with CSR_HPMCOUNTER3H onwards.
We probably should align CSR_MHPMEVENT3..CSR_MHPMEVENT31 accordingly as well.
Regards,

Weiwei Li

> +    [CSR_MHPMEVENT4H]    = { "mhpmevent4h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT5H]    = { "mhpmevent5h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT6H]    = { "mhpmevent6h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT7H]    = { "mhpmevent7h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT8H]    = { "mhpmevent8h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT9H]    = { "mhpmevent9h",    sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT10H]   = { "mhpmevent10h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT11H]   = { "mhpmevent11h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT12H]   = { "mhpmevent12h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT13H]   = { "mhpmevent13h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT14H]   = { "mhpmevent14h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT15H]   = { "mhpmevent15h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT16H]   = { "mhpmevent16h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT17H]   = { "mhpmevent17h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT18H]   = { "mhpmevent18h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT19H]   = { "mhpmevent19h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT20H]   = { "mhpmevent20h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT21H]   = { "mhpmevent21h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT22H]   = { "mhpmevent22h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT23H]   = { "mhpmevent23h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT24H]   = { "mhpmevent24h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT25H]   = { "mhpmevent25h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT26H]   = { "mhpmevent26h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT27H]   = { "mhpmevent27h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT28H]   = { "mhpmevent28h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT29H]   = { "mhpmevent29h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT30H]   = { "mhpmevent30h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +    [CSR_MHPMEVENT31H]   = { "mhpmevent31h",   sscofpmf,  read_mhpmeventh,
> +                                                          write_mhpmeventh},
> +
>     [CSR_HPMCOUNTER3H]   = { "hpmcounter3h",   ctr32,  read_hpmcounterh },
>     [CSR_HPMCOUNTER4H]   = { "hpmcounter4h",   ctr32,  read_hpmcounterh },
>     [CSR_HPMCOUNTER5H]   = { "hpmcounter5h",   ctr32,  read_hpmcounterh },
> @@ -3949,5 +4095,7 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
>                                                          write_mhpmcounterh },
>     [CSR_MHPMCOUNTER31H] = { "mhpmcounter31h", mctr32,  read_hpmcounterh,
>                                                          write_mhpmcounterh },
> +    [CSR_SCOUNTOVF]      = { "scountovf", sscofpmf,  read_scountovf },
> +
>  #endif /* !CONFIG_USER_ONLY */
>  };
> diff --git a/target/riscv/machine.c b/target/riscv/machine.c
> index dc182ca81119..33ef9b8e9908 100644
> --- a/target/riscv/machine.c
> +++ b/target/riscv/machine.c
> @@ -355,6 +355,7 @@ const VMStateDescription vmstate_riscv_cpu = {
>         VMSTATE_STRUCT_ARRAY(env.pmu_ctrs, RISCVCPU, RV_MAX_MHPMCOUNTERS, 0,
>                              vmstate_pmu_ctr_state, PMUCTRState),
>         VMSTATE_UINTTL_ARRAY(env.mhpmevent_val, RISCVCPU, RV_MAX_MHPMEVENTS),
> +        VMSTATE_UINTTL_ARRAY(env.mhpmeventh_val, RISCVCPU, RV_MAX_MHPMEVENTS),
>         VMSTATE_UINTTL(env.sscratch, RISCVCPU),
>         VMSTATE_UINTTL(env.mscratch, RISCVCPU),
>         VMSTATE_UINT64(env.mfromhost, RISCVCPU),
> diff --git a/target/riscv/pmu.c b/target/riscv/pmu.c
> index 000fe8da45ef..34096941c0ce 100644
> --- a/target/riscv/pmu.c
> +++ b/target/riscv/pmu.c
> @@ -19,14 +19,367 @@
>  #include "qemu/osdep.h"
>  #include "cpu.h"
>  #include "pmu.h"
> +#include "sysemu/cpu-timers.h"
> +
> +#define RISCV_TIMEBASE_FREQ 1000000000 /* 1Ghz */
> +#define MAKE_32BIT_MASK(shift, length) \
> +        (((uint32_t)(~0UL) >> (32 - (length))) << (shift))
> +
> +static bool riscv_pmu_counter_valid(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    if (ctr_idx < 3 || ctr_idx >= RV_MAX_MHPMCOUNTERS ||
> +        !(cpu->pmu_avail_ctrs & BIT(ctr_idx))) {
> +        return false;
> +    } else {
> +        return true;
> +    }
> +}
> +
> +static bool riscv_pmu_counter_enabled(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +
> +    if (riscv_pmu_counter_valid(cpu, ctr_idx) &&
> +        !get_field(env->mcountinhibit, BIT(ctr_idx))) {
> +        return true;
> +    } else {
> +        return false;
> +    }
> +}
> +
> +static int riscv_pmu_incr_ctr_rv32(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +    target_ulong max_val = UINT32_MAX;
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    bool virt_on = riscv_cpu_virt_enabled(env);
> +
> +    /* Privilege mode filtering */
> +    if ((env->priv == PRV_M &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_MINH)) ||
> +        (env->priv == PRV_S && virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VSINH)) ||
> +        (env->priv == PRV_U && virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_VUINH)) ||
> +        (env->priv == PRV_S && !virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_SINH)) ||
> +        (env->priv == PRV_U && !virt_on &&
> +        (env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_UINH))) {
> +        return 0;
> +    }
> +
> +    /* Handle the overflow scenario */
> +    if (counter->mhpmcounter_val == max_val) {
> +        if (counter->mhpmcounterh_val == max_val) {
> +            counter->mhpmcounter_val = 0;
> +            counter->mhpmcounterh_val = 0;
> +            /* Generate interrupt only if OF bit is clear */
> +            if (!(env->mhpmeventh_val[ctr_idx] & MHPMEVENTH_BIT_OF)) {
> +                env->mhpmeventh_val[ctr_idx] |= MHPMEVENTH_BIT_OF;
> +                riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +            }
> +        } else {
> +            counter->mhpmcounterh_val++;
> +        }
> +    } else {
> +        counter->mhpmcounter_val++;
> +    }
> +
> +    return 0;
> +}
> +
> +static int riscv_pmu_incr_ctr_rv64(RISCVCPU *cpu, uint32_t ctr_idx)
> +{
> +    CPURISCVState *env = &cpu->env;
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +    uint64_t max_val = UINT64_MAX;
> +    bool virt_on = riscv_cpu_virt_enabled(env);
> +
> +    /* Privilege mode filtering */
> +    if ((env->priv == PRV_M &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_MINH)) ||
> +        (env->priv == PRV_S && virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VSINH)) ||
> +        (env->priv == PRV_U && virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_VUINH)) ||
> +        (env->priv == PRV_S && !virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_SINH)) ||
> +        (env->priv == PRV_U && !virt_on &&
> +        (env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_UINH))) {
> +        return 0;
> +    }
> +
> +    /* Handle the overflow scenario */
> +    if (counter->mhpmcounter_val == max_val) {
> +        counter->mhpmcounter_val = 0;
> +        /* Generate interrupt only if OF bit is clear */
> +        if (!(env->mhpmevent_val[ctr_idx] & MHPMEVENT_BIT_OF)) {
> +            env->mhpmevent_val[ctr_idx] |= MHPMEVENT_BIT_OF;
> +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +        }
> +    } else {
> +        counter->mhpmcounter_val++;
> +    }
> +    return 0;
> +}
> +
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx)
> +{
> +    uint32_t ctr_idx;
> +    int ret;
> +    CPURISCVState *env = &cpu->env;
> +    gpointer value;
> +
> +    value = g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                                GUINT_TO_POINTER(event_idx));
> +    if (!value) {
> +        return -1;
> +    }
> +
> +    ctr_idx = GPOINTER_TO_UINT(value);
> +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx) ||
> +        get_field(env->mcountinhibit, BIT(ctr_idx))) {
> +        return -1;
> +    }
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        ret = riscv_pmu_incr_ctr_rv32(cpu, ctr_idx);
> +    } else {
> +        ret = riscv_pmu_incr_ctr_rv64(cpu, ctr_idx);
> +    }
> +
> +    return ret;
> +}
> 
>  bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>                                          uint32_t target_ctr)
>  {
> -    return (target_ctr == 0) ? true : false;
> +    RISCVCPU *cpu;
> +    uint32_t event_idx;
> +    uint32_t ctr_idx;
> +
> +    /* Fixed instret counter */
> +    if (target_ctr == 2) {
> +        return true;
> +    }
> +
> +    cpu = RISCV_CPU(env_cpu(env));
> +    event_idx = RISCV_PMU_EVENT_HW_INSTRUCTIONS;
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                                                   GUINT_TO_POINTER(event_idx)));
> +    if (!ctr_idx) {
> +        return false;
> +    }
> +
> +    return target_ctr == ctr_idx ? true : false;
>  }
> 
>  bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env, uint32_t target_ctr)
>  {
> -    return (target_ctr == 2) ? true : false;
> +    RISCVCPU *cpu;
> +    uint32_t event_idx;
> +    uint32_t ctr_idx;
> +
> +    /* Fixed mcycle counter */
> +    if (target_ctr == 0) {
> +        return true;
> +    }
> +
> +    cpu = RISCV_CPU(env_cpu(env));
> +    event_idx = RISCV_PMU_EVENT_HW_CPU_CYCLES;
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                                                   GUINT_TO_POINTER(event_idx)));
> +
> +    /* Counter zero is not used for event_ctr_map */
> +    if (!ctr_idx) {
> +        return false;
> +    }
> +
> +    return (target_ctr == ctr_idx) ? true : false;
> +}
> +
> +static gboolean pmu_remove_event_map(gpointer key, gpointer value,
> +                                     gpointer udata)
> +{
> +    return (GPOINTER_TO_UINT(value) == GPOINTER_TO_UINT(udata)) ? true : false;
> +}
> +
> +static int64_t pmu_icount_ticks_to_ns(int64_t value)
> +{
> +    int64_t ret = 0;
> +
> +    if (icount_enabled()) {
> +        ret = icount_to_ns(value);
> +    } else {
> +        ret = (NANOSECONDS_PER_SECOND / RISCV_TIMEBASE_FREQ) * value;
> +    }
> +
> +    return ret;
> +}
> +
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> +                               uint32_t ctr_idx)
> +{
> +    uint32_t event_idx;
> +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +
> +    if (!riscv_pmu_counter_valid(cpu, ctr_idx)) {
> +        return -1;
> +    }
> +
> +    /**
> +     * Expected mhpmevent value is zero for reset case. Remove the current
> +     * mapping.
> +     */
> +    if (!value) {
> +        g_hash_table_foreach_remove(cpu->pmu_event_ctr_map,
> +                                    pmu_remove_event_map,
> +                                    GUINT_TO_POINTER(ctr_idx));
> +        return 0;
> +    }
> +
> +    event_idx = value & MHPMEVENT_IDX_MASK;
> +    if (g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                            GUINT_TO_POINTER(event_idx))) {
> +        return 0;
> +    }
> +
> +    switch (event_idx) {
> +    case RISCV_PMU_EVENT_HW_CPU_CYCLES:
> +    case RISCV_PMU_EVENT_HW_INSTRUCTIONS:
> +    case RISCV_PMU_EVENT_CACHE_DTLB_READ_MISS:
> +    case RISCV_PMU_EVENT_CACHE_DTLB_WRITE_MISS:
> +    case RISCV_PMU_EVENT_CACHE_ITLB_PREFETCH_MISS:
> +        break;
> +    default:
> +        /* We don't support any raw events right now */
> +        return -1;
> +    }
> +    g_hash_table_insert(cpu->pmu_event_ctr_map, GUINT_TO_POINTER(event_idx),
> +                        GUINT_TO_POINTER(ctr_idx));
> +
> +    return 0;
> +}
> +
> +static void pmu_timer_trigger_irq(RISCVCPU *cpu,
> +                                  enum riscv_pmu_event_idx evt_idx)
> +{
> +    uint32_t ctr_idx;
> +    CPURISCVState *env = &cpu->env;
> +    PMUCTRState *counter;
> +    target_ulong *mhpmevent_val;
> +    uint64_t of_bit_mask;
> +    int64_t irq_trigger_at;
> +
> +    if (evt_idx != RISCV_PMU_EVENT_HW_CPU_CYCLES &&
> +        evt_idx != RISCV_PMU_EVENT_HW_INSTRUCTIONS) {
> +        return;
> +    }
> +
> +    ctr_idx = GPOINTER_TO_UINT(g_hash_table_lookup(cpu->pmu_event_ctr_map,
> +                                                   GUINT_TO_POINTER(evt_idx)));
> +    if (!riscv_pmu_counter_enabled(cpu, ctr_idx)) {
> +        return;
> +    }
> +
> +    if (riscv_cpu_mxl(env) == MXL_RV32) {
> +        mhpmevent_val = &env->mhpmeventh_val[ctr_idx];
> +        of_bit_mask = MHPMEVENTH_BIT_OF;
> +    } else {
> +        mhpmevent_val = &env->mhpmevent_val[ctr_idx];
> +        of_bit_mask = MHPMEVENT_BIT_OF;
> +    }
> +
> +    counter = &env->pmu_ctrs[ctr_idx];
> +    if (counter->irq_overflow_left > 0) {
> +        irq_trigger_at = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
> +                         counter->irq_overflow_left;
> +        timer_mod_anticipate_ns(cpu->pmu_timer, irq_trigger_at);
> +        counter->irq_overflow_left = 0;
> +        return;
> +    }
> +
> +    if (cpu->pmu_avail_ctrs & BIT(ctr_idx)) {
> +        /* Generate interrupt only if OF bit is clear */
> +        if (!(*mhpmevent_val & of_bit_mask)) {
> +            *mhpmevent_val |= of_bit_mask;
> +            riscv_cpu_update_mip(cpu, MIP_LCOFIP, BOOL_TO_MASK(1));
> +        }
> +    }
> +}
> +
> +/* Timer callback for instret and cycle counter overflow */
> +void riscv_pmu_timer_cb(void *priv)
> +{
> +    RISCVCPU *cpu = priv;
> +
> +    /* Timer event was triggered only for these events */
> +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_CPU_CYCLES);
> +    pmu_timer_trigger_irq(cpu, RISCV_PMU_EVENT_HW_INSTRUCTIONS);
> +}
> +
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value, uint32_t ctr_idx)
> +{
> +    uint64_t overflow_delta, overflow_at;
> +    int64_t overflow_ns, overflow_left = 0;
> +    RISCVCPU *cpu = RISCV_CPU(env_cpu(env));
> +    PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
> +
> +    if (!riscv_pmu_counter_valid(cpu, ctr_idx) || !cpu->cfg.ext_sscofpmf) {
> +        return -1;
> +    }
> +
> +    if (value) {
> +        overflow_delta = UINT64_MAX - value + 1;
> +    } else {
> +        overflow_delta = UINT64_MAX;
> +    }
> +
> +    /**
> +     * QEMU supports only int64_t timers while RISC-V counters are uint64_t.
> +     * Compute the leftover and save it so that it can be reprogrammed again
> +     * when timer expires.
> +     */
> +    if (overflow_delta > INT64_MAX) {
> +        overflow_left = overflow_delta - INT64_MAX;
> +    }
> +
> +    if (riscv_pmu_ctr_monitor_cycles(env, ctr_idx) ||
> +        riscv_pmu_ctr_monitor_instructions(env, ctr_idx)) {
> +        overflow_ns = pmu_icount_ticks_to_ns((int64_t)overflow_delta);
> +        overflow_left = pmu_icount_ticks_to_ns(overflow_left);
> +    } else {
> +        return -1;
> +    }
> +    overflow_at = (uint64_t)qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + overflow_ns;
> +
> +    if (overflow_at > INT64_MAX) {
> +        overflow_left += overflow_at - INT64_MAX;
> +        counter->irq_overflow_left = overflow_left;
> +        overflow_at = INT64_MAX;
> +    }
> +    timer_mod_anticipate_ns(cpu->pmu_timer, overflow_at);
> +
> +    return 0;
> +}
> +
> +
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters)
> +{
> +    if (num_counters > (RV_MAX_MHPMCOUNTERS - 3)) {
> +        return -1;
> +    }
> +
> +    cpu->pmu_event_ctr_map = g_hash_table_new(g_direct_hash, g_direct_equal);
> +    if (!cpu->pmu_event_ctr_map) {
> +        /* PMU support can not be enabled */
> +        qemu_log_mask(LOG_UNIMP, "PMU events can't be supported\n");
> +        cpu->cfg.pmu_num = 0;
> +        return -1;
> +    }
> +
> +    /* Create a bitmask of available programmable counters */
> +    cpu->pmu_avail_ctrs = MAKE_32BIT_MASK(3, num_counters);
> +
> +    return 0;
>  }
> diff --git a/target/riscv/pmu.h b/target/riscv/pmu.h
> index 58a5bc3a4089..036653627f78 100644
> --- a/target/riscv/pmu.h
> +++ b/target/riscv/pmu.h
> @@ -26,3 +26,10 @@ bool riscv_pmu_ctr_monitor_instructions(CPURISCVState *env,
>                                          uint32_t target_ctr);
>  bool riscv_pmu_ctr_monitor_cycles(CPURISCVState *env,
>                                    uint32_t target_ctr);
> +void riscv_pmu_timer_cb(void *priv);
> +int riscv_pmu_init(RISCVCPU *cpu, int num_counters);
> +int riscv_pmu_update_event_map(CPURISCVState *env, uint64_t value,
> +                               uint32_t ctr_idx);
> +int riscv_pmu_incr_ctr(RISCVCPU *cpu, enum riscv_pmu_event_idx event_idx);
> +int riscv_pmu_setup_timer(CPURISCVState *env, uint64_t value,
> +                          uint32_t ctr_idx);