From: Dapeng Mi <dapeng1.mi@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Jim Mattson <jmattson@google.com>,
Mingwei Zhang <mizhang@google.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
Xiong Zhang <xiong.y.zhang@intel.com>,
Zhenyu Wang <zhenyuw@linux.intel.com>,
Like Xu <like.xu.linux@gmail.com>,
Jinrong Liang <cloudliang@tencent.com>,
Dapeng Mi <dapeng1.mi@intel.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>
Subject: [kvm-unit-tests Patch v4 10/17] x86: pmu: Use macro to replace hard-coded instructions event index
Date: Fri, 19 Apr 2024 11:52:26 +0800 [thread overview]
Message-ID: <20240419035233.3837621-11-dapeng1.mi@linux.intel.com> (raw)
In-Reply-To: <20240419035233.3837621-1-dapeng1.mi@linux.intel.com>
Replace the hard-coded instructions event index with a macro. This avoids
a possible mismatch if a new event is added in the future and shifts the
instructions event index while someone forgets to update the hard-coded
index.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
x86/pmu.c | 34 +++++++++++++++++++++++++++-------
1 file changed, 27 insertions(+), 7 deletions(-)
diff --git a/x86/pmu.c b/x86/pmu.c
index 6ae46398d84b..20bc6de9c936 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -54,6 +54,7 @@ struct pmu_event {
* intel_gp_events[].
*/
enum {
+ INTEL_INSTRUCTIONS_IDX = 1,
INTEL_REF_CYCLES_IDX = 2,
INTEL_BRANCHES_IDX = 5,
};
@@ -63,6 +64,7 @@ enum {
* amd_gp_events[].
*/
enum {
+ AMD_INSTRUCTIONS_IDX = 1,
AMD_BRANCHES_IDX = 2,
};
@@ -317,11 +319,16 @@ static uint64_t measure_for_overflow(pmu_counter_t *cnt)
static void check_counter_overflow(void)
{
- uint64_t overflow_preset;
int i;
+ uint64_t overflow_preset;
+ int instruction_idx = pmu.is_intel ?
+ INTEL_INSTRUCTIONS_IDX :
+ AMD_INSTRUCTIONS_IDX;
+
pmu_counter_t cnt = {
.ctr = MSR_GP_COUNTERx(0),
- .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
+ .config = EVNTSEL_OS | EVNTSEL_USR |
+ gp_events[instruction_idx].unit_sel /* instructions */,
};
overflow_preset = measure_for_overflow(&cnt);
@@ -377,13 +384,18 @@ static void check_counter_overflow(void)
static void check_gp_counter_cmask(void)
{
+ int instruction_idx = pmu.is_intel ?
+ INTEL_INSTRUCTIONS_IDX :
+ AMD_INSTRUCTIONS_IDX;
+
pmu_counter_t cnt = {
.ctr = MSR_GP_COUNTERx(0),
- .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel /* instructions */,
+ .config = EVNTSEL_OS | EVNTSEL_USR |
+ gp_events[instruction_idx].unit_sel /* instructions */,
};
cnt.config |= (0x2 << EVNTSEL_CMASK_SHIFT);
measure_one(&cnt);
- report(cnt.count < gp_events[1].min, "cmask");
+ report(cnt.count < gp_events[instruction_idx].min, "cmask");
}
static void do_rdpmc_fast(void *ptr)
@@ -458,9 +470,14 @@ static void check_running_counter_wrmsr(void)
{
uint64_t status;
uint64_t count;
+ unsigned int instruction_idx = pmu.is_intel ?
+ INTEL_INSTRUCTIONS_IDX :
+ AMD_INSTRUCTIONS_IDX;
+
pmu_counter_t evt = {
.ctr = MSR_GP_COUNTERx(0),
- .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel,
+ .config = EVNTSEL_OS | EVNTSEL_USR |
+ gp_events[instruction_idx].unit_sel,
};
report_prefix_push("running counter wrmsr");
@@ -469,7 +486,7 @@ static void check_running_counter_wrmsr(void)
loop();
wrmsr(MSR_GP_COUNTERx(0), 0);
stop_event(&evt);
- report(evt.count < gp_events[1].min, "cntr");
+ report(evt.count < gp_events[instruction_idx].min, "cntr");
/* clear status before overflow test */
if (this_cpu_has_perf_global_status())
@@ -500,6 +517,9 @@ static void check_emulated_instr(void)
uint64_t gp_counter_width = (1ull << pmu.gp_counter_width) - 1;
unsigned int branch_idx = pmu.is_intel ?
INTEL_BRANCHES_IDX : AMD_BRANCHES_IDX;
+ unsigned int instruction_idx = pmu.is_intel ?
+ INTEL_INSTRUCTIONS_IDX :
+ AMD_INSTRUCTIONS_IDX;
pmu_counter_t brnch_cnt = {
.ctr = MSR_GP_COUNTERx(0),
/* branch instructions */
@@ -508,7 +528,7 @@ static void check_emulated_instr(void)
pmu_counter_t instr_cnt = {
.ctr = MSR_GP_COUNTERx(1),
/* instructions */
- .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[1].unit_sel,
+ .config = EVNTSEL_OS | EVNTSEL_USR | gp_events[instruction_idx].unit_sel,
};
report_prefix_push("emulated instruction");
--
2.34.1
2024-04-19 3:52 [kvm-unit-tests Patch v4 00/17] pmu test bugs fix and improvements Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 01/17] x86: pmu: Remove duplicate code in pmu_init() Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 02/17] x86: pmu: Remove blank line and redundant space Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 03/17] x86: pmu: Refine fixed_events[] names Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 04/17] x86: pmu: Fix the issue that pmu_counter_t.config crosses cache line Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 05/17] x86: pmu: Enlarge cnt[] length to 48 in check_counters_many() Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 06/17] x86: pmu: Add asserts to warn inconsistent fixed events and counters Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 07/17] x86: pmu: Fix cycles event validation failure Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 08/17] x86: pmu: Use macro to replace hard-coded branches event index Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 09/17] x86: pmu: Use macro to replace hard-coded ref-cycles " Dapeng Mi
2024-04-19 3:52 ` Dapeng Mi [this message]
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 11/17] x86: pmu: Enable and disable PMCs in loop() asm blob Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 12/17] x86: pmu: Improve instruction and branches events verification Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 13/17] x86: pmu: Improve LLC misses event verification Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 14/17] x86: pmu: Adjust lower boundary of llc-misses event to 0 for legacy CPUs Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 15/17] x86: pmu: Add IBPB indirect jump asm blob Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 16/17] x86: pmu: Adjust lower boundary of branch-misses event Dapeng Mi
2024-04-19 3:52 ` [kvm-unit-tests Patch v4 17/17] x86: pmu: Optimize emulated instruction validation Dapeng Mi