* [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases
@ 2024-03-06 23:01 Sean Christopherson
From: Sean Christopherson @ 2024-03-06 23:01 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, Sean Christopherson, Like Xu, Mingwei Zhang, Zhenyu Wang,
Zhang Xiong, Lv Zhiyuan, Dapeng Mi
One fix for a bug where pmu_pebs attempts to enable PEBS on fixed counters
even on CPUs without Extended PEBS, and two new testcases to verify
adaptive PEBS functionality.
The new testcases are intended both to demonstrate that adaptive PEBS
virtualization is currently broken, and to serve as a gatekeeper for
re-enabling adaptive PEBS in the future.
https://lore.kernel.org/all/ZeepGjHCeSfadANM@google.com
Sean Christopherson (4):
x86/pmu: Enable PEBS on fixed counters iff baseline PEBS is supported
x86/pmu: Iterate over adaptive PEBS flag combinations
x86/pmu: Test adaptive PEBS without any adaptive counters
x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the
guest
lib/x86/pmu.h | 6 ++-
x86/pmu_pebs.c | 109 +++++++++++++++++++++++++++----------------------
2 files changed, 66 insertions(+), 49 deletions(-)
base-commit: 55dd01b4f066577b49f03fb7146723c4a65822c4
--
2.44.0.278.ge034bb2e1d-goog
^ permalink raw reply	[flat|nested] 15+ messages in thread

* [kvm-unit-tests PATCH 1/4] x86/pmu: Enable PEBS on fixed counters iff baseline PEBS is supported
From: Sean Christopherson @ 2024-03-06 23:01 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, Sean Christopherson, Like Xu, Mingwei Zhang, Zhenyu Wang,
	Zhang Xiong, Lv Zhiyuan, Dapeng Mi

Enabling PEBS for fixed counters is only allowed if "Extended PEBS" is
supported, and unfortunately "Extended PEBS" is bundled with a pile of
other PEBS features under the "Baseline" umbrella.  KVM emulates this
correctly and disallows enabling PEBS on fixed counters if the vCPU
doesn't have Baseline PEBS support.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 x86/pmu_pebs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
index f7b52b90..050617cd 100644
--- a/x86/pmu_pebs.c
+++ b/x86/pmu_pebs.c
@@ -356,13 +356,13 @@ static void check_pebs_counters(u64 pebs_data_cfg)
 	unsigned int idx;
 	u64 bitmask = 0;
 
-	for (idx = 0; idx < pmu.nr_fixed_counters; idx++)
+	for (idx = 0; has_baseline && idx < pmu.nr_fixed_counters; idx++)
 		check_one_counter(FIXED, idx, pebs_data_cfg);
 
 	for (idx = 0; idx < max_nr_gp_events; idx++)
 		check_one_counter(GP, idx, pebs_data_cfg);
 
-	for (idx = 0; idx < pmu.nr_fixed_counters; idx++)
+	for (idx = 0; has_baseline && idx < pmu.nr_fixed_counters; idx++)
 		bitmask |= BIT_ULL(FIXED_CNT_INDEX + idx);
 	for (idx = 0; idx < max_nr_gp_events; idx += 2)
 		bitmask |= BIT_ULL(idx);
-- 
2.44.0.278.ge034bb2e1d-goog
* Re: [kvm-unit-tests PATCH 1/4] x86/pmu: Enable PEBS on fixed counters iff baseline PEBS is supported
From: Mi, Dapeng @ 2024-03-07  9:22 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
	Dapeng Mi

On 3/7/2024 7:01 AM, Sean Christopherson wrote:
> Enabling PEBS for fixed counters is only allowed if "Extended PEBS" is
> supported, and unfortunately "Extended PEBS" is bundled with a pile of
> other PEBS features under the "Baseline" umbrella.  KVM emulates this
> correctly and disallows enabling PEBS on fixed counters if the vCPU
> doesn't have Baseline PEBS support.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> [...]

Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
* [kvm-unit-tests PATCH 2/4] x86/pmu: Iterate over adaptive PEBS flag combinations
From: Sean Christopherson @ 2024-03-06 23:01 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, Sean Christopherson, Like Xu, Mingwei Zhang, Zhenyu Wang,
	Zhang Xiong, Lv Zhiyuan, Dapeng Mi

Iterate over all possible combinations of adaptive PEBS flags, instead
of simply testing each flag individually.  There are currently only 16
possible combinations, i.e. there's no reason not to exhaustively test
every one.

Opportunistically rename PEBS_DATACFG_GP to PEBS_DATACFG_GPRS to
differentiate it from general purpose *counters*, which KVM also tends
to abbreviate as "GP".

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 lib/x86/pmu.h  |  6 +++++-
 x86/pmu_pebs.c | 20 +++++++++-----------
 2 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/lib/x86/pmu.h b/lib/x86/pmu.h
index 8465e3c9..f07fbd93 100644
--- a/lib/x86/pmu.h
+++ b/lib/x86/pmu.h
@@ -44,9 +44,13 @@
 #define GLOBAL_STATUS_BUFFER_OVF	BIT_ULL(GLOBAL_STATUS_BUFFER_OVF_BIT)
 
 #define PEBS_DATACFG_MEMINFO	BIT_ULL(0)
-#define PEBS_DATACFG_GP	BIT_ULL(1)
+#define PEBS_DATACFG_GPRS	BIT_ULL(1)
 #define PEBS_DATACFG_XMMS	BIT_ULL(2)
 #define PEBS_DATACFG_LBRS	BIT_ULL(3)
+#define PEBS_DATACFG_MASK	(PEBS_DATACFG_MEMINFO | \
+				 PEBS_DATACFG_GPRS | \
+				 PEBS_DATACFG_XMMS | \
+				 PEBS_DATACFG_LBRS)
 
 #define ICL_EVENTSEL_ADAPTIVE	(1ULL << 34)
 #define PEBS_DATACFG_LBR_SHIFT	24
diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
index 050617cd..dff1ed26 100644
--- a/x86/pmu_pebs.c
+++ b/x86/pmu_pebs.c
@@ -78,13 +78,6 @@ static uint32_t intel_arch_events[] = {
 	0x412e, /* PERF_COUNT_HW_CACHE_MISSES */
 };
 
-static u64 pebs_data_cfgs[] = {
-	PEBS_DATACFG_MEMINFO,
-	PEBS_DATACFG_GP,
-	PEBS_DATACFG_XMMS,
-	PEBS_DATACFG_LBRS | ((MAX_NUM_LBR_ENTRY -1) << PEBS_DATACFG_LBR_SHIFT),
-};
-
 /* Iterating each counter value is a waste of time, pick a few typical values. */
 static u64 counter_start_values[] = {
 	/* if PEBS counter doesn't overflow at all */
@@ -105,7 +98,7 @@ static unsigned int get_adaptive_pebs_record_size(u64 pebs_data_cfg)
 	if (pebs_data_cfg & PEBS_DATACFG_MEMINFO)
 		sz += sizeof(struct pebs_meminfo);
-	if (pebs_data_cfg & PEBS_DATACFG_GP)
+	if (pebs_data_cfg & PEBS_DATACFG_GPRS)
 		sz += sizeof(struct pebs_gprs);
 	if (pebs_data_cfg & PEBS_DATACFG_XMMS)
 		sz += sizeof(struct pebs_xmm);
@@ -419,9 +412,14 @@ int main(int ac, char **av)
 		if (!has_baseline)
 			continue;
 
-		for (j = 0; j < ARRAY_SIZE(pebs_data_cfgs); j++) {
-			report_prefix_pushf("Adaptive (0x%lx)", pebs_data_cfgs[j]);
-			check_pebs_counters(pebs_data_cfgs[j]);
+		for (j = 0; j <= PEBS_DATACFG_MASK; j++) {
+			u64 pebs_data_cfg = j;
+
+			if (pebs_data_cfg & PEBS_DATACFG_LBRS)
+				pebs_data_cfg |= ((MAX_NUM_LBR_ENTRY -1) << PEBS_DATACFG_LBR_SHIFT);
+
+			report_prefix_pushf("Adaptive (0x%lx)", pebs_data_cfg);
+			check_pebs_counters(pebs_data_cfg);
 			report_prefix_pop();
 		}
 	}
-- 
2.44.0.278.ge034bb2e1d-goog
* [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters
From: Sean Christopherson @ 2024-03-06 23:01 UTC (permalink / raw)
To: Paolo Bonzini
Cc: kvm, Sean Christopherson, Like Xu, Mingwei Zhang, Zhenyu Wang,
	Zhang Xiong, Lv Zhiyuan, Dapeng Mi

If adaptive PEBS is supported, verify that only basic PEBS records are
generated for counters without their ADAPTIVE flag set.  Per the SDM,
adaptive records are generated if and only if both the per-counter flag
*and* the global enable(s) in MSR_PEBS_DATA_CFG are set.

  IA32_PERFEVTSELx.Adaptive_Record[34]: If this bit is set and
  IA32_PEBS_ENABLE.PEBS_EN_PMCx is set for the corresponding GP counter,
  an overflow of PMCx results in generation of an adaptive PEBS record
  with state information based on the selections made in MSR_PEBS_DATA_CFG.
  If this bit is not set, a basic record is generated.

and

  IA32_FIXED_CTR_CTRL.FCx_Adaptive_Record: If this bit is set and
  IA32_PEBS_ENABLE.PEBS_EN_FIXEDx is set for the corresponding Fixed
  counter, an overflow of FixedCtrx results in generation of an adaptive
  PEBS record with state information based on the selections made in
  MSR_PEBS_DATA_CFG.  If this bit is not set, a basic record is generated.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 x86/pmu_pebs.c | 74 ++++++++++++++++++++++++++------------------------
 1 file changed, 38 insertions(+), 36 deletions(-)

diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c
index dff1ed26..0e8d60c3 100644
--- a/x86/pmu_pebs.c
+++ b/x86/pmu_pebs.c
@@ -89,11 +89,11 @@ static u64 counter_start_values[] = {
 	0xffffffffffff,
 };
 
-static unsigned int get_adaptive_pebs_record_size(u64 pebs_data_cfg)
+static unsigned int get_pebs_record_size(u64 pebs_data_cfg, bool use_adaptive)
 {
 	unsigned int sz = sizeof(struct pebs_basic);
 
-	if (!has_baseline)
+	if (!use_adaptive)
 		return sz;
 
 	if (pebs_data_cfg & PEBS_DATACFG_MEMINFO)
@@ -199,10 +199,10 @@ static void free_buffers(void)
 	free_page(pebs_buffer);
 }
 
-static void pebs_enable(u64 bitmask, u64 pebs_data_cfg)
+static void pebs_enable(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive)
 {
 	static struct debug_store *ds;
-	u64 baseline_extra_ctrl = 0, fixed_ctr_ctrl = 0;
+	u64 adaptive_ctrl = 0, fixed_ctr_ctrl = 0;
 	unsigned int idx;
 
 	if (has_baseline)
@@ -212,15 +212,15 @@ static void pebs_enable(u64 bitmask, u64 pebs_data_cfg)
 	ds->pebs_index = ds->pebs_buffer_base = (unsigned long)pebs_buffer;
 	ds->pebs_absolute_maximum = (unsigned long)pebs_buffer + PAGE_SIZE;
 	ds->pebs_interrupt_threshold = ds->pebs_buffer_base +
-		get_adaptive_pebs_record_size(pebs_data_cfg);
+		get_pebs_record_size(pebs_data_cfg, use_adaptive);
 
 	for (idx = 0; idx < pmu.nr_fixed_counters; idx++) {
 		if (!(BIT_ULL(FIXED_CNT_INDEX + idx) & bitmask))
 			continue;
-		if (has_baseline)
-			baseline_extra_ctrl = BIT(FIXED_CNT_INDEX + idx * 4);
+		if (use_adaptive)
+			adaptive_ctrl = BIT(FIXED_CNT_INDEX + idx * 4);
 		wrmsr(MSR_PERF_FIXED_CTRx(idx), ctr_start_val);
-		fixed_ctr_ctrl |= (0xbULL << (idx * 4) | baseline_extra_ctrl);
+		fixed_ctr_ctrl |= (0xbULL << (idx * 4) | adaptive_ctrl);
 	}
 	if (fixed_ctr_ctrl)
 		wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, fixed_ctr_ctrl);
@@ -228,10 +228,10 @@ static void pebs_enable(u64 bitmask, u64 pebs_data_cfg)
 	for (idx = 0; idx < max_nr_gp_events; idx++) {
 		if (!(BIT_ULL(idx) & bitmask))
 			continue;
-		if (has_baseline)
-			baseline_extra_ctrl = ICL_EVENTSEL_ADAPTIVE;
+		if (use_adaptive)
+			adaptive_ctrl = ICL_EVENTSEL_ADAPTIVE;
 		wrmsr(MSR_GP_EVENT_SELECTx(idx), EVNTSEL_EN | EVNTSEL_OS | EVNTSEL_USR |
-					intel_arch_events[idx] | baseline_extra_ctrl);
+					intel_arch_events[idx] | adaptive_ctrl);
 		wrmsr(MSR_GP_COUNTERx(idx), ctr_start_val);
 	}
 
@@ -268,11 +268,11 @@ static void pebs_disable(unsigned int idx)
 	wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 }
 
-static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg)
+static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive)
 {
 	struct pebs_basic *pebs_rec = (struct pebs_basic *)pebs_buffer;
 	struct debug_store *ds = (struct debug_store *)ds_bufer;
-	unsigned int pebs_record_size = get_adaptive_pebs_record_size(pebs_data_cfg);
+	unsigned int pebs_record_size;
 	unsigned int count = 0;
 	bool expected, pebs_idx_match, pebs_size_match, data_cfg_match;
 	void *cur_record;
@@ -293,12 +293,9 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg)
 	do {
 		pebs_rec = (struct pebs_basic *)cur_record;
 		pebs_record_size = pebs_rec->format_size >> RECORD_SIZE_OFFSET;
-		pebs_idx_match =
-			pebs_rec->applicable_counters & bitmask;
-		pebs_size_match =
-			pebs_record_size == get_adaptive_pebs_record_size(pebs_data_cfg);
-		data_cfg_match =
-			(pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg;
+		pebs_idx_match = pebs_rec->applicable_counters & bitmask;
+		pebs_size_match = pebs_record_size == get_pebs_record_size(pebs_data_cfg, use_adaptive);
+		data_cfg_match = (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg;
 		expected = pebs_idx_match && pebs_size_match && data_cfg_match;
 		report(expected,
 		       "PEBS record (written seq %d) is verified (including size, counters and cfg).", count);
@@ -311,56 +308,57 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg)
 			printf("FAIL: The applicable_counters (0x%lx) doesn't match with pmc_bitmask (0x%lx).\n",
 			       pebs_rec->applicable_counters, bitmask);
 		if (!pebs_size_match)
-			printf("FAIL: The pebs_record_size (%d) doesn't match with MSR_PEBS_DATA_CFG (%d).\n",
-			       pebs_record_size, get_adaptive_pebs_record_size(pebs_data_cfg));
+			printf("FAIL: The pebs_record_size (%d) doesn't match with expected record size (%d).\n",
+			       pebs_record_size, get_pebs_record_size(pebs_data_cfg, use_adaptive));
 		if (!data_cfg_match)
-			printf("FAIL: The pebs_data_cfg (0x%lx) doesn't match with MSR_PEBS_DATA_CFG (0x%lx).\n",
-			       pebs_rec->format_size & 0xffffffffffff, pebs_data_cfg);
+			printf("FAIL: The pebs_data_cfg (0x%lx) doesn't match with the effective MSR_PEBS_DATA_CFG (0x%lx).\n",
+			       pebs_rec->format_size & 0xffffffffffff, use_adaptive ? pebs_data_cfg : 0);
 	}
 }
 
-static void check_one_counter(enum pmc_type type,
-			      unsigned int idx, u64 pebs_data_cfg)
+static void check_one_counter(enum pmc_type type, unsigned int idx,
+			      u64 pebs_data_cfg, bool use_adaptive)
 {
 	int pebs_bit = BIT_ULL(type == FIXED ? FIXED_CNT_INDEX + idx : idx);
 
 	report_prefix_pushf("%s counter %d (0x%lx)",
 			    type == FIXED ? "Extended Fixed" : "GP", idx, ctr_start_val);
 	reset_pebs();
-	pebs_enable(pebs_bit, pebs_data_cfg);
+	pebs_enable(pebs_bit, pebs_data_cfg, use_adaptive);
 	workload();
 	pebs_disable(idx);
-	check_pebs_records(pebs_bit, pebs_data_cfg);
+	check_pebs_records(pebs_bit, pebs_data_cfg, use_adaptive);
 	report_prefix_pop();
 }
 
 /* more than one PEBS records will be generated. */
-static void check_multiple_counters(u64 bitmask, u64 pebs_data_cfg)
+static void check_multiple_counters(u64 bitmask, u64 pebs_data_cfg,
+				    bool use_adaptive)
 {
 	reset_pebs();
-	pebs_enable(bitmask, pebs_data_cfg);
+	pebs_enable(bitmask, pebs_data_cfg, use_adaptive);
 	workload2();
 	pebs_disable(0);
-	check_pebs_records(bitmask, pebs_data_cfg);
+	check_pebs_records(bitmask, pebs_data_cfg, use_adaptive);
 }
 
-static void check_pebs_counters(u64 pebs_data_cfg)
+static void check_pebs_counters(u64 pebs_data_cfg, bool use_adaptive)
 {
 	unsigned int idx;
 	u64 bitmask = 0;
 
 	for (idx = 0; has_baseline && idx < pmu.nr_fixed_counters; idx++)
-		check_one_counter(FIXED, idx, pebs_data_cfg);
+		check_one_counter(FIXED, idx, pebs_data_cfg, use_adaptive);
 
 	for (idx = 0; idx < max_nr_gp_events; idx++)
-		check_one_counter(GP, idx, pebs_data_cfg);
+		check_one_counter(GP, idx, pebs_data_cfg, use_adaptive);
 
 	for (idx = 0; has_baseline && idx < pmu.nr_fixed_counters; idx++)
 		bitmask |= BIT_ULL(FIXED_CNT_INDEX + idx);
 	for (idx = 0; idx < max_nr_gp_events; idx += 2)
 		bitmask |= BIT_ULL(idx);
 	report_prefix_pushf("Multiple (0x%lx)", bitmask);
-	check_multiple_counters(bitmask, pebs_data_cfg);
+	check_multiple_counters(bitmask, pebs_data_cfg, use_adaptive);
 	report_prefix_pop();
 }
 
@@ -408,7 +406,7 @@ int main(int ac, char **av)
 
 	for (i = 0; i < ARRAY_SIZE(counter_start_values); i++) {
 		ctr_start_val = counter_start_values[i];
-		check_pebs_counters(0);
+		check_pebs_counters(0, false);
 		if (!has_baseline)
 			continue;
 
@@ -419,7 +417,11 @@ int main(int ac, char **av)
 			pebs_data_cfg |= ((MAX_NUM_LBR_ENTRY -1) << PEBS_DATACFG_LBR_SHIFT);
 
 		report_prefix_pushf("Adaptive (0x%lx)", pebs_data_cfg);
-		check_pebs_counters(pebs_data_cfg);
+		check_pebs_counters(pebs_data_cfg, true);
+		report_prefix_pop();
+
+		report_prefix_pushf("Ignored Adaptive (0x%lx)", pebs_data_cfg);
+		check_pebs_counters(pebs_data_cfg, false);
 		report_prefix_pop();
 	}
 }
-- 
2.44.0.278.ge034bb2e1d-goog
* Re: [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters
From: Like Xu @ 2024-03-07  9:08 UTC (permalink / raw)
To: Sean Christopherson
Cc: kvm, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
	Dapeng Mi, Paolo Bonzini

On 7/3/2024 7:01 am, Sean Christopherson wrote:
> If adaptive PEBS is supported, verify that only basic PEBS records are
> generated for counters without their ADAPTIVE flag set.  Per the SDM,
> adaptive records are generated if and only if both the per-counter flag
> *and* the global enable(s) in MSR_PEBS_DATA_CFG are set.
>
> [...]
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>

This test case is awesome.  Also, I checked that this part of the
behaviour has not changed since the first version of the specification
description.

Reviewed-by: Like Xu <likexu@tencent.com>

> [...]
* Re: [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters
From: Mi, Dapeng @ 2024-03-07  9:28 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini
Cc: kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan,
	Dapeng Mi

On 3/7/2024 7:01 AM, Sean Christopherson wrote:
> If adaptive PEBS is supported, verify that only basic PEBS records are
> generated for counters without their ADAPTIVE flag set.  Per the SDM,
> adaptive records are generated if and only if both the per-counter flag
> *and* the global enable(s) in MSR_PEBS_DATA_CFG are set.
>
> [...]
>
> @@ -293,12 +293,9 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg)
>  	do {
>  		pebs_rec = (struct pebs_basic *)cur_record;
>  		pebs_record_size = pebs_rec->format_size >> RECORD_SIZE_OFFSET;
> -		pebs_idx_match =
> -			pebs_rec->applicable_counters & bitmask;
> -		pebs_size_match =
> -			pebs_record_size == get_adaptive_pebs_record_size(pebs_data_cfg);
> -		data_cfg_match =
> -			(pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg;
> +		pebs_idx_match = pebs_rec->applicable_counters & bitmask;
> +		pebs_size_match = pebs_record_size == get_pebs_record_size(pebs_data_cfg, use_adaptive);
> +		data_cfg_match = (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg;

Since there is already a macro RECORD_SIZE_OFFSET, we'd better use
"RECORD_SIZE_OFFSET - 1" to replace the magic number 47.

> [...]
false); > report_prefix_pop(); > } > } Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters 2024-03-07 9:28 ` Mi, Dapeng @ 2024-06-05 16:17 ` Sean Christopherson 0 siblings, 0 replies; 15+ messages in thread From: Sean Christopherson @ 2024-06-05 16:17 UTC (permalink / raw) To: Dapeng Mi Cc: Paolo Bonzini, kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi On Thu, Mar 07, 2024, Dapeng Mi wrote: > On 3/7/2024 7:01 AM, Sean Christopherson wrote: > > @@ -293,12 +293,9 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg) > > do { > > pebs_rec = (struct pebs_basic *)cur_record; > > pebs_record_size = pebs_rec->format_size >> RECORD_SIZE_OFFSET; > > - pebs_idx_match = > > - pebs_rec->applicable_counters & bitmask; > > - pebs_size_match = > > - pebs_record_size == get_adaptive_pebs_record_size(pebs_data_cfg); > > - data_cfg_match = > > - (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg; > > + pebs_idx_match = pebs_rec->applicable_counters & bitmask; > > + pebs_size_match = pebs_record_size == get_pebs_record_size(pebs_data_cfg, use_adaptive); > > + data_cfg_match = (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg; > > Since there is already a macro RECORD_SIZE_OFFSET, we'd better use > "RECORD_SIZE_OFFSET - 1" to replace the magic number 47. Very belatedly, I disagree. That the data configuration mask isn't derived from record size. The fact that the record size is the _only_ info that's excluded is coincidental (sort of, that's not quite the right word), i.e. if we want to use a #define, then we should add an explicit define, not abuse RECORD_SIZE_OFFSET. ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters 2024-03-06 23:01 ` [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters Sean Christopherson 2024-03-07 9:08 ` Like Xu 2024-03-07 9:28 ` Mi, Dapeng @ 2024-03-07 10:00 ` Mi, Dapeng 2 siblings, 0 replies; 15+ messages in thread From: Mi, Dapeng @ 2024-03-07 10:00 UTC (permalink / raw) To: Sean Christopherson, Paolo Bonzini Cc: kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi On 3/7/2024 7:01 AM, Sean Christopherson wrote: > If adaptive PEBS is supported, verify that only basic PEBS records are > generated for counters without their ADAPTIVE flag set. Per the SDM, > adaptive records are generated if and only if both the per-counter flag > *and* the global enable(s) in MSR_PEBS_DATA_CFG are set. > > IA32_PERFEVTSELx.Adaptive_Record[34]: If this bit is set and > IA32_PEBS_ENABLE.PEBS_EN_PMCx is set for the corresponding GP counter, > an overflow of PMCx results in generation of an adaptive PEBS record > with state information based on the selections made in MSR_PEBS_DATA_CFG. > If this bit is not set, a basic record is generated. > > and > > IA32_FIXED_CTR_CTRL.FCx_Adaptive_Record: If this bit is set and > IA32_PEBS_ENABLE.PEBS_EN_FIXEDx is set for the corresponding Fixed > counter, an overflow of FixedCtrx results in generation of an adaptive > PEBS record with state information based on the selections made in > MSR_PEBS_DATA_CFG. If this bit is not set, a basic record is generated. 
> > Signed-off-by: Sean Christopherson <seanjc@google.com> > --- > x86/pmu_pebs.c | 74 ++++++++++++++++++++++++++------------------------ > 1 file changed, 38 insertions(+), 36 deletions(-) > > diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c > index dff1ed26..0e8d60c3 100644 > --- a/x86/pmu_pebs.c > +++ b/x86/pmu_pebs.c > @@ -89,11 +89,11 @@ static u64 counter_start_values[] = { > 0xffffffffffff, > }; > > -static unsigned int get_adaptive_pebs_record_size(u64 pebs_data_cfg) > +static unsigned int get_pebs_record_size(u64 pebs_data_cfg, bool use_adaptive) > { > unsigned int sz = sizeof(struct pebs_basic); > > - if (!has_baseline) > + if (!use_adaptive) > return sz; > > if (pebs_data_cfg & PEBS_DATACFG_MEMINFO) > @@ -199,10 +199,10 @@ static void free_buffers(void) > free_page(pebs_buffer); > } > > -static void pebs_enable(u64 bitmask, u64 pebs_data_cfg) > +static void pebs_enable(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive) > { > static struct debug_store *ds; > - u64 baseline_extra_ctrl = 0, fixed_ctr_ctrl = 0; > + u64 adaptive_ctrl = 0, fixed_ctr_ctrl = 0; > unsigned int idx; > > if (has_baseline) > @@ -212,15 +212,15 @@ static void pebs_enable(u64 bitmask, u64 pebs_data_cfg) > ds->pebs_index = ds->pebs_buffer_base = (unsigned long)pebs_buffer; > ds->pebs_absolute_maximum = (unsigned long)pebs_buffer + PAGE_SIZE; > ds->pebs_interrupt_threshold = ds->pebs_buffer_base + > - get_adaptive_pebs_record_size(pebs_data_cfg); > + get_pebs_record_size(pebs_data_cfg, use_adaptive); > > for (idx = 0; idx < pmu.nr_fixed_counters; idx++) { > if (!(BIT_ULL(FIXED_CNT_INDEX + idx) & bitmask)) > continue; > - if (has_baseline) > - baseline_extra_ctrl = BIT(FIXED_CNT_INDEX + idx * 4); > + if (use_adaptive) > + adaptive_ctrl = BIT(FIXED_CNT_INDEX + idx * 4); > wrmsr(MSR_PERF_FIXED_CTRx(idx), ctr_start_val); > - fixed_ctr_ctrl |= (0xbULL << (idx * 4) | baseline_extra_ctrl); > + fixed_ctr_ctrl |= (0xbULL << (idx * 4) | adaptive_ctrl); > } > if (fixed_ctr_ctrl) > 
wrmsr(MSR_CORE_PERF_FIXED_CTR_CTRL, fixed_ctr_ctrl); > @@ -228,10 +228,10 @@ static void pebs_enable(u64 bitmask, u64 pebs_data_cfg) > for (idx = 0; idx < max_nr_gp_events; idx++) { > if (!(BIT_ULL(idx) & bitmask)) > continue; > - if (has_baseline) > - baseline_extra_ctrl = ICL_EVENTSEL_ADAPTIVE; > + if (use_adaptive) > + adaptive_ctrl = ICL_EVENTSEL_ADAPTIVE; > wrmsr(MSR_GP_EVENT_SELECTx(idx), EVNTSEL_EN | EVNTSEL_OS | EVNTSEL_USR | > - intel_arch_events[idx] | baseline_extra_ctrl); > + intel_arch_events[idx] | adaptive_ctrl); > wrmsr(MSR_GP_COUNTERx(idx), ctr_start_val); > } > > @@ -268,11 +268,11 @@ static void pebs_disable(unsigned int idx) > wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0); > } > > -static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg) > +static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive) > { > struct pebs_basic *pebs_rec = (struct pebs_basic *)pebs_buffer; > struct debug_store *ds = (struct debug_store *)ds_bufer; > - unsigned int pebs_record_size = get_adaptive_pebs_record_size(pebs_data_cfg); > + unsigned int pebs_record_size; > unsigned int count = 0; > bool expected, pebs_idx_match, pebs_size_match, data_cfg_match; > void *cur_record; > @@ -293,12 +293,9 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg) > do { > pebs_rec = (struct pebs_basic *)cur_record; > pebs_record_size = pebs_rec->format_size >> RECORD_SIZE_OFFSET; > - pebs_idx_match = > - pebs_rec->applicable_counters & bitmask; > - pebs_size_match = > - pebs_record_size == get_adaptive_pebs_record_size(pebs_data_cfg); > - data_cfg_match = > - (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg; > + pebs_idx_match = pebs_rec->applicable_counters & bitmask; > + pebs_size_match = pebs_record_size == get_pebs_record_size(pebs_data_cfg, use_adaptive); > + data_cfg_match = (pebs_rec->format_size & GENMASK_ULL(47, 0)) == pebs_data_cfg; > expected = pebs_idx_match && pebs_size_match && data_cfg_match; > report(expected, > "PEBS record 
(written seq %d) is verified (including size, counters and cfg).", count); > @@ -311,56 +308,57 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg) > printf("FAIL: The applicable_counters (0x%lx) doesn't match with pmc_bitmask (0x%lx).\n", > pebs_rec->applicable_counters, bitmask); > if (!pebs_size_match) > - printf("FAIL: The pebs_record_size (%d) doesn't match with MSR_PEBS_DATA_CFG (%d).\n", > - pebs_record_size, get_adaptive_pebs_record_size(pebs_data_cfg)); > + printf("FAIL: The pebs_record_size (%d) doesn't match with expected record size (%d).\n", > + pebs_record_size, get_pebs_record_size(pebs_data_cfg, use_adaptive)); > if (!data_cfg_match) > - printf("FAIL: The pebs_data_cfg (0x%lx) doesn't match with MSR_PEBS_DATA_CFG (0x%lx).\n", > - pebs_rec->format_size & 0xffffffffffff, pebs_data_cfg); > + printf("FAIL: The pebs_data_cfg (0x%lx) doesn't match with the effective MSR_PEBS_DATA_CFG (0x%lx).\n", > + pebs_rec->format_size & 0xffffffffffff, use_adaptive ? pebs_data_cfg : 0); > } > } > > -static void check_one_counter(enum pmc_type type, > - unsigned int idx, u64 pebs_data_cfg) > +static void check_one_counter(enum pmc_type type, unsigned int idx, > + u64 pebs_data_cfg, bool use_adaptive) > { > int pebs_bit = BIT_ULL(type == FIXED ? FIXED_CNT_INDEX + idx : idx); > > report_prefix_pushf("%s counter %d (0x%lx)", > type == FIXED ? "Extended Fixed" : "GP", idx, ctr_start_val); > reset_pebs(); > - pebs_enable(pebs_bit, pebs_data_cfg); > + pebs_enable(pebs_bit, pebs_data_cfg, use_adaptive); > workload(); > pebs_disable(idx); > - check_pebs_records(pebs_bit, pebs_data_cfg); > + check_pebs_records(pebs_bit, pebs_data_cfg, use_adaptive); > report_prefix_pop(); > } > > /* more than one PEBS records will be generated. 
*/ > -static void check_multiple_counters(u64 bitmask, u64 pebs_data_cfg) > +static void check_multiple_counters(u64 bitmask, u64 pebs_data_cfg, > + bool use_adaptive) > { > reset_pebs(); > - pebs_enable(bitmask, pebs_data_cfg); > + pebs_enable(bitmask, pebs_data_cfg, use_adaptive); > workload2(); > pebs_disable(0); > - check_pebs_records(bitmask, pebs_data_cfg); > + check_pebs_records(bitmask, pebs_data_cfg, use_adaptive); > } > > -static void check_pebs_counters(u64 pebs_data_cfg) > +static void check_pebs_counters(u64 pebs_data_cfg, bool use_adaptive) > { > unsigned int idx; > u64 bitmask = 0; > > for (idx = 0; has_baseline && idx < pmu.nr_fixed_counters; idx++) > - check_one_counter(FIXED, idx, pebs_data_cfg); > + check_one_counter(FIXED, idx, pebs_data_cfg, use_adaptive); > > for (idx = 0; idx < max_nr_gp_events; idx++) > - check_one_counter(GP, idx, pebs_data_cfg); > + check_one_counter(GP, idx, pebs_data_cfg, use_adaptive); > > for (idx = 0; has_baseline && idx < pmu.nr_fixed_counters; idx++) > bitmask |= BIT_ULL(FIXED_CNT_INDEX + idx); > for (idx = 0; idx < max_nr_gp_events; idx += 2) > bitmask |= BIT_ULL(idx); > report_prefix_pushf("Multiple (0x%lx)", bitmask); > - check_multiple_counters(bitmask, pebs_data_cfg); > + check_multiple_counters(bitmask, pebs_data_cfg, use_adaptive); > report_prefix_pop(); > } > > @@ -408,7 +406,7 @@ int main(int ac, char **av) > > for (i = 0; i < ARRAY_SIZE(counter_start_values); i++) { > ctr_start_val = counter_start_values[i]; After introducing the "Ignored adaptive" check and expanding the pebs_data_cfg, there would be almost 1800 test case, it's not easy to read the test output and locate a certain test case, we'd better add extra print at the beginning of each start_value to distinguish them. 
> - check_pebs_counters(0); > + check_pebs_counters(0, false); > if (!has_baseline) > continue; > > @@ -419,7 +417,11 @@ int main(int ac, char **av) > pebs_data_cfg |= ((MAX_NUM_LBR_ENTRY -1) << PEBS_DATACFG_LBR_SHIFT); > > report_prefix_pushf("Adaptive (0x%lx)", pebs_data_cfg); > - check_pebs_counters(pebs_data_cfg); > + check_pebs_counters(pebs_data_cfg, true); > + report_prefix_pop(); > + > + report_prefix_pushf("Ignored Adaptive (0x%lx)", pebs_data_cfg); > + check_pebs_counters(pebs_data_cfg, false); > report_prefix_pop(); > } > } ^ permalink raw reply [flat|nested] 15+ messages in thread
* [kvm-unit-tests PATCH 4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest 2024-03-06 23:01 [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases Sean Christopherson ` (2 preceding siblings ...) 2024-03-06 23:01 ` [kvm-unit-tests PATCH 3/4] x86/pmu: Test adaptive PEBS without any adaptive counters Sean Christopherson @ 2024-03-06 23:01 ` Sean Christopherson 2024-03-07 9:23 ` Like Xu 2024-03-07 9:31 ` Mi, Dapeng 2024-03-07 9:22 ` [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases Mi, Dapeng 2024-06-05 23:20 ` Sean Christopherson 5 siblings, 2 replies; 15+ messages in thread From: Sean Christopherson @ 2024-03-06 23:01 UTC (permalink / raw) To: Paolo Bonzini Cc: kvm, Sean Christopherson, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi When using adaptive PEBS with LBR entries, verify that the LBR entries are all '0'. If KVM fails to context switch LBRs, e.g. when the guest isn't using LBRs, as is the case in the pmu_pebs test, then adaptive PEBS can be used to read the *host* LBRs as the CPU doesn't enforce the VMX MSR bitmaps when generating PEBS records, i.e. ignores KVM's interception of reads to LBR MSRs. This testcase is best exercised by simultaneously utilizing LBRs in the host, e.g. by running "perf record -b -e instructions", so that there is non-zero data in the LBR MSRs. 
Cc: Like Xu <like.xu.linux@gmail.com> Cc: Mingwei Zhang <mizhang@google.com> Cc: Zhenyu Wang <zhenyuw@linux.intel.com> Cc: Zhang Xiong <xiong.y.zhang@intel.com> Cc: Lv Zhiyuan <zhiyuan.lv@intel.com> Cc: Dapeng Mi <dapeng1.mi@intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> --- x86/pmu_pebs.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c index 0e8d60c3..df8e736f 100644 --- a/x86/pmu_pebs.c +++ b/x86/pmu_pebs.c @@ -299,6 +299,19 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive expected = pebs_idx_match && pebs_size_match && data_cfg_match; report(expected, "PEBS record (written seq %d) is verified (including size, counters and cfg).", count); + if (use_adaptive && (pebs_data_cfg & PEBS_DATACFG_LBRS)) { + unsigned int lbrs_offset = get_pebs_record_size(pebs_data_cfg & ~PEBS_DATACFG_LBRS, true); + struct lbr_entry *pebs_lbrs = cur_record + lbrs_offset; + int i; + + for (i = 0; i < MAX_NUM_LBR_ENTRY; i++) { + if (!pebs_lbrs[i].from && !pebs_lbrs[i].to) + continue; + + report_fail("PEBS LBR record %u isn't empty, got from = '%lx', to = '%lx', info = '%lx'", + i, pebs_lbrs[i].from, pebs_lbrs[i].to, pebs_lbrs[i].info); + } + } cur_record = cur_record + pebs_record_size; count++; } while (expected && (void *)cur_record < (void *)ds->pebs_index); -- 2.44.0.278.ge034bb2e1d-goog ^ permalink raw reply related [flat|nested] 15+ messages in thread
* Re: [kvm-unit-tests PATCH 4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest 2024-03-06 23:01 ` [kvm-unit-tests PATCH 4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest Sean Christopherson @ 2024-03-07 9:23 ` Like Xu 2024-03-07 9:31 ` Mi, Dapeng 1 sibling, 0 replies; 15+ messages in thread From: Like Xu @ 2024-03-07 9:23 UTC (permalink / raw) To: Sean Christopherson Cc: kvm, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi, Paolo Bonzini On 7/3/2024 7:01 am, Sean Christopherson wrote: > When using adaptive PEBS with LBR entries, verify that the LBR entries are > all '0'. If KVM fails to context switch LBRs, e.g. when the guest isn't > using LBRs, as is the case in the pmu_pebs test, then adaptive PEBS can be > used to read the *host* LBRs as the CPU doesn't enforce the VMX MSR bitmaps > when generating PEBS records, i.e. ignores KVM's interception of reads to > LBR MSRs. > > This testcase is best exercised by simultaneously utilizing LBRs in the > host, e.g. by running "perf record -b -e instructions", so that there is > non-zero data in the LBR MSRs. > > Cc: Like Xu <like.xu.linux@gmail.com> > Cc: Mingwei Zhang <mizhang@google.com> > Cc: Zhenyu Wang <zhenyuw@linux.intel.com> > Cc: Zhang Xiong <xiong.y.zhang@intel.com> > Cc: Lv Zhiyuan <zhiyuan.lv@intel.com> > Cc: Dapeng Mi <dapeng1.mi@intel.com> > Signed-off-by: Sean Christopherson <seanjc@google.com> As I recall, the conclusion of the internal discussions was "low priority." This issue has been lying on my to-do list for a long time, and finally we have a test case to hit it right in the heart. 
Reviewed-by: Like Xu <likexu@tencent.com> > --- > x86/pmu_pebs.c | 13 +++++++++++++ > 1 file changed, 13 insertions(+) > > diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c > index 0e8d60c3..df8e736f 100644 > --- a/x86/pmu_pebs.c > +++ b/x86/pmu_pebs.c > @@ -299,6 +299,19 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive > expected = pebs_idx_match && pebs_size_match && data_cfg_match; > report(expected, > "PEBS record (written seq %d) is verified (including size, counters and cfg).", count); > + if (use_adaptive && (pebs_data_cfg & PEBS_DATACFG_LBRS)) { > + unsigned int lbrs_offset = get_pebs_record_size(pebs_data_cfg & ~PEBS_DATACFG_LBRS, true); > + struct lbr_entry *pebs_lbrs = cur_record + lbrs_offset; > + int i; > + > + for (i = 0; i < MAX_NUM_LBR_ENTRY; i++) { > + if (!pebs_lbrs[i].from && !pebs_lbrs[i].to) > + continue; > + > + report_fail("PEBS LBR record %u isn't empty, got from = '%lx', to = '%lx', info = '%lx'", > + i, pebs_lbrs[i].from, pebs_lbrs[i].to, pebs_lbrs[i].info); > + } > + } > cur_record = cur_record + pebs_record_size; > count++; > } while (expected && (void *)cur_record < (void *)ds->pebs_index); ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [kvm-unit-tests PATCH 4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest 2024-03-06 23:01 ` [kvm-unit-tests PATCH 4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest Sean Christopherson 2024-03-07 9:23 ` Like Xu @ 2024-03-07 9:31 ` Mi, Dapeng 1 sibling, 0 replies; 15+ messages in thread From: Mi, Dapeng @ 2024-03-07 9:31 UTC (permalink / raw) To: Sean Christopherson, Paolo Bonzini Cc: kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi On 3/7/2024 7:01 AM, Sean Christopherson wrote: > When using adaptive PEBS with LBR entries, verify that the LBR entries are > all '0'. If KVM fails to context switch LBRs, e.g. when the guest isn't > using LBRs, as is the case in the pmu_pebs test, then adaptive PEBS can be > used to read the *host* LBRs as the CPU doesn't enforce the VMX MSR bitmaps > when generating PEBS records, i.e. ignores KVM's interception of reads to > LBR MSRs. > > This testcase is best exercised by simultaneously utilizing LBRs in the > host, e.g. by running "perf record -b -e instructions", so that there is > non-zero data in the LBR MSRs. 
> > Cc: Like Xu <like.xu.linux@gmail.com> > Cc: Mingwei Zhang <mizhang@google.com> > Cc: Zhenyu Wang <zhenyuw@linux.intel.com> > Cc: Zhang Xiong <xiong.y.zhang@intel.com> > Cc: Lv Zhiyuan <zhiyuan.lv@intel.com> > Cc: Dapeng Mi <dapeng1.mi@intel.com> > Signed-off-by: Sean Christopherson <seanjc@google.com> > --- > x86/pmu_pebs.c | 13 +++++++++++++ > 1 file changed, 13 insertions(+) > > diff --git a/x86/pmu_pebs.c b/x86/pmu_pebs.c > index 0e8d60c3..df8e736f 100644 > --- a/x86/pmu_pebs.c > +++ b/x86/pmu_pebs.c > @@ -299,6 +299,19 @@ static void check_pebs_records(u64 bitmask, u64 pebs_data_cfg, bool use_adaptive > expected = pebs_idx_match && pebs_size_match && data_cfg_match; > report(expected, > "PEBS record (written seq %d) is verified (including size, counters and cfg).", count); > + if (use_adaptive && (pebs_data_cfg & PEBS_DATACFG_LBRS)) { > + unsigned int lbrs_offset = get_pebs_record_size(pebs_data_cfg & ~PEBS_DATACFG_LBRS, true); > + struct lbr_entry *pebs_lbrs = cur_record + lbrs_offset; > + int i; > + > + for (i = 0; i < MAX_NUM_LBR_ENTRY; i++) { > + if (!pebs_lbrs[i].from && !pebs_lbrs[i].to) > + continue; > + > + report_fail("PEBS LBR record %u isn't empty, got from = '%lx', to = '%lx', info = '%lx'", > + i, pebs_lbrs[i].from, pebs_lbrs[i].to, pebs_lbrs[i].info); > + } > + } > cur_record = cur_record + pebs_record_size; > count++; > } while (expected && (void *)cur_record < (void *)ds->pebs_index); Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases 2024-03-06 23:01 [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases Sean Christopherson ` (3 preceding siblings ...) 2024-03-06 23:01 ` [kvm-unit-tests PATCH 4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest Sean Christopherson @ 2024-03-07 9:22 ` Mi, Dapeng 2024-06-05 23:20 ` Sean Christopherson 5 siblings, 0 replies; 15+ messages in thread From: Mi, Dapeng @ 2024-03-07 9:22 UTC (permalink / raw) To: Sean Christopherson, Paolo Bonzini Cc: kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi On 3/7/2024 7:01 AM, Sean Christopherson wrote: > One bug fix where pmu_pebs attempts to enable PEBS for fixed counter on > CPUs without Extended PEBS, and two new testcases to verify adaptive > PEBS functionality. > > The new testcases are intended both to demonstrate that adaptive PEBS > virtualization is currently broken, and to serve as a gatekeeper for > re-enabling adapative PEBS in the future. > > https://lore.kernel.org/all/ZeepGjHCeSfadANM@google.com > > Sean Christopherson (4): > x86/pmu: Enable PEBS on fixed counters iff baseline PEBS is support > x86/pmu: Iterate over adaptive PEBS flag combinations > x86/pmu: Test adaptive PEBS without any adaptive counters > x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the > guest > > lib/x86/pmu.h | 6 ++- > x86/pmu_pebs.c | 109 +++++++++++++++++++++++++++---------------------- > 2 files changed, 66 insertions(+), 49 deletions(-) Tested-by: Dapeng Mi <dapeng1.mi@linux.intel.com> > > base-commit: 55dd01b4f066577b49f03fb7146723c4a65822c4 ^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases 2024-03-06 23:01 [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases Sean Christopherson ` (4 preceding siblings ...) 2024-03-07 9:22 ` [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases Mi, Dapeng @ 2024-06-05 23:20 ` Sean Christopherson 2024-06-06 0:51 ` Mi, Dapeng1 5 siblings, 1 reply; 15+ messages in thread From: Sean Christopherson @ 2024-06-05 23:20 UTC (permalink / raw) To: Sean Christopherson, Paolo Bonzini Cc: kvm, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang Xiong, Lv Zhiyuan, Dapeng Mi On Wed, 06 Mar 2024 15:01:49 -0800, Sean Christopherson wrote: > One bug fix where pmu_pebs attempts to enable PEBS for fixed counter on > CPUs without Extended PEBS, and two new testcases to verify adaptive > PEBS functionality. > > The new testcases are intended both to demonstrate that adaptive PEBS > virtualization is currently broken, and to serve as a gatekeeper for > re-enabling adapative PEBS in the future. > > [...] Applied to kvm-x86 next. Dapeng, I didn't address your feedback about adding finer grained message prefixes. I'm not opposed to doing so, I'm just extremely short on cycles and want to get the fixes landed. [1/4] x86/pmu: Enable PEBS on fixed counters iff baseline PEBS is support https://github.com/kvm-x86/kvm-unit-tests/commit/79aa106cd427 [2/4] x86/pmu: Iterate over adaptive PEBS flag combinations https://github.com/kvm-x86/kvm-unit-tests/commit/fc17d5276b38 [3/4] x86/pmu: Test adaptive PEBS without any adaptive counters https://github.com/kvm-x86/kvm-unit-tests/commit/2cb2af7f53db [4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest https://github.com/kvm-x86/kvm-unit-tests/commit/8d0f574f4e4d -- https://github.com/kvm-x86/kvm-unit-tests/tree/next ^ permalink raw reply [flat|nested] 15+ messages in thread
* RE: [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases 2024-06-05 23:20 ` Sean Christopherson @ 2024-06-06 0:51 ` Mi, Dapeng1 0 siblings, 0 replies; 15+ messages in thread From: Mi, Dapeng1 @ 2024-06-06 0:51 UTC (permalink / raw) To: Sean Christopherson, Paolo Bonzini Cc: kvm@vger.kernel.org, Like Xu, Mingwei Zhang, Zhenyu Wang, Zhang, Xiong Y, Lv, Zhiyuan > -----Original Message----- > From: Sean Christopherson <seanjc@google.com> > Sent: Thursday, June 6, 2024 7:21 AM > To: Sean Christopherson <seanjc@google.com>; Paolo Bonzini > <pbonzini@redhat.com> > Cc: kvm@vger.kernel.org; Like Xu <like.xu.linux@gmail.com>; Mingwei Zhang > <mizhang@google.com>; Zhenyu Wang <zhenyuw@linux.intel.com>; Zhang, > Xiong Y <xiong.y.zhang@intel.com>; Lv, Zhiyuan <zhiyuan.lv@intel.com>; Mi, > Dapeng1 <dapeng1.mi@intel.com> > Subject: Re: [kvm-unit-tests PATCH 0/4] x86/pmu: PEBS fixes and new testcases > > On Wed, 06 Mar 2024 15:01:49 -0800, Sean Christopherson wrote: > > One bug fix where pmu_pebs attempts to enable PEBS for fixed counter > > on CPUs without Extended PEBS, and two new testcases to verify > > adaptive PEBS functionality. > > > > The new testcases are intended both to demonstrate that adaptive PEBS > > virtualization is currently broken, and to serve as a gatekeeper for > > re-enabling adapative PEBS in the future. > > > > [...] > > Applied to kvm-x86 next. Dapeng, I didn't address your feedback about adding > finer grained message prefixes. I'm not opposed to doing so, I'm just extremely > short on cycles and want to get the fixes landed. It's fine. I think I can help to add it. 
😊 > > [1/4] x86/pmu: Enable PEBS on fixed counters iff baseline PEBS is support > https://github.com/kvm-x86/kvm-unit-tests/commit/79aa106cd427 > [2/4] x86/pmu: Iterate over adaptive PEBS flag combinations > https://github.com/kvm-x86/kvm-unit-tests/commit/fc17d5276b38 > [3/4] x86/pmu: Test adaptive PEBS without any adaptive counters > https://github.com/kvm-x86/kvm-unit-tests/commit/2cb2af7f53db > [4/4] x86/pmu: Add a PEBS test to verify the host LBRs aren't leaked to the guest > https://github.com/kvm-x86/kvm-unit-tests/commit/8d0f574f4e4d > > -- > https://github.com/kvm-x86/kvm-unit-tests/tree/next ^ permalink raw reply [flat|nested] 15+ messages in thread