public inbox for linux-kernel@vger.kernel.org
From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>,
	x86@kernel.org, eranian@google.com, ravi.bangoria@amd.com
Cc: linux-kernel@vger.kernel.org, acme@kernel.org,
	mark.rutland@arm.com, alexander.shishkin@linux.intel.com,
	jolsa@kernel.org, namhyung@kernel.org
Subject: Re: [PATCH v2 2/9] perf/x86/intel: Move the topdown stuff into the intel driver
Date: Wed, 31 Aug 2022 09:41:06 -0400	[thread overview]
Message-ID: <30dfae24-887d-128f-3172-d52c90c95f86@linux.intel.com> (raw)
In-Reply-To: <20220829101321.505933457@infradead.org>



On 2022-08-29 6:10 a.m., Peter Zijlstra wrote:
> Use the new x86_pmu::{set_period,update}() methods to push the topdown
> stuff into the Intel driver, where it belongs.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/x86/events/core.c       |    7 -------
>  arch/x86/events/intel/core.c |   28 +++++++++++++++++++++++++---
>  2 files changed, 25 insertions(+), 10 deletions(-)
> 
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -119,9 +119,6 @@ u64 x86_perf_event_update(struct perf_ev
>  	if (unlikely(!hwc->event_base))
>  		return 0;
>  
> -	if (unlikely(is_topdown_count(event)) && x86_pmu.update_topdown_event)
> -		return x86_pmu.update_topdown_event(event);
> -
>  	/*
>  	 * Careful: an NMI might modify the previous event value.
>  	 *
> @@ -1373,10 +1370,6 @@ int x86_perf_event_set_period(struct per
>  	if (unlikely(!hwc->event_base))
>  		return 0;
>  
> -	if (unlikely(is_topdown_count(event)) &&
> -	    x86_pmu.set_topdown_event_period)
> -		return x86_pmu.set_topdown_event_period(event);
> -
>  	/*
>  	 * If we are way outside a reasonable range then just skip forward:
>  	 */
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -2301,7 +2301,7 @@ static void intel_pmu_nhm_workaround(voi
>  	for (i = 0; i < 4; i++) {
>  		event = cpuc->events[i];
>  		if (event)
> -			x86_perf_event_update(event);
> +			static_call(x86_pmu_update)(event);
>  	}
>  
>  	for (i = 0; i < 4; i++) {
> @@ -2316,7 +2316,7 @@ static void intel_pmu_nhm_workaround(voi
>  		event = cpuc->events[i];
>  
>  		if (event) {
> -			x86_perf_event_set_period(event);
> +			static_call(x86_pmu_set_period)(event);
>  			__x86_pmu_enable_event(&event->hw,
>  					ARCH_PERFMON_EVENTSEL_ENABLE);
>  		} else
> @@ -2793,7 +2793,7 @@ static void intel_pmu_add_event(struct p
>   */
>  int intel_pmu_save_and_restart(struct perf_event *event)
>  {
> -	x86_perf_event_update(event);
> +	static_call(x86_pmu_update)(event);
>  	/*
>  	 * For a checkpointed counter always reset back to 0.  This
>  	 * avoids a situation where the counter overflows, aborts the
> @@ -2805,9 +2805,27 @@ int intel_pmu_save_and_restart(struct pe
>  		wrmsrl(event->hw.event_base, 0);
>  		local64_set(&event->hw.prev_count, 0);
>  	}
> +	return static_call(x86_pmu_set_period)(event);
> +}
> +
> +static int intel_pmu_set_period(struct perf_event *event)
> +{
> +	if (unlikely(is_topdown_count(event)) &&
> +	    x86_pmu.set_topdown_event_period)
> +		return x86_pmu.set_topdown_event_period(event);
> +
>  	return x86_perf_event_set_period(event);
>  }
>  
> +static u64 intel_pmu_update(struct perf_event *event)
> +{
> +	if (unlikely(is_topdown_count(event)) &&
> +	    x86_pmu.update_topdown_event)
> +		return x86_pmu.update_topdown_event(event);
> +
> +	return x86_perf_event_update(event);
> +}
> +
>  static void intel_pmu_reset(void)
>  {
>  	struct debug_store *ds = __this_cpu_read(cpu_hw_events.ds);
> @@ -4635,6 +4653,10 @@ static __initconst const struct x86_pmu
>  	.enable_all		= core_pmu_enable_all,
>  	.enable			= core_pmu_enable_event,
>  	.disable		= x86_pmu_disable_event,
> +
> +	.set_period		= intel_pmu_set_period,
> +	.update			= intel_pmu_update,

I tried the patch, but it breaks the topdown events.
The root cause is that these methods are added to core_pmu rather than
intel_pmu, so the topdown-aware set_period/update are never installed on
the PMU that modern machines actually use.

Thanks,
Kan
>  	.hw_config		= core_pmu_hw_config,
>  	.schedule_events	= x86_schedule_events,
>  	.eventsel		= MSR_ARCH_PERFMON_EVENTSEL0,
> 
> 


Thread overview: 25+ messages
2022-08-29 10:09 [PATCH v2 0/9] perf/x86: Some cleanups Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 1/9] perf/x86: Add two more x86_pmu methods Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 2/9] perf/x86/intel: Move the topdown stuff into the intel driver Peter Zijlstra
2022-08-31 13:41   ` Liang, Kan [this message]
2022-09-01  9:06     ` Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 3/9] perf/x86: Change x86_pmu::limit_period signature Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 4/9] perf/x86: Add a x86_pmu::limit_period static_call Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 5/9] perf/x86/intel: Remove x86_pmu::set_topdown_event_period Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 6/9] perf/x86/intel: Remove x86_pmu::update_topdown_event Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 7/9] perf/x86/p4: Remove perfctr_second_write quirk Peter Zijlstra
2022-09-09  8:52   ` [tip: perf/core] " tip-bot2 for Peter Zijlstra
2022-08-29 10:10 ` [PATCH v2 8/9] perf/x86/intel: Shadow MSR_ARCH_PERFMON_FIXED_CTR_CTRL Peter Zijlstra
2022-08-31 13:52   ` Liang, Kan
2022-09-01  9:10     ` Peter Zijlstra
2022-09-01 10:04       ` Peter Zijlstra
2022-09-01 11:37         ` Liang, Kan
2022-08-29 10:10 ` [PATCH v2 9/9] perf/x86/intel: Optimize short PEBS counters Peter Zijlstra
2022-08-29 15:55   ` Liang, Kan
2022-08-29 21:12     ` Peter Zijlstra
