From: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
To: Ian Rogers <irogers@google.com>, dapeng1.mi@intel.com
Cc: acme@kernel.org, adrian.hunter@intel.com, ak@linux.intel.com,
alexander.shishkin@linux.intel.com, eranian@google.com,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
mingo@redhat.com, namhyung@kernel.org, peterz@infradead.org,
thomas.falcon@intel.com, xudong.hao@intel.com,
zide.chen@intel.com
Subject: Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
Date: Thu, 12 Mar 2026 14:43:43 +0800 [thread overview]
Message-ID: <a61eae6d-7a6d-40bd-83ec-bd4ea7657b9d@linux.intel.com> (raw)
In-Reply-To: <20260312054810.1571020-1-irogers@google.com>
On 3/12/2026 1:48 PM, Ian Rogers wrote:
> The patch:
> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
> showed it was pretty easy to accidentally cast non-x86 PMUs to
> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
> and add an is_x86_pmu to facilitate this.
>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> Only build tested.
> ---
> arch/x86/events/core.c | 16 ----------------
> arch/x86/events/perf_event.h | 19 ++++++++++++++++++-
> 2 files changed, 18 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 03ce1bc7ef2e..6c6567dc6c88 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -774,22 +774,6 @@ void x86_pmu_enable_all(int added)
> }
> }
>
> -int is_x86_event(struct perf_event *event)
> -{
> - /*
> - * For a non-hybrid platforms, the type of X86 pmu is
> - * always PERF_TYPE_RAW.
> - * For a hybrid platform, the PERF_PMU_CAP_EXTENDED_HW_TYPE
> - * is a unique capability for the X86 PMU.
> - * Use them to detect a X86 event.
> - */
> - if (event->pmu->type == PERF_TYPE_RAW ||
> - event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)
> - return true;
> -
> - return false;
> -}
> -
> struct pmu *x86_get_pmu(unsigned int cpu)
> {
> struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index fad87d3c8b2c..f1123c95d174 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -115,7 +115,23 @@ static inline bool is_topdown_event(struct perf_event *event)
> return is_metric_event(event) || is_slots_event(event);
> }
>
> -int is_x86_event(struct perf_event *event);
> +static inline bool is_x86_pmu(struct pmu *pmu)
> +{
> +	/*
> +	 * For non-hybrid platforms, the type of the x86 PMU is
> +	 * always PERF_TYPE_RAW.
> +	 * For hybrid platforms, PERF_PMU_CAP_EXTENDED_HW_TYPE is
> +	 * a capability unique to the x86 PMU.
> +	 * Use these to detect an x86 PMU.
> +	 */
> + return pmu->type == PERF_TYPE_RAW ||
> + (pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE);
> +}
> +
> +static inline bool is_x86_event(struct perf_event *event)
> +{
> + return is_x86_pmu(event->pmu);
> +}
>
> static inline bool check_leader_group(struct perf_event *leader, int flags)
> {
> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
>
> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> {
> + BUG_ON(!is_x86_pmu(pmu));
> return container_of(pmu, struct x86_hybrid_pmu, pmu);
> }
>
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>