* [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu
2026-03-12 5:48 ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Ian Rogers
@ 2026-03-12 5:48 ` Ian Rogers
2026-03-12 6:44 ` Mi, Dapeng
2026-03-12 8:40 ` Peter Zijlstra
2026-03-12 6:43 ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Mi, Dapeng
` (2 subsequent siblings)
3 siblings, 2 replies; 18+ messages in thread
From: Ian Rogers @ 2026-03-12 5:48 UTC (permalink / raw)
To: dapeng1.mi, dapeng1.mi
Cc: irogers, acme, adrian.hunter, ak, alexander.shishkin, eranian,
linux-kernel, linux-perf-users, mingo, namhyung, peterz,
thomas.falcon, xudong.hao, zide.chen
Use the capabilities of the PMU rather than the global variable
perf_is_hybrid to determine if a hybrid pmu has been passed to the
main accessors. As the pmu capabilities check mirrors that in
is_x86_pmu, the BUG_ON(!is_x86_pmu...) in hybrid_pmu can be elided as
it is provably always false (with sufficient function inlining, common
sub-expression elimination, etc.) in its most common uses.
Signed-off-by: Ian Rogers <irogers@google.com>
---
Only build tested.
---
arch/x86/events/perf_event.h | 52 +++++++++++++++++++-----------------
1 file changed, 28 insertions(+), 24 deletions(-)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index f1123c95d174..7990d86ef233 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -802,34 +802,38 @@ static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
extern struct static_key_false perf_is_hybrid;
#define is_hybrid() static_branch_unlikely(&perf_is_hybrid)
-#define hybrid(_pmu, _field) \
-(*({ \
- typeof(&x86_pmu._field) __Fp = &x86_pmu._field; \
- \
- if (is_hybrid() && (_pmu)) \
- __Fp = &hybrid_pmu(_pmu)->_field; \
- \
- __Fp; \
+
+#define hybrid(_pmu, _field) \
+(*({ \
+ typeof(&x86_pmu._field) __Fp = &x86_pmu._field; \
+ struct pmu *__pmu = _pmu; \
+ \
+ if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE) \
+ __Fp = &hybrid_pmu(__pmu)->_field; \
+ \
+ __Fp; \
}))
-#define hybrid_var(_pmu, _var) \
-(*({ \
- typeof(&_var) __Fp = &_var; \
- \
- if (is_hybrid() && (_pmu)) \
- __Fp = &hybrid_pmu(_pmu)->_var; \
- \
- __Fp; \
+#define hybrid_var(_pmu, _var) \
+(*({ \
+ typeof(&_var) __Fp = &_var; \
+ struct pmu *__pmu = _pmu; \
+ \
+ if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE) \
+ __Fp = &hybrid_pmu(__pmu)->_var; \
+ \
+ __Fp; \
}))
-#define hybrid_bit(_pmu, _field) \
-({ \
- bool __Fp = x86_pmu._field; \
- \
- if (is_hybrid() && (_pmu)) \
- __Fp = hybrid_pmu(_pmu)->_field; \
- \
- __Fp; \
+#define hybrid_bit(_pmu, _field) \
+({ \
+ bool __Fp = x86_pmu._field; \
+ struct pmu *__pmu = _pmu; \
+ \
+ if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE) \
+ __Fp = hybrid_pmu(__pmu)->_field; \
+ \
+ __Fp; \
})
/*
--
2.53.0.851.ga537e3e6e9-goog
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu
2026-03-12 5:48 ` [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu Ian Rogers
@ 2026-03-12 6:44 ` Mi, Dapeng
2026-03-12 8:40 ` Peter Zijlstra
1 sibling, 0 replies; 18+ messages in thread
From: Mi, Dapeng @ 2026-03-12 6:44 UTC (permalink / raw)
To: Ian Rogers, dapeng1.mi
Cc: acme, adrian.hunter, ak, alexander.shishkin, eranian,
linux-kernel, linux-perf-users, mingo, namhyung, peterz,
thomas.falcon, xudong.hao, zide.chen
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
On 3/12/2026 1:48 PM, Ian Rogers wrote:
> Use the capabilities of the PMU rather than the global variable
> perf_is_hybrid to determine if a hybrid pmu has been passed to the
> main accessors. As the pmu capabilities check mirrors that in
> is_x86_pmu, the BUG_ON(!is_x86_pmu...) in hybrid_pmu can be elided as
> it is provably always false (with sufficient function inlining, common
> sub-expression elimination, etc.) in its most common uses.
>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> Only build tested.
> ---
> arch/x86/events/perf_event.h | 52 +++++++++++++++++++-----------------
> 1 file changed, 28 insertions(+), 24 deletions(-)
>
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index f1123c95d174..7990d86ef233 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -802,34 +802,38 @@ static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> extern struct static_key_false perf_is_hybrid;
> #define is_hybrid() static_branch_unlikely(&perf_is_hybrid)
>
> -#define hybrid(_pmu, _field) \
> -(*({ \
> - typeof(&x86_pmu._field) __Fp = &x86_pmu._field; \
> - \
> - if (is_hybrid() && (_pmu)) \
> - __Fp = &hybrid_pmu(_pmu)->_field; \
> - \
> - __Fp; \
> +
> +#define hybrid(_pmu, _field) \
> +(*({ \
> + typeof(&x86_pmu._field) __Fp = &x86_pmu._field; \
> + struct pmu *__pmu = _pmu; \
> + \
> + if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE) \
> + __Fp = &hybrid_pmu(__pmu)->_field; \
> + \
> + __Fp; \
> }))
>
> -#define hybrid_var(_pmu, _var) \
> -(*({ \
> - typeof(&_var) __Fp = &_var; \
> - \
> - if (is_hybrid() && (_pmu)) \
> - __Fp = &hybrid_pmu(_pmu)->_var; \
> - \
> - __Fp; \
> +#define hybrid_var(_pmu, _var) \
> +(*({ \
> + typeof(&_var) __Fp = &_var; \
> + struct pmu *__pmu = _pmu; \
> + \
> + if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE) \
> + __Fp = &hybrid_pmu(__pmu)->_var; \
> + \
> + __Fp; \
> }))
>
> -#define hybrid_bit(_pmu, _field) \
> -({ \
> - bool __Fp = x86_pmu._field; \
> - \
> - if (is_hybrid() && (_pmu)) \
> - __Fp = hybrid_pmu(_pmu)->_field; \
> - \
> - __Fp; \
> +#define hybrid_bit(_pmu, _field) \
> +({ \
> + bool __Fp = x86_pmu._field; \
> + struct pmu *__pmu = _pmu; \
> + \
> + if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE) \
> + __Fp = hybrid_pmu(__pmu)->_field; \
> + \
> + __Fp; \
> })
>
> /*
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu
2026-03-12 5:48 ` [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu Ian Rogers
2026-03-12 6:44 ` Mi, Dapeng
@ 2026-03-12 8:40 ` Peter Zijlstra
2026-03-12 15:06 ` Ian Rogers
1 sibling, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2026-03-12 8:40 UTC (permalink / raw)
To: Ian Rogers
Cc: dapeng1.mi, dapeng1.mi, acme, adrian.hunter, ak,
alexander.shishkin, eranian, linux-kernel, linux-perf-users,
mingo, namhyung, thomas.falcon, xudong.hao, zide.chen
On Wed, Mar 11, 2026 at 10:48:10PM -0700, Ian Rogers wrote:
> Use the capabilities of the PMU rather than the global variable
> perf_is_hybrid to determine if a hybrid pmu has been passed to the
> main accessors. As the pmu capabilities check mirrors that in
> is_x86_pmu, the BUG_ON(!is_x86_pmu...) in hybrid_pmu can be elided as
> it is provably always false (with sufficient function inlining, common
> sub-expression elimination, etc.) in its most common uses.
perf_is_hybrid is not a variable, it's a static_branch. You're adding a
runtime branch here, for no appreciable benefit.
And while (Intel) client seems flooded with this hybrid nonsense,
servers are still sane. Also AMD have uniform PMU across their regular
and compact cores and don't need this either.
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu
2026-03-12 8:40 ` Peter Zijlstra
@ 2026-03-12 15:06 ` Ian Rogers
0 siblings, 0 replies; 18+ messages in thread
From: Ian Rogers @ 2026-03-12 15:06 UTC (permalink / raw)
To: Peter Zijlstra
Cc: dapeng1.mi, dapeng1.mi, acme, adrian.hunter, ak,
alexander.shishkin, eranian, linux-kernel, linux-perf-users,
mingo, namhyung, thomas.falcon, xudong.hao, zide.chen
On Thu, Mar 12, 2026 at 1:40 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Mar 11, 2026 at 10:48:10PM -0700, Ian Rogers wrote:
> > Use the capabilities of the PMU rather than the global variable
> > perf_is_hybrid to determine if a hybrid pmu has been passed to the
> > main accessors. As the pmu capabilities check mirrors that in
> > is_x86_pmu, the BUG_ON(!is_x86_pmu...) in hybrid_pmu can be elided as
> > it is provably always false (with sufficient function inlining, common
> > sub-expression elimination, etc.) in its most common uses.
>
> perf_is_hybrid is not a variable, it's a static_branch. You're adding a
> runtime branch here, for no appreciable benefit.
>
> And while (Intel) client seems flooded with this hybrid nonsense,
> servers are still sane. Also AMD have uniform PMU across their regular
> and compact cores and don't need this either.
Agreed, and the AI review (set up similarly to how Dapeng suggested)
also agrees with you:
Additionally, does replacing the is_hybrid() static branch with a dynamic
pointer dereference and bitwise check introduce unnecessary overhead on hot
paths (like NMIs and context switches) for non-hybrid x86 CPUs?
This is missing what the next patch does: it introduces a dominating
branch, meaning the BUG_ON condition is known to be false at compile
time if reasonable compiler optimizations are enabled - possibly
better than a nop, although perhaps the static_branch makes the
initial capabilities check a nop too. Anyway, this series was
primarily meant to catch other inadvertent container_ofs and isn't
something that needs to be in production code. I just couldn't see an
easy way to do the equivalent of "ifndef NDEBUG". Given the comments
on the next patch I suggest we don't adopt the changes, but I'll add
some more feedback in that patch.
Thanks,
Ian
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
2026-03-12 5:48 ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Ian Rogers
2026-03-12 5:48 ` [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu Ian Rogers
@ 2026-03-12 6:43 ` Mi, Dapeng
2026-03-12 8:25 ` Mi, Dapeng
2026-03-12 8:31 ` Peter Zijlstra
3 siblings, 0 replies; 18+ messages in thread
From: Mi, Dapeng @ 2026-03-12 6:43 UTC (permalink / raw)
To: Ian Rogers, dapeng1.mi
Cc: acme, adrian.hunter, ak, alexander.shishkin, eranian,
linux-kernel, linux-perf-users, mingo, namhyung, peterz,
thomas.falcon, xudong.hao, zide.chen
On 3/12/2026 1:48 PM, Ian Rogers wrote:
> The patch:
> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
> showed it was pretty easy to accidentally cast non-x86 PMUs to
> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
> and add an is_x86_pmu to facilitate this.
>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> Only build tested.
> ---
> arch/x86/events/core.c | 16 ----------------
> arch/x86/events/perf_event.h | 19 ++++++++++++++++++-
> 2 files changed, 18 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 03ce1bc7ef2e..6c6567dc6c88 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -774,22 +774,6 @@ void x86_pmu_enable_all(int added)
> }
> }
>
> -int is_x86_event(struct perf_event *event)
> -{
> - /*
> - * For a non-hybrid platforms, the type of X86 pmu is
> - * always PERF_TYPE_RAW.
> - * For a hybrid platform, the PERF_PMU_CAP_EXTENDED_HW_TYPE
> - * is a unique capability for the X86 PMU.
> - * Use them to detect a X86 event.
> - */
> - if (event->pmu->type == PERF_TYPE_RAW ||
> - event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)
> - return true;
> -
> - return false;
> -}
> -
> struct pmu *x86_get_pmu(unsigned int cpu)
> {
> struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index fad87d3c8b2c..f1123c95d174 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -115,7 +115,23 @@ static inline bool is_topdown_event(struct perf_event *event)
> return is_metric_event(event) || is_slots_event(event);
> }
>
> -int is_x86_event(struct perf_event *event);
> +static inline bool is_x86_pmu(struct pmu *pmu)
> +{
> + /*
> + * For a non-hybrid platforms, the type of X86 pmu is
> + * always PERF_TYPE_RAW.
> + * For a hybrid platform, the PERF_PMU_CAP_EXTENDED_HW_TYPE
> + * is a unique capability for the X86 PMU.
> + * Use them to detect a X86 event.
> + */
> + return pmu->type == PERF_TYPE_RAW ||
> + (pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE);
> +}
> +
> +static inline bool is_x86_event(struct perf_event *event)
> +{
> + return is_x86_pmu(event->pmu);
> +}
>
> static inline bool check_leader_group(struct perf_event *leader, int flags)
> {
> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
>
> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> {
> + BUG_ON(!is_x86_pmu(pmu));
> return container_of(pmu, struct x86_hybrid_pmu, pmu);
> }
>
LGTM.
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
2026-03-12 5:48 ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Ian Rogers
2026-03-12 5:48 ` [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu Ian Rogers
2026-03-12 6:43 ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Mi, Dapeng
@ 2026-03-12 8:25 ` Mi, Dapeng
2026-03-12 8:31 ` Peter Zijlstra
3 siblings, 0 replies; 18+ messages in thread
From: Mi, Dapeng @ 2026-03-12 8:25 UTC (permalink / raw)
To: Ian Rogers, dapeng1.mi
Cc: acme, adrian.hunter, ak, alexander.shishkin, eranian,
linux-kernel, linux-perf-users, mingo, namhyung, peterz,
thomas.falcon, xudong.hao, zide.chen
On 3/12/2026 1:48 PM, Ian Rogers wrote:
> The patch:
> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
> showed it was pretty easy to accidentally cast non-x86 PMUs to
> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
> and add an is_x86_pmu to facilitate this.
>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> Only build tested.
> ---
> arch/x86/events/core.c | 16 ----------------
> arch/x86/events/perf_event.h | 19 ++++++++++++++++++-
> 2 files changed, 18 insertions(+), 17 deletions(-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 03ce1bc7ef2e..6c6567dc6c88 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -774,22 +774,6 @@ void x86_pmu_enable_all(int added)
> }
> }
>
> -int is_x86_event(struct perf_event *event)
> -{
> - /*
> - * For a non-hybrid platforms, the type of X86 pmu is
> - * always PERF_TYPE_RAW.
> - * For a hybrid platform, the PERF_PMU_CAP_EXTENDED_HW_TYPE
> - * is a unique capability for the X86 PMU.
> - * Use them to detect a X86 event.
> - */
> - if (event->pmu->type == PERF_TYPE_RAW ||
> - event->pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)
> - return true;
> -
> - return false;
> -}
> -
> struct pmu *x86_get_pmu(unsigned int cpu)
> {
> struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index fad87d3c8b2c..f1123c95d174 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -115,7 +115,23 @@ static inline bool is_topdown_event(struct perf_event *event)
> return is_metric_event(event) || is_slots_event(event);
> }
>
> -int is_x86_event(struct perf_event *event);
> +static inline bool is_x86_pmu(struct pmu *pmu)
> +{
> + /*
> + * For a non-hybrid platforms, the type of X86 pmu is
> + * always PERF_TYPE_RAW.
> + * For a hybrid platform, the PERF_PMU_CAP_EXTENDED_HW_TYPE
> + * is a unique capability for the X86 PMU.
> + * Use them to detect a X86 event.
> + */
> + return pmu->type == PERF_TYPE_RAW ||
> + (pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE);
> +}
> +
> +static inline bool is_x86_event(struct perf_event *event)
> +{
> + return is_x86_pmu(event->pmu);
> +}
>
> static inline bool check_leader_group(struct perf_event *leader, int flags)
> {
> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
>
> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> {
> + BUG_ON(!is_x86_pmu(pmu));
With this change, I see a kernel crash triggered on NVL; I still haven't
figured out which exact place triggers it. I'll debug it later. Thanks.
> return container_of(pmu, struct x86_hybrid_pmu, pmu);
> }
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
2026-03-12 5:48 ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Ian Rogers
` (2 preceding siblings ...)
2026-03-12 8:25 ` Mi, Dapeng
@ 2026-03-12 8:31 ` Peter Zijlstra
2026-03-12 9:44 ` Mi, Dapeng
3 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2026-03-12 8:31 UTC (permalink / raw)
To: Ian Rogers
Cc: dapeng1.mi, dapeng1.mi, acme, adrian.hunter, ak,
alexander.shishkin, eranian, linux-kernel, linux-perf-users,
mingo, namhyung, thomas.falcon, xudong.hao, zide.chen
On Wed, Mar 11, 2026 at 10:48:09PM -0700, Ian Rogers wrote:
> The patch:
> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
> showed it was pretty easy to accidentally cast non-x86 PMUs to
> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
> and add an is_x86_pmu to facilitate this.
>
> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
>
> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> {
> + BUG_ON(!is_x86_pmu(pmu));
> return container_of(pmu, struct x86_hybrid_pmu, pmu);
> }
Given that hybrid_pmu will have PERF_PMU_CAP_EXTENDED_HW_TYPE, and we
should really only use hybrid_pmu() on one of those, would not the
simpler patch be so?
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad87d3c8b2c..13ec623617a9 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -779,6 +779,7 @@ struct x86_hybrid_pmu {
static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
{
+ BUG_ON(!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE));
return container_of(pmu, struct x86_hybrid_pmu, pmu);
}
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
2026-03-12 8:31 ` Peter Zijlstra
@ 2026-03-12 9:44 ` Mi, Dapeng
2026-03-12 15:16 ` Ian Rogers
0 siblings, 1 reply; 18+ messages in thread
From: Mi, Dapeng @ 2026-03-12 9:44 UTC (permalink / raw)
To: Peter Zijlstra, Ian Rogers
Cc: dapeng1.mi, acme, adrian.hunter, ak, alexander.shishkin, eranian,
linux-kernel, linux-perf-users, mingo, namhyung, thomas.falcon,
xudong.hao, zide.chen
On 3/12/2026 4:31 PM, Peter Zijlstra wrote:
> On Wed, Mar 11, 2026 at 10:48:09PM -0700, Ian Rogers wrote:
>> The patch:
>> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
>> showed it was pretty easy to accidentally cast non-x86 PMUs to
>> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
>> and add an is_x86_pmu to facilitate this.
>>
>> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
>>
>> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
>> {
>> + BUG_ON(!is_x86_pmu(pmu));
>> return container_of(pmu, struct x86_hybrid_pmu, pmu);
>> }
> Given that hybrid_pmu will have PERF_PMU_CAP_EXTENDED_HW_TYPE, and we
> should really only use hybrid_pmu() on one of those, would not the
> simpler patch be so?
>
>
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index fad87d3c8b2c..13ec623617a9 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -779,6 +779,7 @@ struct x86_hybrid_pmu {
>
> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> {
> + BUG_ON(!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE));
It looks like we can't add either !is_x86_pmu(pmu) or !(pmu->capabilities &
PERF_PMU_CAP_EXTENDED_HW_TYPE) here. hybrid_pmu() is called by the hybrid()
macro or other variants, and the hybrid() macro is called in many places of
intel_pmu_init(), like update_pmu_cap(), but the flag
PERF_PMU_CAP_EXTENDED_HW_TYPE is still not set for the hybrid
pmu->capabilities until intel_pmu_init() ends and the hybrid pmus are
registered. This then causes an unexpected kernel crash:
[ 1.945128] kernel BUG at arch/x86/events/intel/../perf_event.h:798!
[ 1.946131] Oops: invalid opcode: 0000 [#1] SMP NOPTI
[ 1.947127] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted
7.0.0-rc3-perf-urgent-gc8b4b538960c #460 PREEMPT(full)
[ 1.947127] Hardware name: Intel Corporation Panther Lake Client
Platform/PTL-UH LP5 T3 RVP1, BIOS PTLPFWI1.R00.3171.D00.2504220409 04/22/2025
[ 1.947127] RIP: 0010:intel_pmu_init+0x25c9/0x5fd0
[ 1.947127] Code: db 44 ff 4c 89 35 c7 da 44 ff 48 89 2d 80 da 44 ff e9
49 df ff ff 83 7a 68 04 0f 84 1b f9 ff ff f6 42 6d 01 0f 85 11 f9 ff ff
<0f> 0b 31 d2 48 89 df
[ 1.947127] RSP: 0000:ffffd5dc800f7db8 EFLAGS: 00010246
[ 1.947127] RAX: 0000000000000001 RBX: 00000000000abfff RCX:
0000000000000000
[ 1.947127] RDX: ffff8f40856bc000 RSI: 0000000000000001 RDI:
00000000000000ff
[ 1.947127] RBP: 0000000000000001 R08: ffffffffffffffff R09:
0000000000000004
[ 1.947127] R10: ffffffffbd4e2500 R11: 0000000000000006 R12:
ffffffffbc26438b
[ 1.947127] R13: 0000000000000000 R14: 0000000000000000 R15:
0000000000000000
[ 1.947127] FS: 0000000000000000(0000) GS:ffff8f482214f000(0000)
knlGS:0000000000000000
[ 1.947127] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.947127] CR2: ffff8f47ff7ff000 CR3: 00000004c1434001 CR4:
0000000000f70ef0
[ 1.947127] PKRU: 55555554
[ 1.947127] Call Trace:
[ 1.947127] <TASK>
[ 1.947127] ? __pfx_init_hw_perf_events+0x10/0x10
[ 1.947127] init_hw_perf_events+0x2af/0x4b0
[ 1.947127] ? __pfx_init_hw_perf_events+0x10/0x10
[ 1.947127] do_one_initcall+0x52/0x250
[ 1.947127] ? _raw_spin_unlock+0x18/0x40
[ 1.947127] ? __register_sysctl_table+0x143/0x1a0
[ 1.947127] kernel_init_freeable+0x21d/0x340
[ 1.947127] ? __pfx_kernel_init+0x10/0x10
[ 1.947127] kernel_init+0x1a/0x1c0
[ 1.947127] ret_from_fork+0xcb/0x1c0
[ 1.947127] ? __pfx_kernel_init+0x10/0x10
[ 1.947127] ret_from_fork_asm+0x1a/0x30
[ 1.947127] </TASK>
[ 1.947127] Modules linked in:
[ 1.947127] ---[ end trace 0000000000000000 ]---
[ 1.948128] RIP: 0010:intel_pmu_init+0x25c9/0x5fd0
[ 1.949128] Code: db 44 ff 4c 89 35 c7 da 44 ff 48 89 2d 80 da 44 ff e9
49 df ff ff 83 7a 68 04 0f 84 1b f9 ff ff f6 42 6d 01 0f 85 11 f9 ff ff
<0f> 0b 31 d2 48 89 df
[ 1.950129] RSP: 0000:ffffd5dc800f7db8 EFLAGS: 00010246
[ 1.951128] RAX: 0000000000000001 RBX: 00000000000abfff RCX:
0000000000000000
[ 1.952128] RDX: ffff8f40856bc000 RSI: 0000000000000001 RDI:
00000000000000ff
[ 1.953128] RBP: 0000000000000001 R08: ffffffffffffffff R09:
0000000000000004
[ 1.954129] R10: ffffffffbd4e2500 R11: 0000000000000006 R12:
ffffffffbc26438b
[ 1.955128] R13: 0000000000000000 R14: 0000000000000000 R15:
0000000000000000
[ 1.956128] FS: 0000000000000000(0000) GS:ffff8f482214f000(0000)
knlGS:0000000000000000
[ 1.957128] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.958128] CR2: ffff8f47ff7ff000 CR3: 00000004c1434001 CR4:
0000000000f70ef0
[ 1.959128] PKRU: 55555554
[ 1.960128] Kernel panic - not syncing: Attempted to kill init!
exitcode=0x0000000b
I'm not sure we can move the setting of the PERF_PMU_CAP_EXTENDED_HW_TYPE
flag earlier and find a good place to set it. Even if it's possible, it
could be risky ...
Ian, if you don't object, I would suggest dropping the BUG_ON(). I would
adopt the other changes and add an is_x86_pmu() check in
x86_pmu_has_rdpmc_user_disable() to fix the issue.
Thanks.
> return container_of(pmu, struct x86_hybrid_pmu, pmu);
> }
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
2026-03-12 9:44 ` Mi, Dapeng
@ 2026-03-12 15:16 ` Ian Rogers
2026-03-13 0:48 ` Mi, Dapeng
0 siblings, 1 reply; 18+ messages in thread
From: Ian Rogers @ 2026-03-12 15:16 UTC (permalink / raw)
To: Mi, Dapeng
Cc: Peter Zijlstra, dapeng1.mi, acme, adrian.hunter, ak,
alexander.shishkin, eranian, linux-kernel, linux-perf-users,
mingo, namhyung, thomas.falcon, xudong.hao, zide.chen
On Thu, Mar 12, 2026 at 2:44 AM Mi, Dapeng <dapeng1.mi@linux.intel.com> wrote:
>
>
> On 3/12/2026 4:31 PM, Peter Zijlstra wrote:
> > On Wed, Mar 11, 2026 at 10:48:09PM -0700, Ian Rogers wrote:
> >> The patch:
> >> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
> >> showed it was pretty easy to accidentally cast non-x86 PMUs to
> >> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
> >> and add an is_x86_pmu to facilitate this.
> >>
> >> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
> >>
> >> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> >> {
> >> + BUG_ON(!is_x86_pmu(pmu));
> >> return container_of(pmu, struct x86_hybrid_pmu, pmu);
> >> }
> > Given that hybrid_pmu will have PERF_PMU_CAP_EXTENDED_HW_TYPE, and we
> > should really only use hybrid_pmu() on one of those, would not the
> > simpler patch be so?
> >
> >
> > diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> > index fad87d3c8b2c..13ec623617a9 100644
> > --- a/arch/x86/events/perf_event.h
> > +++ b/arch/x86/events/perf_event.h
> > @@ -779,6 +779,7 @@ struct x86_hybrid_pmu {
> >
> > static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
> > {
> > + BUG_ON(!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE));
>
>> It looks like we can't add either !is_x86_pmu(pmu) or !(pmu->capabilities &
> PERF_PMU_CAP_EXTENDED_HW_TYPE) here. hybrid_pmu() is called by the hybrid()
>> macro or other variants, and the hybrid() macro is called in many places of
>> intel_pmu_init(), like update_pmu_cap(), but the flag
> PERF_PMU_CAP_EXTENDED_HW_TYPE is still not set for the hybrid
> pmu->capabilities until intel_pmu_init() ends and the hybrid pmus are
> registered. Then it would cause the unexpected kernel crash.
>
> [ 1.945128] kernel BUG at arch/x86/events/intel/../perf_event.h:798!
> [ 1.946131] Oops: invalid opcode: 0000 [#1] SMP NOPTI
> [ 1.947127] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted
> 7.0.0-rc3-perf-urgent-gc8b4b538960c #460 PREEMPT(full)
> [ 1.947127] Hardware name: Intel Corporation Panther Lake Client
> Platform/PTL-UH LP5 T3 RVP1, BIOS PTLPFWI1.R00.3171.D00.2504220409 04/22/2025
> [ 1.947127] RIP: 0010:intel_pmu_init+0x25c9/0x5fd0
> [ 1.947127] Code: db 44 ff 4c 89 35 c7 da 44 ff 48 89 2d 80 da 44 ff e9
> 49 df ff ff 83 7a 68 04 0f 84 1b f9 ff ff f6 42 6d 01 0f 85 11 f9 ff ff
> <0f> 0b 31 d2 48 89 df
> [ 1.947127] RSP: 0000:ffffd5dc800f7db8 EFLAGS: 00010246
> [ 1.947127] RAX: 0000000000000001 RBX: 00000000000abfff RCX:
> 0000000000000000
> [ 1.947127] RDX: ffff8f40856bc000 RSI: 0000000000000001 RDI:
> 00000000000000ff
> [ 1.947127] RBP: 0000000000000001 R08: ffffffffffffffff R09:
> 0000000000000004
> [ 1.947127] R10: ffffffffbd4e2500 R11: 0000000000000006 R12:
> ffffffffbc26438b
> [ 1.947127] R13: 0000000000000000 R14: 0000000000000000 R15:
> 0000000000000000
> [ 1.947127] FS: 0000000000000000(0000) GS:ffff8f482214f000(0000)
> knlGS:0000000000000000
> [ 1.947127] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 1.947127] CR2: ffff8f47ff7ff000 CR3: 00000004c1434001 CR4:
> 0000000000f70ef0
> [ 1.947127] PKRU: 55555554
> [ 1.947127] Call Trace:
> [ 1.947127] <TASK>
> [ 1.947127] ? __pfx_init_hw_perf_events+0x10/0x10
> [ 1.947127] init_hw_perf_events+0x2af/0x4b0
> [ 1.947127] ? __pfx_init_hw_perf_events+0x10/0x10
> [ 1.947127] do_one_initcall+0x52/0x250
> [ 1.947127] ? _raw_spin_unlock+0x18/0x40
> [ 1.947127] ? __register_sysctl_table+0x143/0x1a0
> [ 1.947127] kernel_init_freeable+0x21d/0x340
> [ 1.947127] ? __pfx_kernel_init+0x10/0x10
> [ 1.947127] kernel_init+0x1a/0x1c0
> [ 1.947127] ret_from_fork+0xcb/0x1c0
> [ 1.947127] ? __pfx_kernel_init+0x10/0x10
> [ 1.947127] ret_from_fork_asm+0x1a/0x30
> [ 1.947127] </TASK>
> [ 1.947127] Modules linked in:
> [ 1.947127] ---[ end trace 0000000000000000 ]---
> [ 1.948128] RIP: 0010:intel_pmu_init+0x25c9/0x5fd0
> [ 1.949128] Code: db 44 ff 4c 89 35 c7 da 44 ff 48 89 2d 80 da 44 ff e9
> 49 df ff ff 83 7a 68 04 0f 84 1b f9 ff ff f6 42 6d 01 0f 85 11 f9 ff ff
> <0f> 0b 31 d2 48 89 df
> [ 1.950129] RSP: 0000:ffffd5dc800f7db8 EFLAGS: 00010246
> [ 1.951128] RAX: 0000000000000001 RBX: 00000000000abfff RCX:
> 0000000000000000
> [ 1.952128] RDX: ffff8f40856bc000 RSI: 0000000000000001 RDI:
> 00000000000000ff
> [ 1.953128] RBP: 0000000000000001 R08: ffffffffffffffff R09:
> 0000000000000004
> [ 1.954129] R10: ffffffffbd4e2500 R11: 0000000000000006 R12:
> ffffffffbc26438b
> [ 1.955128] R13: 0000000000000000 R14: 0000000000000000 R15:
> 0000000000000000
> [ 1.956128] FS: 0000000000000000(0000) GS:ffff8f482214f000(0000)
> knlGS:0000000000000000
> [ 1.957128] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 1.958128] CR2: ffff8f47ff7ff000 CR3: 00000004c1434001 CR4:
> 0000000000f70ef0
> [ 1.959128] PKRU: 55555554
> [ 1.960128] Kernel panic - not syncing: Attempted to kill init!
> exitcode=0x0000000b
>
>> I'm not sure we can move the setting of the PERF_PMU_CAP_EXTENDED_HW_TYPE
>> flag earlier and find a good place to set it. Even if it's possible, it
>> could be risky ...
>
>> Ian, if you don't object, I would suggest dropping the BUG_ON(). I would
>> adopt the other changes and add an is_x86_pmu() check in
>> x86_pmu_has_rdpmc_user_disable() to fix the issue.
No objections from me; these patches aim to improve the code's typing
and were intended more as a suggestion easily expressed in code than
by email :-)
I feel that the x86_pmu and x86_hybrid_pmu are something of a mess,
making features like counter partitioning harder than they would be if
the code were structured better - i.e. 1 partition per PMU, which means
multiple PMUs even without hybrid. It'd be nice if we could implement
something like the BUG_ON to at least ensure correctness in the
current code. I'll try to find other broken casts/container_ofs in the
code by visual inspection.
Thanks,
Ian
> Thanks.
>
> > return container_of(pmu, struct x86_hybrid_pmu, pmu);
> > }
> >
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu
2026-03-12 15:16 ` Ian Rogers
@ 2026-03-13 0:48 ` Mi, Dapeng
0 siblings, 0 replies; 18+ messages in thread
From: Mi, Dapeng @ 2026-03-13 0:48 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, dapeng1.mi, acme, adrian.hunter, ak,
alexander.shishkin, eranian, linux-kernel, linux-perf-users,
mingo, namhyung, thomas.falcon, xudong.hao, zide.chen
On 3/12/2026 11:16 PM, Ian Rogers wrote:
> On Thu, Mar 12, 2026 at 2:44 AM Mi, Dapeng <dapeng1.mi@linux.intel.com> wrote:
>>
>> On 3/12/2026 4:31 PM, Peter Zijlstra wrote:
>>> On Wed, Mar 11, 2026 at 10:48:09PM -0700, Ian Rogers wrote:
>>>> The patch:
>>>> https://lore.kernel.org/lkml/20260311075201.2951073-2-dapeng1.mi@linux.intel.com/
>>>> showed it was pretty easy to accidentally cast non-x86 PMUs to
>>>> x86_hybrid_pmus. Add a BUG_ON for that case. Restructure is_x86_event
>>>> and add an is_x86_pmu to facilitate this.
>>>>
>>>> @@ -779,6 +795,7 @@ struct x86_hybrid_pmu {
>>>>
>>>> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
>>>> {
>>>> + BUG_ON(!is_x86_pmu(pmu));
>>>> return container_of(pmu, struct x86_hybrid_pmu, pmu);
>>>> }
>>> Given that hybrid_pmu will have PERF_PMU_CAP_EXTENDED_HW_TYPE, and we
>>> should really only use hybrid_pmu() on one of those, would not the
>>> simpler patch be so?
>>>
>>>
>>> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
>>> index fad87d3c8b2c..13ec623617a9 100644
>>> --- a/arch/x86/events/perf_event.h
>>> +++ b/arch/x86/events/perf_event.h
>>> @@ -779,6 +779,7 @@ struct x86_hybrid_pmu {
>>>
>>> static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
>>> {
>>> + BUG_ON(!(pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE));
>> It looks like we can't add either !is_x86_pmu(pmu) or !(pmu->capabilities &
>> PERF_PMU_CAP_EXTENDED_HW_TYPE) here. hybrid_pmu() is called by the hybrid()
>> macro (and its variants), and the hybrid() macro is called in many places in
>> intel_pmu_init(), like update_pmu_cap(), but PERF_PMU_CAP_EXTENDED_HW_TYPE
>> is still not set in the hybrid pmu->capabilities until intel_pmu_init()
>> finishes and the hybrid pmus are registered. So it causes an unexpected
>> kernel crash.
>>
>> [ 1.945128] kernel BUG at arch/x86/events/intel/../perf_event.h:798!
>> [ 1.946131] Oops: invalid opcode: 0000 [#1] SMP NOPTI
>> [ 1.947127] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted
>> 7.0.0-rc3-perf-urgent-gc8b4b538960c #460 PREEMPT(full)
>> [ 1.947127] Hardware name: Intel Corporation Panther Lake Client
>> Platform/PTL-UH LP5 T3 RVP1, BIOS PTLPFWI1.R00.3171.D00.2504220409 04/22/2025
>> [ 1.947127] RIP: 0010:intel_pmu_init+0x25c9/0x5fd0
>> [ 1.947127] Code: db 44 ff 4c 89 35 c7 da 44 ff 48 89 2d 80 da 44 ff e9
>> 49 df ff ff 83 7a 68 04 0f 84 1b f9 ff ff f6 42 6d 01 0f 85 11 f9 ff ff
>> <0f> 0b 31 d2 48 89 df
>> [ 1.947127] RSP: 0000:ffffd5dc800f7db8 EFLAGS: 00010246
>> [ 1.947127] RAX: 0000000000000001 RBX: 00000000000abfff RCX:
>> 0000000000000000
>> [ 1.947127] RDX: ffff8f40856bc000 RSI: 0000000000000001 RDI:
>> 00000000000000ff
>> [ 1.947127] RBP: 0000000000000001 R08: ffffffffffffffff R09:
>> 0000000000000004
>> [ 1.947127] R10: ffffffffbd4e2500 R11: 0000000000000006 R12:
>> ffffffffbc26438b
>> [ 1.947127] R13: 0000000000000000 R14: 0000000000000000 R15:
>> 0000000000000000
>> [ 1.947127] FS: 0000000000000000(0000) GS:ffff8f482214f000(0000)
>> knlGS:0000000000000000
>> [ 1.947127] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 1.947127] CR2: ffff8f47ff7ff000 CR3: 00000004c1434001 CR4:
>> 0000000000f70ef0
>> [ 1.947127] PKRU: 55555554
>> [ 1.947127] Call Trace:
>> [ 1.947127] <TASK>
>> [ 1.947127] ? __pfx_init_hw_perf_events+0x10/0x10
>> [ 1.947127] init_hw_perf_events+0x2af/0x4b0
>> [ 1.947127] ? __pfx_init_hw_perf_events+0x10/0x10
>> [ 1.947127] do_one_initcall+0x52/0x250
>> [ 1.947127] ? _raw_spin_unlock+0x18/0x40
>> [ 1.947127] ? __register_sysctl_table+0x143/0x1a0
>> [ 1.947127] kernel_init_freeable+0x21d/0x340
>> [ 1.947127] ? __pfx_kernel_init+0x10/0x10
>> [ 1.947127] kernel_init+0x1a/0x1c0
>> [ 1.947127] ret_from_fork+0xcb/0x1c0
>> [ 1.947127] ? __pfx_kernel_init+0x10/0x10
>> [ 1.947127] ret_from_fork_asm+0x1a/0x30
>> [ 1.947127] </TASK>
>> [ 1.947127] Modules linked in:
>> [ 1.947127] ---[ end trace 0000000000000000 ]---
>> [ 1.948128] RIP: 0010:intel_pmu_init+0x25c9/0x5fd0
>> [ 1.949128] Code: db 44 ff 4c 89 35 c7 da 44 ff 48 89 2d 80 da 44 ff e9
>> 49 df ff ff 83 7a 68 04 0f 84 1b f9 ff ff f6 42 6d 01 0f 85 11 f9 ff ff
>> <0f> 0b 31 d2 48 89 df
>> [ 1.950129] RSP: 0000:ffffd5dc800f7db8 EFLAGS: 00010246
>> [ 1.951128] RAX: 0000000000000001 RBX: 00000000000abfff RCX:
>> 0000000000000000
>> [ 1.952128] RDX: ffff8f40856bc000 RSI: 0000000000000001 RDI:
>> 00000000000000ff
>> [ 1.953128] RBP: 0000000000000001 R08: ffffffffffffffff R09:
>> 0000000000000004
>> [ 1.954129] R10: ffffffffbd4e2500 R11: 0000000000000006 R12:
>> ffffffffbc26438b
>> [ 1.955128] R13: 0000000000000000 R14: 0000000000000000 R15:
>> 0000000000000000
>> [ 1.956128] FS: 0000000000000000(0000) GS:ffff8f482214f000(0000)
>> knlGS:0000000000000000
>> [ 1.957128] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 1.958128] CR2: ffff8f47ff7ff000 CR3: 00000004c1434001 CR4:
>> 0000000000f70ef0
>> [ 1.959128] PKRU: 55555554
>> [ 1.960128] Kernel panic - not syncing: Attempted to kill init!
>> exitcode=0x0000000b
>>
>> I'm not sure we can move the setting of PERF_PMU_CAP_EXTENDED_HW_TYPE
>> earlier and find a good place for it. Even if it's possible, it could be
>> risky ...
>>
>> Ian, if you don't object, I would suggest dropping the BUG_ON(). I would
>> adopt the other changes and add the is_x86_pmu() check in
>> x86_pmu_has_rdpmc_user_disable() to fix the issue.
> No objections from me; these patches aim to improve the code's typing
> and were intended more as a suggestion more easily expressed in code
> than by email :-)
Got it.
>
> I feel that the x86_pmu and x86_hybrid_pmu are something of a mess,
> making features like counter partitioning harder than they would be if
> the code were structured better - i.e. 1 partition per PMU, which means
> multiple PMUs even without hybrid. It'd be nice if we could implement
> something like the BUG_ON to at least ensure correctness in the
> current code. I'll try to find other broken casts/container_ofs in the
> code by visual inspection.
Agreed. intel_pmu_init() has indeed become overly large and complicated as
more and more platforms are supported. I've wanted to do some optimization
on it, but have always been interrupted by other higher-priority tasks.
Anyway, I'll put it on my todo list and think about how we could restructure it.
Thanks.
>
> Thanks,
> Ian
>
>> Thanks.
>>
>>> return container_of(pmu, struct x86_hybrid_pmu, pmu);
>>> }
>>>