public inbox for linux-perf-users@vger.kernel.org
From: Ian Rogers <irogers@google.com>
To: dapeng1.mi@intel.com, dapeng1.mi@linux.intel.com
Cc: irogers@google.com, acme@kernel.org, adrian.hunter@intel.com,
	 ak@linux.intel.com, alexander.shishkin@linux.intel.com,
	eranian@google.com,  linux-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org,  mingo@redhat.com,
	namhyung@kernel.org, peterz@infradead.org,
	 thomas.falcon@intel.com, xudong.hao@intel.com,
	zide.chen@intel.com
Subject: [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu
Date: Wed, 11 Mar 2026 22:48:10 -0700	[thread overview]
Message-ID: <20260312054810.1571020-2-irogers@google.com> (raw)
In-Reply-To: <20260312054810.1571020-1-irogers@google.com>

Use the PMU's capabilities rather than the global variable
perf_is_hybrid to determine whether a hybrid PMU has been passed to the
main accessors. As the PMU capabilities check mirrors the one in
is_x86_pmu, the BUG_ON(!is_x86_pmu...) in hybrid_pmu can be elided:
with sufficient function inlining, common sub-expression elimination,
etc., it is provably always false in its most common uses.

Signed-off-by: Ian Rogers <irogers@google.com>
---
Only build tested.
---
 arch/x86/events/perf_event.h | 52 +++++++++++++++++++-----------------
 1 file changed, 28 insertions(+), 24 deletions(-)

diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index f1123c95d174..7990d86ef233 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -802,34 +802,38 @@ static __always_inline struct x86_hybrid_pmu *hybrid_pmu(struct pmu *pmu)
 extern struct static_key_false perf_is_hybrid;
 #define is_hybrid()		static_branch_unlikely(&perf_is_hybrid)
 
-#define hybrid(_pmu, _field)				\
-(*({							\
-	typeof(&x86_pmu._field) __Fp = &x86_pmu._field;	\
-							\
-	if (is_hybrid() && (_pmu))			\
-		__Fp = &hybrid_pmu(_pmu)->_field;	\
-							\
-	__Fp;						\
+
+#define hybrid(_pmu, _field)						\
+(*({									\
+	typeof(&x86_pmu._field) __Fp = &x86_pmu._field;			\
+	struct pmu *__pmu = _pmu;					\
+									\
+	if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)	\
+		__Fp = &hybrid_pmu(__pmu)->_field;			\
+									\
+	__Fp;								\
 }))
 
-#define hybrid_var(_pmu, _var)				\
-(*({							\
-	typeof(&_var) __Fp = &_var;			\
-							\
-	if (is_hybrid() && (_pmu))			\
-		__Fp = &hybrid_pmu(_pmu)->_var;		\
-							\
-	__Fp;						\
+#define hybrid_var(_pmu, _var)						\
+(*({									\
+	typeof(&_var) __Fp = &_var;					\
+	struct pmu *__pmu = _pmu;					\
+									\
+	if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)	\
+		__Fp = &hybrid_pmu(__pmu)->_var;			\
+									\
+	__Fp;								\
 }))
 
-#define hybrid_bit(_pmu, _field)			\
-({							\
-	bool __Fp = x86_pmu._field;			\
-							\
-	if (is_hybrid() && (_pmu))			\
-		__Fp = hybrid_pmu(_pmu)->_field;	\
-							\
-	__Fp;						\
+#define hybrid_bit(_pmu, _field)					\
+({									\
+	bool __Fp = x86_pmu._field;					\
+	struct pmu *__pmu = _pmu;					\
+									\
+	if (__pmu->capabilities & PERF_PMU_CAP_EXTENDED_HW_TYPE)	\
+		__Fp = hybrid_pmu(__pmu)->_field;			\
+									\
+	__Fp;								\
 })
 
 /*
-- 
2.53.0.851.ga537e3e6e9-goog



Thread overview: 18+ messages
2026-03-11  7:52 [PATCH 1/2] perf/x86/intel: Fix OMR snoop information parsing issues Dapeng Mi
2026-03-11  7:52 ` [PATCH 2/2] perf/x86: Update cap_user_rdpmc base on rdpmc user disable state Dapeng Mi
2026-03-12  4:44   ` Ian Rogers
2026-03-12  5:04     ` Ian Rogers
2026-03-12  5:48       ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Ian Rogers
2026-03-12  5:48         ` Ian Rogers [this message]
2026-03-12  6:44           ` [PATCH v1 2/2] perf/x86: Reduce is_hybrid calls and aid elision of BUG_ON in hybrid_pmu Mi, Dapeng
2026-03-12  8:40           ` Peter Zijlstra
2026-03-12 15:06             ` Ian Rogers
2026-03-12  6:43         ` [PATCH v1 1/2] perf/x86: Avoid inadvertent casts to x86_hybrid_pmu Mi, Dapeng
2026-03-12  8:25         ` Mi, Dapeng
2026-03-12  8:31         ` Peter Zijlstra
2026-03-12  9:44           ` Mi, Dapeng
2026-03-12 15:16             ` Ian Rogers
2026-03-13  0:48               ` Mi, Dapeng
2026-03-12  6:23       ` [PATCH 2/2] perf/x86: Update cap_user_rdpmc base on rdpmc user disable state Mi, Dapeng
2026-03-12  6:17     ` Mi, Dapeng
2026-03-12  4:07 ` [PATCH 1/2] perf/x86/intel: Fix OMR snoop information parsing issues Ian Rogers
