From: "Yan, Zheng" <zheng.z.yan@intel.com>
To: linux-kernel@vger.kernel.org
Cc: a.p.zijlstra@chello.nl, mingo@kernel.org, acme@infradead.org,
eranian@google.com, andi@firstfloor.org
Subject: Re: [PATCH V4 07/16] perf, x86: track number of events that use LBR callstack
Date: Mon, 07 Jul 2014 14:36:36 +0800 [thread overview]
Message-ID: <53BA3FF4.2010101@intel.com> (raw)
In-Reply-To: <1404714527-18603-9-git-send-email-zheng.z.yan@intel.com>
Please ignore this patch.
On 07/07/2014 02:28 PM, Yan, Zheng wrote:
> When enabling/disabling an event, check whether the event uses the LBR
> callstack feature and adjust the LBR callstack usage count accordingly.
> A later patch will use this usage count to decide whether the LBR stack
> should be saved/restored.
>
> Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
> ---
> arch/x86/kernel/cpu/perf_event_intel_lbr.c | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/arch/x86/kernel/cpu/perf_event_intel_lbr.c b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> index 9a94fff..66969cb 100644
> --- a/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> +++ b/arch/x86/kernel/cpu/perf_event_intel_lbr.c
> @@ -198,9 +198,15 @@ void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in)
> }
> }
>
> +static inline bool branch_user_callstack(unsigned br_sel)
> +{
> + return (br_sel & X86_BR_USER) && (br_sel & X86_BR_CALL_STACK);
> +}
> +
> void intel_pmu_lbr_enable(struct perf_event *event)
> {
> struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
> + struct x86_perf_task_context *task_ctx;
>
> if (!x86_pmu.lbr_nr)
> return;
> @@ -214,6 +220,10 @@ void intel_pmu_lbr_enable(struct perf_event *event)
> }
> cpuc->br_sel = event->hw.branch_reg.reg;
>
> + task_ctx = event->ctx ? event->ctx->task_ctx_data : NULL;
> + if (branch_user_callstack(cpuc->br_sel))
> + task_ctx->lbr_callstack_users++;
> +
> cpuc->lbr_users++;
> if (cpuc->lbr_users == 1)
> perf_sched_cb_enable(event->ctx->pmu);
> @@ -222,10 +232,15 @@ void intel_pmu_lbr_enable(struct perf_event *event)
> void intel_pmu_lbr_disable(struct perf_event *event)
> {
> struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
> + struct x86_perf_task_context *task_ctx;
>
> if (!x86_pmu.lbr_nr)
> return;
>
> + task_ctx = event->ctx ? event->ctx->task_ctx_data : NULL;
> + if (branch_user_callstack(cpuc->br_sel))
> + task_ctx->lbr_callstack_users--;
> +
> cpuc->lbr_users--;
> WARN_ON_ONCE(cpuc->lbr_users < 0);
>
>
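For what it's worth, task_ctx is left NULL above when the context has no
task_ctx_data, yet lbr_callstack_users is updated unconditionally. A guarded
variant could look like the sketch below; the helper name is illustrative
only and not part of the patch:

static void lbr_callstack_users_adjust(struct perf_event *event,
                                       struct cpu_hw_events *cpuc, int delta)
{
        struct x86_perf_task_context *task_ctx;

        /* Only events that request a user-space LBR call stack count. */
        if (!branch_user_callstack(cpuc->br_sel))
                return;

        /*
         * task_ctx_data may not have been allocated for this context;
         * skip the update rather than dereference a NULL pointer.
         */
        task_ctx = event->ctx ? event->ctx->task_ctx_data : NULL;
        if (task_ctx)
                task_ctx->lbr_callstack_users += delta;
}

intel_pmu_lbr_enable() would then call it with delta = 1 and
intel_pmu_lbr_disable() with delta = -1.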