* [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events
@ 2024-09-05 18:00 Andrii Nakryiko
2024-09-05 19:20 ` Liang, Kan
0 siblings, 1 reply; 6+ messages in thread
From: Andrii Nakryiko @ 2024-09-05 18:00 UTC
To: linux-perf-users, peterz, kan.liang
Cc: x86, mingo, linux-kernel, bpf, acme, kernel-team, Andrii Nakryiko,
stable
It's incorrect to assume that LBR can/should only be used with sampling
events. The BPF subsystem provides the bpf_get_branch_snapshot() BPF helper,
which expects a properly set up and activated perf event that allows the
kernel to capture LBR data.
For instance, the retsnoop tool ([0]) makes extensive use of this
functionality and sets up a perf event as follows:
struct perf_event_attr attr;
memset(&attr, 0, sizeof(attr));
attr.size = sizeof(attr);
attr.type = PERF_TYPE_HARDWARE;
attr.config = PERF_COUNT_HW_CPU_CYCLES;
attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
The commit referenced in the Fixes tag broke this setup by making the invalid
assumption that LBR is useful only for sampling events. Remove that assumption.
Note that we earlier removed a similar assumption on the AMD side of LBR
support; see [1] for details.
[0] https://github.com/anakryiko/retsnoop
[1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
Cc: stable@vger.kernel.org # 6.8+
Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
arch/x86/events/intel/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 9e519d8a810a..f82a342b8852 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
x86_pmu.pebs_aliases(event);
}
- if (needs_branch_stack(event) && is_sampling_event(event))
+ if (needs_branch_stack(event))
event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
if (branch_sample_counters(event)) {
--
2.43.5
* Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events
2024-09-05 18:00 [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events Andrii Nakryiko
@ 2024-09-05 19:20 ` Liang, Kan
2024-09-05 20:22 ` Andrii Nakryiko
From: Liang, Kan @ 2024-09-05 19:20 UTC
To: Andrii Nakryiko, linux-perf-users, peterz
Cc: x86, mingo, linux-kernel, bpf, acme, kernel-team, stable
On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
> It's incorrect to assume that LBR can/should only be used with sampling
> events. BPF subsystem provides bpf_get_branch_snapshot() BPF helper,
> which expects a properly setup and activated perf event which allows
> kernel to capture LBR data.
>
> For instance, retsnoop tool ([0]) makes an extensive use of this
> functionality and sets up perf event as follows:
>
> struct perf_event_attr attr;
>
> memset(&attr, 0, sizeof(attr));
> attr.size = sizeof(attr);
> attr.type = PERF_TYPE_HARDWARE;
> attr.config = PERF_COUNT_HW_CPU_CYCLES;
> attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
> attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
>
> Commit referenced in Fixes tag broke this setup by making invalid assumption
> that LBR is useful only for sampling events. Remove that assumption.
>
> Note, earlier we removed a similar assumption on AMD side of LBR support,
> see [1] for details.
>
> [0] https://github.com/anakryiko/retsnoop
> [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
>
> Cc: stable@vger.kernel.org # 6.8+
> Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---
> arch/x86/events/intel/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 9e519d8a810a..f82a342b8852 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
> x86_pmu.pebs_aliases(event);
> }
>
> - if (needs_branch_stack(event) && is_sampling_event(event))
> + if (needs_branch_stack(event))
> event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
Limiting the LBR to sampling events avoids unnecessary branch stack setup
for a counting event in the sample-read case. The above change would break
the sample-read case.
How about the patch below (not tested)? Is it good enough for the BPF usage?
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 0c9c2706d4ec..8d67cbda916b 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
x86_pmu.pebs_aliases(event);
}
- if (needs_branch_stack(event) && is_sampling_event(event))
- event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
+ if (needs_branch_stack(event)) {
+ /* Avoid branch stack setup for counting events in SAMPLE READ */
+ if (is_sampling_event(event) ||
+ !(event->attr.sample_type & PERF_SAMPLE_READ))
+ event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
+ }
if (branch_sample_counters(event)) {
struct perf_event *leader, *sibling;
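To make the proposed condition easier to reason about, it can be boiled down to a standalone predicate (a sketch for discussion only, not kernel code; the PERF_SAMPLE_READ value matches uapi/linux/perf_event.h):

```c
#include <stdbool.h>
#include <stdint.h>

#define PERF_SAMPLE_READ (1U << 4) /* value from uapi/linux/perf_event.h */

/* Sketch of the proposed logic: set NEEDS_BRANCH_STACK for any event that
 * requested a branch stack, except a pure counting event whose counter is
 * only read back via SAMPLE_READ from a sampling sibling's record. */
static bool wants_branch_stack(bool needs_branch_stack, bool is_sampling,
                               uint64_t sample_type)
{
    if (!needs_branch_stack)
        return false;
    return is_sampling || !(sample_type & PERF_SAMPLE_READ);
}
```

With this, a retsnoop-style counting event (branch stack requested, no SAMPLE_READ) still gets branch stack setup, while a counting event used only for SAMPLE_READ does not.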
Thanks,
Kan
>
> if (branch_sample_counters(event)) {
* Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events
2024-09-05 19:20 ` Liang, Kan
@ 2024-09-05 20:22 ` Andrii Nakryiko
2024-09-05 20:29 ` Liang, Kan
From: Andrii Nakryiko @ 2024-09-05 20:22 UTC
To: Liang, Kan
Cc: Andrii Nakryiko, linux-perf-users, peterz, x86, mingo,
linux-kernel, bpf, acme, kernel-team, stable
On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>
>
>
> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
> > It's incorrect to assume that LBR can/should only be used with sampling
> > events. BPF subsystem provides bpf_get_branch_snapshot() BPF helper,
> > which expects a properly setup and activated perf event which allows
> > kernel to capture LBR data.
> >
> > For instance, retsnoop tool ([0]) makes an extensive use of this
> > functionality and sets up perf event as follows:
> >
> > struct perf_event_attr attr;
> >
> > memset(&attr, 0, sizeof(attr));
> > attr.size = sizeof(attr);
> > attr.type = PERF_TYPE_HARDWARE;
> > attr.config = PERF_COUNT_HW_CPU_CYCLES;
> > attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
> > attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
> >
> > Commit referenced in Fixes tag broke this setup by making invalid assumption
> > that LBR is useful only for sampling events. Remove that assumption.
> >
> > Note, earlier we removed a similar assumption on AMD side of LBR support,
> > see [1] for details.
> >
> > [0] https://github.com/anakryiko/retsnoop
> > [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
> >
> > Cc: stable@vger.kernel.org # 6.8+
> > Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
> > arch/x86/events/intel/core.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> > index 9e519d8a810a..f82a342b8852 100644
> > --- a/arch/x86/events/intel/core.c
> > +++ b/arch/x86/events/intel/core.c
> > @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
> > x86_pmu.pebs_aliases(event);
> > }
> >
> > - if (needs_branch_stack(event) && is_sampling_event(event))
> > + if (needs_branch_stack(event))
> > event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>
> To limit the LBR for a sampling event is to avoid unnecessary branch
> stack setup for a counting event in the sample read. The above change
> should break the sample read case.
>
> How about the below patch (not test)? Is it good enough for the BPF usage?
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 0c9c2706d4ec..8d67cbda916b 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
> x86_pmu.pebs_aliases(event);
> }
>
> - if (needs_branch_stack(event) && is_sampling_event(event))
> - event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> + if (needs_branch_stack(event)) {
> + /* Avoid branch stack setup for counting events in SAMPLE READ */
> + if (is_sampling_event(event) ||
> + !(event->attr.sample_type & PERF_SAMPLE_READ))
> + event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> + }
>
I'm sure it will be fine for my use case, as I set only
PERF_SAMPLE_BRANCH_STACK.
But I'll leave it up to perf subsystem experts to decide if this
condition makes sense, because looking at what PERF_SAMPLE_READ is:
PERF_SAMPLE_READ
Record counter values for all events in a group,
not just the group leader.
It's not clear why specifying it would disable LBR.
> if (branch_sample_counters(event)) {
> struct perf_event *leader, *sibling;
>
>
> Thanks,
> Kan
> >
> > if (branch_sample_counters(event)) {
* Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events
2024-09-05 20:22 ` Andrii Nakryiko
@ 2024-09-05 20:29 ` Liang, Kan
2024-09-05 20:33 ` Andrii Nakryiko
From: Liang, Kan @ 2024-09-05 20:29 UTC
To: Andrii Nakryiko
Cc: Andrii Nakryiko, linux-perf-users, peterz, x86, mingo,
linux-kernel, bpf, acme, kernel-team, stable
On 2024-09-05 4:22 p.m., Andrii Nakryiko wrote:
> On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>
>>
>>
>> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
>>> It's incorrect to assume that LBR can/should only be used with sampling
>>> events. BPF subsystem provides bpf_get_branch_snapshot() BPF helper,
>>> which expects a properly setup and activated perf event which allows
>>> kernel to capture LBR data.
>>>
>>> For instance, retsnoop tool ([0]) makes an extensive use of this
>>> functionality and sets up perf event as follows:
>>>
>>> struct perf_event_attr attr;
>>>
>>> memset(&attr, 0, sizeof(attr));
>>> attr.size = sizeof(attr);
>>> attr.type = PERF_TYPE_HARDWARE;
>>> attr.config = PERF_COUNT_HW_CPU_CYCLES;
>>> attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
>>> attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
>>>
>>> Commit referenced in Fixes tag broke this setup by making invalid assumption
>>> that LBR is useful only for sampling events. Remove that assumption.
>>>
>>> Note, earlier we removed a similar assumption on AMD side of LBR support,
>>> see [1] for details.
>>>
>>> [0] https://github.com/anakryiko/retsnoop
>>> [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
>>>
>>> Cc: stable@vger.kernel.org # 6.8+
>>> Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
>>> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>>> ---
>>> arch/x86/events/intel/core.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>> index 9e519d8a810a..f82a342b8852 100644
>>> --- a/arch/x86/events/intel/core.c
>>> +++ b/arch/x86/events/intel/core.c
>>> @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>> x86_pmu.pebs_aliases(event);
>>> }
>>>
>>> - if (needs_branch_stack(event) && is_sampling_event(event))
>>> + if (needs_branch_stack(event))
>>> event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>
>> To limit the LBR for a sampling event is to avoid unnecessary branch
>> stack setup for a counting event in the sample read. The above change
>> should break the sample read case.
>>
>> How about the below patch (not test)? Is it good enough for the BPF usage?
>>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 0c9c2706d4ec..8d67cbda916b 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
>> x86_pmu.pebs_aliases(event);
>> }
>>
>> - if (needs_branch_stack(event) && is_sampling_event(event))
>> - event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>> + if (needs_branch_stack(event)) {
>> + /* Avoid branch stack setup for counting events in SAMPLE READ */
>> + if (is_sampling_event(event) ||
>> + !(event->attr.sample_type & PERF_SAMPLE_READ))
>> + event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>> + }
>>
>
> I'm sure it will be fine for my use case, as I set only
> PERF_SAMPLE_BRANCH_STACK.
>
> But I'll leave it up to perf subsystem experts to decide if this
> condition makes sense, because looking at what PERF_SAMPLE_READ is:
>
> PERF_SAMPLE_READ
> Record counter values for all events in a group,
> not just the group leader.
>
> It's not clear why this would disable LBR, if specified.
It only disables LBR for a counting event with SAMPLE_READ, since the LBR is
only read in the sampling event's overflow handler.
Thanks,
Kan
>
>> if (branch_sample_counters(event)) {
>> struct perf_event *leader, *sibling;
>>
>>
>> Thanks,
>> Kan
>>>
>>> if (branch_sample_counters(event)) {
* Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events
2024-09-05 20:29 ` Liang, Kan
@ 2024-09-05 20:33 ` Andrii Nakryiko
2024-09-09 16:02 ` Liang, Kan
From: Andrii Nakryiko @ 2024-09-05 20:33 UTC
To: Liang, Kan
Cc: Andrii Nakryiko, linux-perf-users, peterz, x86, mingo,
linux-kernel, bpf, acme, kernel-team, stable
On Thu, Sep 5, 2024 at 1:29 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>
>
>
> On 2024-09-05 4:22 p.m., Andrii Nakryiko wrote:
> > On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
> >>
> >>
> >>
> >> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
> >>> It's incorrect to assume that LBR can/should only be used with sampling
> >>> events. BPF subsystem provides bpf_get_branch_snapshot() BPF helper,
> >>> which expects a properly setup and activated perf event which allows
> >>> kernel to capture LBR data.
> >>>
> >>> For instance, retsnoop tool ([0]) makes an extensive use of this
> >>> functionality and sets up perf event as follows:
> >>>
> >>> struct perf_event_attr attr;
> >>>
> >>> memset(&attr, 0, sizeof(attr));
> >>> attr.size = sizeof(attr);
> >>> attr.type = PERF_TYPE_HARDWARE;
> >>> attr.config = PERF_COUNT_HW_CPU_CYCLES;
> >>> attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
> >>> attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
> >>>
> >>> Commit referenced in Fixes tag broke this setup by making invalid assumption
> >>> that LBR is useful only for sampling events. Remove that assumption.
> >>>
> >>> Note, earlier we removed a similar assumption on AMD side of LBR support,
> >>> see [1] for details.
> >>>
> >>> [0] https://github.com/anakryiko/retsnoop
> >>> [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
> >>>
> >>> Cc: stable@vger.kernel.org # 6.8+
> >>> Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
> >>> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> >>> ---
> >>> arch/x86/events/intel/core.c | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> >>> index 9e519d8a810a..f82a342b8852 100644
> >>> --- a/arch/x86/events/intel/core.c
> >>> +++ b/arch/x86/events/intel/core.c
> >>> @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
> >>> x86_pmu.pebs_aliases(event);
> >>> }
> >>>
> >>> - if (needs_branch_stack(event) && is_sampling_event(event))
> >>> + if (needs_branch_stack(event))
> >>> event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> >>
> >> To limit the LBR for a sampling event is to avoid unnecessary branch
> >> stack setup for a counting event in the sample read. The above change
> >> should break the sample read case.
> >>
> >> How about the below patch (not test)? Is it good enough for the BPF usage?
> >>
> >> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> >> index 0c9c2706d4ec..8d67cbda916b 100644
> >> --- a/arch/x86/events/intel/core.c
> >> +++ b/arch/x86/events/intel/core.c
> >> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
> >> x86_pmu.pebs_aliases(event);
> >> }
> >>
> >> - if (needs_branch_stack(event) && is_sampling_event(event))
> >> - event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> >> + if (needs_branch_stack(event)) {
> >> + /* Avoid branch stack setup for counting events in SAMPLE READ */
> >> + if (is_sampling_event(event) ||
> >> + !(event->attr.sample_type & PERF_SAMPLE_READ))
> >> + event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> >> + }
> >>
> >
> > I'm sure it will be fine for my use case, as I set only
> > PERF_SAMPLE_BRANCH_STACK.
> >
> > But I'll leave it up to perf subsystem experts to decide if this
> > condition makes sense, because looking at what PERF_SAMPLE_READ is:
> >
> > PERF_SAMPLE_READ
> > Record counter values for all events in a group,
> > not just the group leader.
> >
> > It's not clear why this would disable LBR, if specified.
>
> It only disables the counting event with SAMPLE_READ, since LBR is only
> read in the sampling event's overflow.
>
Ok, sounds good! Would you like to send a proper patch with your
proposed changes?
> Thanks,
> Kan
> >
> >> if (branch_sample_counters(event)) {
> >> struct perf_event *leader, *sibling;
> >>
> >>
> >> Thanks,
> >> Kan
> >>>
> >>> if (branch_sample_counters(event)) {
* Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful for sampling events
2024-09-05 20:33 ` Andrii Nakryiko
@ 2024-09-09 16:02 ` Liang, Kan
From: Liang, Kan @ 2024-09-09 16:02 UTC
To: Andrii Nakryiko
Cc: Andrii Nakryiko, linux-perf-users, peterz, x86, mingo,
linux-kernel, bpf, acme, kernel-team, stable
On 2024-09-05 4:33 p.m., Andrii Nakryiko wrote:
> On Thu, Sep 5, 2024 at 1:29 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>
>>
>>
>> On 2024-09-05 4:22 p.m., Andrii Nakryiko wrote:
>>> On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>>>
>>>>
>>>>
>>>> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
>>>>> It's incorrect to assume that LBR can/should only be used with sampling
>>>>> events. BPF subsystem provides bpf_get_branch_snapshot() BPF helper,
>>>>> which expects a properly setup and activated perf event which allows
>>>>> kernel to capture LBR data.
>>>>>
>>>>> For instance, retsnoop tool ([0]) makes an extensive use of this
>>>>> functionality and sets up perf event as follows:
>>>>>
>>>>> struct perf_event_attr attr;
>>>>>
>>>>> memset(&attr, 0, sizeof(attr));
>>>>> attr.size = sizeof(attr);
>>>>> attr.type = PERF_TYPE_HARDWARE;
>>>>> attr.config = PERF_COUNT_HW_CPU_CYCLES;
>>>>> attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
>>>>> attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
>>>>>
>>>>> Commit referenced in Fixes tag broke this setup by making invalid assumption
>>>>> that LBR is useful only for sampling events. Remove that assumption.
>>>>>
>>>>> Note, earlier we removed a similar assumption on AMD side of LBR support,
>>>>> see [1] for details.
>>>>>
>>>>> [0] https://github.com/anakryiko/retsnoop
>>>>> [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
>>>>>
>>>>> Cc: stable@vger.kernel.org # 6.8+
>>>>> Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
>>>>> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
>>>>> ---
>>>>> arch/x86/events/intel/core.c | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>>>> index 9e519d8a810a..f82a342b8852 100644
>>>>> --- a/arch/x86/events/intel/core.c
>>>>> +++ b/arch/x86/events/intel/core.c
>>>>> @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>>>> x86_pmu.pebs_aliases(event);
>>>>> }
>>>>>
>>>>> - if (needs_branch_stack(event) && is_sampling_event(event))
>>>>> + if (needs_branch_stack(event))
>>>>> event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>>>
>>>> To limit the LBR for a sampling event is to avoid unnecessary branch
>>>> stack setup for a counting event in the sample read. The above change
>>>> should break the sample read case.
>>>>
>>>> How about the below patch (not test)? Is it good enough for the BPF usage?
>>>>
>>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>>> index 0c9c2706d4ec..8d67cbda916b 100644
>>>> --- a/arch/x86/events/intel/core.c
>>>> +++ b/arch/x86/events/intel/core.c
>>>> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>>> x86_pmu.pebs_aliases(event);
>>>> }
>>>>
>>>> - if (needs_branch_stack(event) && is_sampling_event(event))
>>>> - event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>>> + if (needs_branch_stack(event)) {
>>>> + /* Avoid branch stack setup for counting events in SAMPLE READ */
>>>> + if (is_sampling_event(event) ||
>>>> + !(event->attr.sample_type & PERF_SAMPLE_READ))
>>>> + event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>>> + }
>>>>
>>>
>>> I'm sure it will be fine for my use case, as I set only
>>> PERF_SAMPLE_BRANCH_STACK.
>>>
>>> But I'll leave it up to perf subsystem experts to decide if this
>>> condition makes sense, because looking at what PERF_SAMPLE_READ is:
>>>
>>> PERF_SAMPLE_READ
>>> Record counter values for all events in a group,
>>> not just the group leader.
>>>
>>> It's not clear why this would disable LBR, if specified.
>>
>> It only disables the counting event with SAMPLE_READ, since LBR is only
>> read in the sampling event's overflow.
>>
>
> Ok, sounds good! Would you like to send a proper patch with your
> proposed changes?
The patch has been posted. Please give it a try.
https://lore.kernel.org/lkml/20240909155848.326640-1-kan.liang@linux.intel.com/
Thanks,
Kan
>
>> Thanks,
>> Kan
>>>
>>>> if (branch_sample_counters(event)) {
>>>> struct perf_event *leader, *sibling;
>>>>
>>>>
>>>> Thanks,
>>>> Kan
>>>>>
>>>>> if (branch_sample_counters(event)) {
>