public inbox for linux-kernel@vger.kernel.org
From: Namhyung Kim <namhyung@kernel.org>
To: Tengda Wu <wutengda@huaweicloud.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	song@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
	Adrian Hunter <adrian.hunter@intel.com>,
	kan.liang@linux.intel.com, linux-perf-users@vger.kernel.org,
	linux-kernel@vger.kernel.org, bpf@vger.kernel.org
Subject: Re: [PATCH -next 1/2] perf stat: Increase perf_attr_map entries
Date: Fri, 27 Sep 2024 10:12:24 -0700	[thread overview]
Message-ID: <ZvbnePGVmbWF0fAF@google.com> (raw)
In-Reply-To: <41d1d728-dbf4-4b0d-9855-19cd06e2a594@huaweicloud.com>

On Fri, Sep 27, 2024 at 10:35:54AM +0800, Tengda Wu wrote:
> 
> 
> On 2024/9/26 12:16, Namhyung Kim wrote:
> > On Wed, Sep 25, 2024 at 01:55:22PM +0000, Tengda Wu wrote:
> >> bperf restricts the size of perf_attr_map's entries to 16, which
> >> cannot hold all events in many scenarios. A typical example is
> >> when the user specifies `-a -ddd` ([0]). Other cases, such as
> >> top-down analysis, often require more than 16 PMU events to be
> >> collected simultaneously.
> >>
> >> Fix this by increasing the perf_attr_map entries to 100, and add an
> >> event count check in bperf__load() so that users receive a friendlier
> >> error message when the event limit is reached.
> >>
> >>   [0] https://lore.kernel.org/all/20230104064402.1551516-3-namhyung@kernel.org/
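
For background on where the limit comes from: the events of all bperf
sessions are registered in a pinned BPF hash map keyed by the full
perf_event_attr, and that map is created with max_entries set to
ATTR_MAP_SIZE, so no more than that many distinct events can be tracked
at once. A rough, simplified sketch of creating and pinning such a map
with libbpf (not the exact code in bpf_counter.c; the entry layout and
pin path here are only illustrative):

    #include <errno.h>
    #include <linux/perf_event.h>
    #include <bpf/bpf.h>

    #define ATTR_MAP_PATH "/sys/fs/bpf/perf_attr_map"
    #define ATTR_MAP_SIZE 100

    /* illustrative value layout: ids of the shared per-event objects */
    struct attr_map_entry {
            __u32 link_id;
            __u32 diff_map_id;
    };

    static int create_or_open_attr_map(void)
    {
            int fd;

            /* hash map keyed by the whole perf_event_attr; max_entries
             * is what caps how many distinct events can be shared */
            fd = bpf_map_create(BPF_MAP_TYPE_HASH, "perf_attr_map",
                                sizeof(struct perf_event_attr),
                                sizeof(struct attr_map_entry),
                                ATTR_MAP_SIZE, NULL);
            if (fd < 0)
                    return fd;

            /* pin it so later perf stat invocations can find and reuse
             * the counters; an existing pin would be opened with
             * bpf_obj_get() instead */
            if (bpf_obj_pin(fd, ATTR_MAP_PATH) < 0 && errno != EEXIST)
                    return -1;

            return fd;
    }
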
> > 
> > Apparently this patch was never applied.  I don't know how many you need,
> > but having too many events at the same time won't be very useful because
> > multiplexing could reduce the accuracy.
> > 
> 
> Could you please explain why patch [0] was not merged at that time? I couldn't
> find this information in the previous emails.

I guess it just fell through the cracks. :)

> 
> In my scenario, we collect more than 40 events to support the necessary
> metric calculations, so multiplexing is inevitable. Although multiplexing
> may reduce accuracy, that loss is acceptable for the purpose of metric
> calculation, and perf has the same multiplexing issue. Removing the event
> limit for bperf gives users an additional option.
> 
> In addition to accuracy, we also care about overhead. I compared the overhead
> of bperf and perf by running ./lat_ctx from lmbench [1], and found that the
> overhead of bperf stat is about 4% lower than that of perf. This is why we
> chose to use bperf in some extreme scenarios.

Ok, thanks for the explanation.  I think it's ok to increase the limit.
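
For anyone following along, the accuracy concern is about scaling: when
events are multiplexed, perf reports an estimate scaled up from the time
the event actually spent on a counter (value * time_enabled /
time_running), which assumes the workload behaves uniformly over the run.
A minimal, self-contained sketch of that arithmetic (names and numbers
here are made up, assuming the read format includes
PERF_FORMAT_TOTAL_TIME_ENABLED and PERF_FORMAT_TOTAL_TIME_RUNNING):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* values as read back with PERF_FORMAT_TOTAL_TIME_ENABLED |
     * PERF_FORMAT_TOTAL_TIME_RUNNING */
    struct counter {
            uint64_t value;         /* raw count while on a PMC */
            uint64_t time_enabled;  /* ns the event was enabled */
            uint64_t time_running;  /* ns it was actually counting */
    };

    /* scale the raw count up to the full enabled window; the estimate
     * is only as good as the workload is steady, which is the accuracy
     * loss multiplexing introduces */
    static uint64_t scaled_count(const struct counter *c)
    {
            if (!c->time_running)
                    return 0;
            return (uint64_t)((double)c->value *
                              c->time_enabled / c->time_running);
    }

    int main(void)
    {
            /* hypothetical: the event was scheduled 25% of the time */
            struct counter c = {
                    .value        = 1000000,
                    .time_enabled = 400000000,
                    .time_running = 100000000,
            };

            printf("estimated count: %" PRIu64 "\n", scaled_count(&c));
            return 0;
    }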

Thanks,
Namhyung

> 
>   [1] https://github.com/intel/lmbench
> 
> Thanks,
> Tengda
> 
> > 
> >>
> >> Fixes: 7fac83aaf2ee ("perf stat: Introduce 'bperf' to share hardware PMCs with BPF")
> >> Signed-off-by: Tengda Wu <wutengda@huaweicloud.com>
> >> ---
> >>  tools/perf/util/bpf_counter.c | 8 +++++++-
> >>  1 file changed, 7 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
> >> index 7a8af60e0f51..3346129c20cf 100644
> >> --- a/tools/perf/util/bpf_counter.c
> >> +++ b/tools/perf/util/bpf_counter.c
> >> @@ -28,7 +28,7 @@
> >>  #include "bpf_skel/bperf_leader.skel.h"
> >>  #include "bpf_skel/bperf_follower.skel.h"
> >>  
> >> -#define ATTR_MAP_SIZE 16
> >> +#define ATTR_MAP_SIZE 100
> >>  
> >>  static inline void *u64_to_ptr(__u64 ptr)
> >>  {
> >> @@ -451,6 +451,12 @@ static int bperf__load(struct evsel *evsel, struct target *target)
> >>  	enum bperf_filter_type filter_type;
> >>  	__u32 filter_entry_cnt, i;
> >>  
> >> +	if (evsel->evlist->core.nr_entries > ATTR_MAP_SIZE) {
> >> +		pr_err("Too many events, please limit to %d or less\n",
> >> +			ATTR_MAP_SIZE);
> >> +		return -1;
> >> +	}
> >> +
> >>  	if (bperf_check_target(evsel, target, &filter_type, &filter_entry_cnt))
> >>  		return -1;
> >>  
> >> -- 
> >> 2.34.1
> >>
> 


Thread overview: 8+ messages
2024-09-25 13:55 [PATCH -next 0/2] perf stat: a set of small fixes for bperf Tengda Wu
2024-09-25 13:55 ` [PATCH -next 1/2] perf stat: Increase perf_attr_map entries Tengda Wu
2024-09-26  4:16   ` Namhyung Kim
2024-09-27  2:35     ` Tengda Wu
2024-09-27 17:12       ` Namhyung Kim [this message]
2024-09-29  0:54         ` Tengda Wu
2024-09-25 13:55 ` [PATCH -next 2/2] perf stat: Fix incorrect display of bperf when event count is 0 Tengda Wu
2024-10-09  5:20 ` [PATCH -next 0/2] perf stat: a set of small fixes for bperf Namhyung Kim
