From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>, Jiri Olsa <jolsa@kernel.org>,
Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
linux-perf-users@vger.kernel.org
Subject: Re: [perf stat] Extend --cpu to non-system-wide runs too? was Re: [PATCH v3] perf bench sched pipe: Add -G/--cgroups option
Date: Tue, 17 Oct 2023 15:31:30 -0300
Message-ID: <ZS7TAr1bpOfkeNuv@kernel.org>
In-Reply-To: <ZS6BgfOUeWQnI1mS@gmail.com>
On Tue, Oct 17, 2023 at 02:43:45PM +0200, Ingo Molnar wrote:
> * Arnaldo Carvalho de Melo <acme@kernel.org> wrote:
> > On Tue, Oct 17, 2023 at 01:40:07PM +0200, Ingo Molnar wrote:
> > > Side note: it might make sense to add a sane cpumask/affinity setting
> > > option to perf stat itself:
> > > perf stat --cpumask
> > > ... or so?
> > > We do have -C:
> > > -C, --cpu <cpu> list of cpus to monitor in system-wide
> > > ... but that's limited to --all-cpus, right?
> > > Perhaps we could extend --cpu to non-system-wide runs too?
> > Maybe I misunderstood your question, but it's a list of cpus to limit the
> > counting:
> Ok.
> So I thought that "--cpumask mask/list/etc" should simply do what 'taskset'
> is doing: using the sched_setaffinity() syscall to restrict the current
> workload and all its children to the given CPUs.
> There's no real impact on perf stat itself: it could just call
> sched_setaffinity() early on, and not bother about it afterwards?
> Having it built into perf would simply make it easier not to forget
> running 'taskset'. :-)
Would that be the only advantage?
I think using taskset isn't that much of a burden and keeps with the
Unix tradition, no? :-\
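Just to make sure we're talking about the same thing: what you describe
would boil down to perf doing early on what taskset(1) does, i.e.
roughly the sketch below (hypothetical helper name, simplified, only a
comma separated list and no "1-4" style ranges, not actual tools/perf
code):

/*
 * Hypothetical sketch of a perf stat --cpumask option: done early in
 * cmd_stat(), before the workload is forked, so the child inherits the
 * mask, same net effect as running the whole thing under taskset(1).
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <string.h>

static int set_workload_cpumask(const char *list) /* e.g. "1,2,30" */
{
	cpu_set_t set;
	char *s = strdup(list), *tok, *save = NULL;

	if (s == NULL)
		return -1;

	CPU_ZERO(&set);
	for (tok = strtok_r(s, ",", &save); tok; tok = strtok_r(NULL, ",", &save))
		CPU_SET(atoi(tok), &set);
	free(s);

	/* pid 0 == current process; children inherit this across fork()/exec() */
	return sched_setaffinity(0, sizeof(set), &set);
}
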
See, using 'perf record -C', i.e. sampling, will use sched_setaffinity,
and in that case there is a clear advantage... wait, this train of
thought made me remember something, but it's just about counter setup,
not about the workload:
[acme@five perf-tools-next]$ grep affinity__set tools/perf/*.c
tools/perf/builtin-stat.c: else if (affinity__setup(&saved_affinity) < 0)
tools/perf/builtin-stat.c: if (affinity__setup(&saved_affinity) < 0)
[acme@five perf-tools-next]$
/*
 * perf_event_open does an IPI internally to the target CPU.
 * It is more efficient to change perf's affinity to the target
 * CPU and then set up all events on that CPU, so we amortize
 * CPU communication.
 */
void affinity__set(struct affinity *a, int cpu)
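The pattern is roughly the following; a simplified illustration of the
idea, not the actual tools/perf implementation, which switches affinity
once per CPU and then sets up all the events for that CPU while pinned
there, which is what amortizes the IPIs:

/*
 * Simplified sketch of the idea behind affinity__set(): pin ourselves to
 * the CPU we are about to open a counter on, so perf_event_open() doesn't
 * have to IPI a remote CPU, then restore the saved mask.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_counter_on_cpu(struct perf_event_attr *attr, int cpu)
{
	cpu_set_t saved, pinned;
	int fd;

	if (sched_getaffinity(0, sizeof(saved), &saved))  /* remember where we may run */
		return -1;

	CPU_ZERO(&pinned);
	CPU_SET(cpu, &pinned);
	sched_setaffinity(0, sizeof(pinned), &pinned);     /* move to the target CPU */

	fd = syscall(__NR_perf_event_open, attr, -1, cpu, -1, 0); /* system wide, this CPU */

	sched_setaffinity(0, sizeof(saved), &saved);       /* restore the original mask */
	return fd;
}
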
[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -e cycles -a sleep 1
Performance counter stats for 'system wide':
6,319,186,681 cycles
1.002665795 seconds time elapsed
Summary of events:
perf (24307), 396 events, 87.4%
   syscall            calls  errors    total      min      avg      max   stddev
                                      (msec)   (msec)   (msec)   (msec)      (%)
   -----------------  -----  ------  -------  -------  -------  -------  -------
   sched_setaffinity    198       0    4.544    0.006    0.023    0.042    2.30%
[root@five ~]#
[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1 -e cycles -a sleep 1
Performance counter stats for 'system wide':
105,311,506 cycles
1.001203282 seconds time elapsed
Summary of events:
perf (24633), 24 events, 29.6%
   syscall            calls  errors    total      min      avg      max   stddev
                                      (msec)   (msec)   (msec)   (msec)      (%)
   -----------------  -----  ------  -------  -------  -------  -------  -------
   sched_setaffinity     12       0    0.105    0.005    0.009    0.039   32.07%
[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2 -e cycles -a sleep 1
Performance counter stats for 'system wide':
131,474,375 cycles
1.001324346 seconds time elapsed
Summary of events:
perf (24636), 36 events, 38.7%
   syscall            calls  errors    total      min      avg      max   stddev
                                      (msec)   (msec)   (msec)   (msec)      (%)
   -----------------  -----  ------  -------  -------  -------  -------  -------
   sched_setaffinity     18       0    0.442    0.000    0.025    0.093   24.75%
[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2,30 -e cycles -a sleep 1
Performance counter stats for 'system wide':
191,674,889 cycles
1.001280015 seconds time elapsed
Summary of events:
perf (24639), 48 events, 45.7%
   syscall            calls  errors    total      min      avg      max   stddev
                                      (msec)   (msec)   (msec)   (msec)      (%)
   -----------------  -----  ------  -------  -------  -------  -------  -------
   sched_setaffinity     24       0    0.835    0.000    0.035    0.144   24.40%
[root@five ~]#
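FWIW the counts line up with one round of affinity juggling per
monitored CPU: 12 calls for one CPU, 18 for two, 24 for three, i.e.
about 6 sched_setaffinity() calls per CPU plus a fixed ~6 call setup
cost, which would also match the 198 calls in the system wide run if
this box exposes 32 CPUs (6 + 6 * 32 = 198).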
Too much affinity setting :-)
- Arnaldo