From: Ingo Molnar <mingo@kernel.org>
To: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Stephane Eranian <eranian@google.com>,
Kan Liang <kan.liang@linux.intel.com>,
Ravi Bangoria <ravi.bangoria@amd.com>,
stable@vger.kernel.org
Subject: Re: [PATCH] perf/core: Introduce cpuctx->cgrp_ctx_list
Date: Wed, 4 Oct 2023 09:26:46 +0200 [thread overview]
Message-ID: <ZR0TtjhGT+Em+/ti@gmail.com> (raw)
In-Reply-To: <20231004040844.797044-1-namhyung@kernel.org>
* Namhyung Kim <namhyung@kernel.org> wrote:
> AFAIK we don't have a tool to measure the context switch overhead
> directly. (I think I should add one to perf ftrace latency). But I can
> see it with a simple perf bench command like this.
>
> $ perf bench sched pipe -l 100000
> # Running 'sched/pipe' benchmark:
> # Executed 100000 pipe operations between two processes
>
> Total time: 0.650 [sec]
>
> 6.505740 usecs/op
> 153710 ops/sec
>
> It runs two tasks that communicate with each other using a pipe, so it
> should stress the context-switch code. These are the normal numbers on
> my system. But after I run these two perf stat commands in the
> background, the numbers vary a lot.
>
> $ sudo perf stat -a -e cycles -G user.slice -- sleep 100000 &
> $ sudo perf stat -a -e uncore_imc/cas_count_read/ -- sleep 10000 &
>
> I will show the last two lines of perf bench sched pipe output for
> three runs.
>
> 58.597060 usecs/op # run 1
> 17065 ops/sec
>
> 11.329240 usecs/op # run 2
> 88267 ops/sec
>
> 88.481920 usecs/op # run 3
> 11301 ops/sec
>
> I think the deviation comes from the fact that uncore events are
> managed by a certain number of CPUs only. If the target process runs
> on a CPU that manages an uncore PMU, it'd take longer. Otherwise it
> won't affect the performance much.
The pipe-messaging context-switch numbers will also vary a lot depending on
CPU migration patterns.
The best way to measure context-switch overhead is to pin that task
to a single CPU with something like:
$ taskset 1 perf stat --null --repeat 10 perf bench sched pipe -l 10000 >/dev/null
Performance counter stats for 'perf bench sched pipe -l 10000' (10 runs):
0.049798 +- 0.000102 seconds time elapsed ( +- 0.21% )
As you can see, the 0.21% stddev is pretty low.
If we allow 2 CPUs, both the runtime and the stddev are much higher:
$ taskset 3 perf stat --null --repeat 10 perf bench sched pipe -l 10000 >/dev/null
Performance counter stats for 'perf bench sched pipe -l 10000' (10 runs):
1.4835 +- 0.0383 seconds time elapsed ( +- 2.58% )
Thanks,
Ingo
Thread overview: 13+ messages
2023-10-04 4:08 [PATCH] perf/core: Introduce cpuctx->cgrp_ctx_list Namhyung Kim
2023-10-04 7:26 ` Ingo Molnar [this message]
2023-10-04 15:01 ` Namhyung Kim
2023-10-04 15:42 ` Ingo Molnar
2023-10-04 21:27 ` Namhyung Kim
2023-10-04 16:02 ` Peter Zijlstra
2023-10-04 16:32 ` Namhyung Kim
2023-10-09 21:04 ` Peter Zijlstra
2023-10-10 4:57 ` Namhyung Kim
2023-10-11 3:45 ` Namhyung Kim
2023-10-11 7:51 ` Peter Zijlstra
2023-10-11 9:50 ` Peter Zijlstra
2023-10-11 16:02 ` Namhyung Kim