From: Peter Zijlstra <peterz@infradead.org>
To: Stephane Eranian <eranian@google.com>
Cc: Namhyung Kim <namhyung@kernel.org>,
Ingo Molnar <mingo@kernel.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Jiri Olsa <jolsa@redhat.com>, Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
LKML <linux-kernel@vger.kernel.org>,
Andi Kleen <ak@linux.intel.com>, Ian Rogers <irogers@google.com>,
Song Liu <songliubraving@fb.com>, Tejun Heo <tj@kernel.org>,
kernel test robot <lkp@intel.com>,
Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v3 1/2] perf/core: Share an event with multiple cgroups
Date: Tue, 20 Apr 2021 11:48:48 +0200 [thread overview]
Message-ID: <YH6jgMWpFXy+pFVP@hirez.programming.kicks-ass.net> (raw)
In-Reply-To: <CABPqkBTncWfeFWY=kYXTAr3gRjpyFVL-YJN4K1YOPpHO35PHBw@mail.gmail.com>
On Tue, Apr 20, 2021 at 01:34:40AM -0700, Stephane Eranian wrote:
> The sampling approach will certainly incur more overhead and be at
> risk of losing the ability to
> reconstruct the total counter per-cgroup, unless you set the period
> for SW_CGROUP_SWITCHES to
> 1. But then, you run the risk of losing samples if the buffer is full
> or sampling is throttled.
> In some scenarios, we believe the number of context switches between
> cgroups could be quite high (>> 1000/s).
> And on top you would have to add the processing of the samples to
> extract the counts per cgroup. That would require
> a cgroup synthesis in perf record and some post-processing in perf
> report. We are interested in using the data live
> to make some policy decisions, so a counting approach with perf stat
> will always be best.
Can you please configure your MUA to sanely (re)flow text? The above
random line-breaks are *so* painful to read.
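For context, the two approaches being weighed in the quote above map onto perf tool invocations roughly as follows. This is a sketch only: `cgroup-switches` is the tool-level name later given to the SW_CGROUP_SWITCHES software event, `--for-each-cgroup` is perf stat's per-cgroup counting mode, and the cgroup names `A,B` are placeholders; all are assumed to be available in the perf build at hand.

```shell
#!/bin/sh
# Sketch only: build (and print) the two invocations rather than run them,
# since both require a perf build with the matching features and root.

# Counting approach: perf stat aggregates one count per listed cgroup;
# no samples to lose, suitable for live policy decisions.
counting="perf stat -a -e instructions --for-each-cgroup A,B -- sleep 1"

# Sampling approach: sample every cgroup switch (period 1, via -c 1) and
# reconstruct per-cgroup totals in post-processing; samples can be dropped
# when the ring buffer fills or sampling is throttled.
sampling="perf record -a -e cgroup-switches -c 1 -- sleep 1"

printf '%s\n' "$counting" "$sampling"
```

The sampling variant additionally needs a post-processing pass (cgroup synthesis in perf record, aggregation in perf report or a live consumer) to turn per-switch samples back into per-cgroup counts, which is the overhead the quoted text objects to.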
Thread overview: 24+ messages
2021-04-13 15:53 [PATCH v3 0/2] perf core: Sharing events with multiple cgroups Namhyung Kim
2021-04-13 15:53 ` [PATCH v3 1/2] perf/core: Share an event " Namhyung Kim
2021-04-15 14:51 ` Peter Zijlstra
2021-04-15 23:48 ` Namhyung Kim
2021-04-16 9:26 ` Peter Zijlstra
2021-04-16 9:29 ` Peter Zijlstra
2021-04-16 10:19 ` Namhyung Kim
2021-04-16 10:27 ` Peter Zijlstra
2021-04-16 11:22 ` Namhyung Kim
2021-04-16 11:59 ` Peter Zijlstra
2021-04-16 12:19 ` Namhyung Kim
2021-04-16 13:39 ` Peter Zijlstra
2021-05-09 7:13 ` Namhyung Kim
2021-04-16 10:18 ` Namhyung Kim
2021-04-16 9:49 ` Namhyung Kim
2021-04-20 10:28 ` Peter Zijlstra
2021-04-20 18:37 ` Namhyung Kim
2021-04-20 18:43 ` Peter Zijlstra
2021-04-20 8:34 ` Stephane Eranian
2021-04-20 9:48 ` Peter Zijlstra [this message]
2021-04-20 11:28 ` Peter Zijlstra
2021-04-21 19:37 ` Namhyung Kim
2021-05-03 21:53 ` Namhyung Kim
2021-04-13 15:53 ` [PATCH v3 2/2] perf/core: Support reading group events with shared cgroups Namhyung Kim