From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Swapnil Sapkal <swapnil.sapkal@amd.com>
Cc: peterz@infradead.org, mingo@redhat.com, namhyung@kernel.org,
irogers@google.com, james.clark@arm.com, ravi.bangoria@amd.com,
yu.c.chen@intel.com, mark.rutland@arm.com,
alexander.shishkin@linux.intel.com, jolsa@kernel.org,
rostedt@goodmis.org, vincent.guittot@linaro.org,
adrian.hunter@intel.com, kan.liang@linux.intel.com,
gautham.shenoy@amd.com, kprateek.nayak@amd.com,
juri.lelli@redhat.com, yangjihong@bytedance.com,
void@manifault.com, tj@kernel.org, sshegde@linux.ibm.com,
ctshao@google.com, quic_zhonhan@quicinc.com,
thomas.falcon@intel.com, blakejones@google.com,
ashelat@redhat.com, leo.yan@arm.com, dvyukov@google.com,
ak@linux.intel.com, yujie.liu@intel.com, graham.woodward@arm.com,
ben.gainey@arm.com, vineethr@linux.ibm.com,
tim.c.chen@linux.intel.com, linux@treblig.org,
santosh.shukla@amd.com, sandipan.das@amd.com,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org,
James Clark <james.clark@linaro.org>
Subject: Re: [PATCH v5 07/10] perf sched stats: Add support for live mode
Date: Thu, 22 Jan 2026 12:42:26 -0300 [thread overview]
Message-ID: <aXJFYpY8_oL2GMhV@x1> (raw)
In-Reply-To: <aXF1OECUByl0kgCF@x1>
On Wed, Jan 21, 2026 at 09:54:16PM -0300, Arnaldo Carvalho de Melo wrote:
> On Mon, Jan 19, 2026 at 05:58:29PM +0000, Swapnil Sapkal wrote:
> > Live mode works similarly to a simple `perf stat` command: it profiles
> > the target and prints results on the terminal as soon as the target
> > finishes.
> >
> > Example usage:
> >
> > # perf sched stats -- true
> > Description
> > ----------------------------------------------------------------------------------------------------
> > DESC -> Description of the field
> > COUNT -> Value of the field
> > PCT_CHANGE -> Percent change with corresponding base value
> > AVG_JIFFIES -> Avg time in jiffies between two consecutive occurrence of event
> > ----------------------------------------------------------------------------------------------------
> >
> > Time elapsed (in jiffies) : 1
> > ----------------------------------------------------------------------------------------------------
> > CPU: <ALL CPUS SUMMARY>
> > ----------------------------------------------------------------------------------------------------
> > DESC COUNT PCT_CHANGE
> > ----------------------------------------------------------------------------------------------------
> > yld_count : 0
> > array_exp : 0
> > sched_count : 0
> > sched_goidle : 0 ( 0.00% )
> > ttwu_count : 0
> > ttwu_local : 0 ( 0.00% )
> > rq_cpu_time : 27875
> > run_delay : 0 ( 0.00% )
> > pcount : 0
> > ----------------------------------------------------------------------------------------------------
> > CPU: <ALL CPUS SUMMARY> | DOMAIN: SMT
> > ----------------------------------------------------------------------------------------------------
> > DESC COUNT AVG_JIFFIES
> > ----------------------------------------- <Category busy> ------------------------------------------
> > busy_lb_count : 0 $ 0.00 $
> > busy_lb_balanced : 0 $ 0.00 $
> > busy_lb_failed : 0 $ 0.00 $
> > busy_lb_imbalance_load : 0
> > busy_lb_imbalance_util : 0
> > busy_lb_imbalance_task : 0
> > busy_lb_imbalance_misfit : 0
> > busy_lb_gained : 0
> > busy_lb_hot_gained : 0
> > busy_lb_nobusyq : 0 $ 0.00 $
> > busy_lb_nobusyg : 0 $ 0.00 $
> > *busy_lb_success_count : 0
> > *busy_lb_avg_pulled : 0.00
> >
> > ... and so on. Output will show similar data for all the cpus in the
> > system.
> >
> > Co-developed-by: Ravi Bangoria <ravi.bangoria@amd.com>
> > Signed-off-by: Ravi Bangoria <ravi.bangoria@amd.com>
> > Tested-by: James Clark <james.clark@linaro.org>
> > Signed-off-by: Swapnil Sapkal <swapnil.sapkal@amd.com>
> > ---
> > tools/perf/builtin-sched.c | 99 +++++++++++++++++++++++++++++++++++++-
> > tools/perf/util/header.c | 3 +-
> > tools/perf/util/header.h | 3 ++
> > 3 files changed, 102 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
> > index c6b054b9b12a..8993308439bc 100644
> > --- a/tools/perf/builtin-sched.c
> > +++ b/tools/perf/builtin-sched.c
> > @@ -4426,6 +4426,103 @@ static int perf_sched__schedstat_report(struct perf_sched *sched)
> > return err;
> > }
> >
> > +static int process_synthesized_event_live(const struct perf_tool *tool __maybe_unused,
> > + union perf_event *event,
> > + struct perf_sample *sample __maybe_unused,
> > + struct machine *machine __maybe_unused)
> > +{
> > + return perf_sched__process_schedstat(tool, NULL, event);
> > +}
> > +
> > +static int perf_sched__schedstat_live(struct perf_sched *sched,
> > + int argc, const char **argv)
> > +{
> > + struct cpu_domain_map **cd_map = NULL;
> > + struct target target = {};
> > + u32 __maybe_unused md;
> > + struct evlist *evlist;
> > + u32 nr = 0, sv;
> > + int reset = 0;
> > + int err = 0;
> > +
> > + signal(SIGINT, sighandler);
> > + signal(SIGCHLD, sighandler);
> > + signal(SIGTERM, sighandler);
> > +
> > + evlist = evlist__new();
> > + if (!evlist)
> > + return -ENOMEM;
> > +
> > + /*
> > + * `perf sched schedstat` does not support workload profiling (-p pid)
> > + * since /proc/schedstat file contains cpu specific data only. Hence, a
> > + * profile target is either set of cpus or systemwide, never a process.
> > + * Note that, although `-- <workload>` is supported, profile data are
> > + * still cpu/systemwide.
> > + */
> > + if (cpu_list)
> > + target.cpu_list = cpu_list;
> > + else
> > + target.system_wide = true;
> > +
> > + if (argc) {
> > + err = evlist__prepare_workload(evlist, &target, argv, false, NULL);
> > + if (err)
> > + goto out;
> > + }
> > +
> > + err = evlist__create_maps(evlist, &target);
> > + if (err < 0)
> > + goto out;
> > +
> > + user_requested_cpus = evlist->core.user_requested_cpus;
> > +
> > + err = perf_event__synthesize_schedstat(&(sched->tool),
> > + process_synthesized_event_live,
> > + user_requested_cpus);
> > + if (err < 0)
> > + goto out;
> > +
> > + err = enable_sched_schedstats(&reset);
> > + if (err < 0)
> > + goto out;
> > +
> > + if (argc)
> > + evlist__start_workload(evlist);
> > +
> > + /* wait for signal */
> > + pause();
> > +
> > + if (reset) {
> > + err = disable_sched_schedstat();
> > + if (err < 0)
> > + goto out;
> > + }
> > +
> > + err = perf_event__synthesize_schedstat(&(sched->tool),
> > + process_synthesized_event_live,
> > + user_requested_cpus);
> > + if (err)
> > + goto out;
> > +
> > + setup_pager();
> > +
> > + if (list_empty(&cpu_head)) {
> > + pr_err("Data is not available\n");
> > + err = -1;
> > + goto out;
> > + }
> > +
> > + nr = cpu__max_present_cpu().cpu;
> > + cd_map = build_cpu_domain_map(&sv, &md, nr);
> > + show_schedstat_data(&cpu_head, cd_map);
> > +out:
>
> With clang on almalinux 10:
>
> + make ARCH= CROSS_COMPILE= EXTRA_CFLAGS= -C tools/perf O=/tmp/build/perf CC=clang
> make: Entering directory '/git/perf-6.19.0-rc4/tools/perf'
> GEN /tmp/build/perf/pmu-events/arch/powerpc/power8/memory.json
> builtin-sched.c:4709:6: error: variable 'sv' is used uninitialized whenever 'if' condition is true [-Werror,-Wsometimes-uninitialized]
> 4709 | if (list_empty(&cpu_head)) {
> | ^~~~~~~~~~~~~~~~~~~~~
> GEN /tmp/build/perf/pmu-events/arch/powerpc/power8/metrics.json
> GEN /tmp/build/perf/pmu-events/arch/powerpc/power8/other.json
> CC /tmp/build/perf/tests/kmod-path.o
> builtin-sched.c:4719:31: note: uninitialized use occurs here
> GEN /tmp/build/perf/pmu-events/arch/powerpc/power8/pipeline.json
> 4719 | free_cpu_domain_info(cd_map, sv, nr);
> | ^~
>
Moving the label does the trick, as free_cpu_domain_info() only needs to
be called if build_cpu_domain_map() was called.

- Arnaldo
diff --git a/tools/perf/builtin-sched.c b/tools/perf/builtin-sched.c
index 8993308439bc5998..ec9fa29196b24f5a 100644
--- a/tools/perf/builtin-sched.c
+++ b/tools/perf/builtin-sched.c
@@ -4516,8 +4516,8 @@ static int perf_sched__schedstat_live(struct perf_sched *sched,
nr = cpu__max_present_cpu().cpu;
cd_map = build_cpu_domain_map(&sv, &md, nr);
show_schedstat_data(&cpu_head, cd_map);
-out:
free_cpu_domain_info(cd_map, sv, nr);
+out:
free_schedstat(&cpu_head);
evlist__delete(evlist);
return err;