public inbox for linux-arm-kernel@lists.infradead.org
From: Leo Yan <leo.yan@arm.com>
To: James Clark <james.clark@linaro.org>
Cc: coresight@lists.linaro.org, linux-arm-kernel@lists.infradead.org,
	Yeoreum Yun <yeoreum.yun@arm.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Will Deacon <will@kernel.org>, Yabin Cui <yabinc@google.com>,
	Keita Morisaki <keyz@google.com>,
	Yuanfang Zhang <quic_yuanfang@quicinc.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Tamas Petz <tamas.petz@arm.com>,
	Thomas Gleixner <tglx@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Mike Leach <mike.leach@arm.com>
Subject: Re: [PATCH v10 16/20] coresight: Add PM callbacks for sink device
Date: Thu, 9 Apr 2026 16:44:37 +0100	[thread overview]
Message-ID: <20260409154437.GU356832@e132581.arm.com> (raw)
In-Reply-To: <e7c6f497-f8f0-409b-b80a-a064b222c08d@linaro.org>

On Thu, Apr 09, 2026 at 11:52:00AM +0100, James Clark wrote:

[...]

> > @@ -1787,15 +1808,32 @@ static int coresight_pm_save(struct coresight_path *path)
> >   	to = list_prev_entry(coresight_path_last_node(path), link);
> >   	coresight_disable_path_from_to(path, from, to);
> > +	ret = coresight_pm_device_save(coresight_get_sink(path));
> > +	if (ret)
> > +		goto sink_failed;
> > +
> 
> The comment directly above this says "Up to the node before sink to avoid
> latency". But then this line goes and saves the sink anyway. So I'm not sure
> what's meant by the comment?

I will refine the comment to: "Control the path up to the node before
the _system_ sink to avoid the latency caused by memory copying;
afterwards, save the context if the sink provides a callback".


> >   	return 0;
> > +
> > +sink_failed:
> > +	if (!coresight_enable_path_from_to(path, coresight_get_mode(source),
> > +					   from, to))
> > +		coresight_pm_device_restore(source);
> > +
> > +	pr_err("Failed in coresight PM save on CPU%d: %d\n",
> > +	       smp_processor_id(), ret);
> > +	this_cpu_write(percpu_pm_failed, true);
> 
> Why does only a failing sink set percpu_pm_failed, when failing to save the
> source exits early?

My intent is to use "percpu_pm_failed == true" to prevent further
issues once we already know the link has failed.  To be clear, this is
not a recovery mechanism; it simply means "if link enabling or disabling
fails during CPU PM, do not continue, to avoid potential hardware
lockups".

A source save failure is a bit special: it happens at the very
beginning of the CoreSight CPU PM flow, and the returned error prevents
the CPU idle flow from proceeding.  As a result, there is nothing
further to do.

> Sashiko has a similar comment that this could result in
> restoring uninitialised source save data later, but a comment in this
> function about why the flow is like this would be helpful.

The concern about "restoring uninitialised source save data later" does
not apply here.

When the save fails, the CPU PM notifier rolls back only the callbacks
that ran before the failing one, by invoking them with
CPU_PM_ENTER_FAILED.  There is no CPU_PM_EXIT, as the idle state is
never entered.

> We have coresight_disable_path_from_to() which always succeeds and doesn't
> return an error. TRBE is the only sink with a pm_save_disable()
> callback, but it always succeeds anyway.
> 
> Would it not be much simpler to require that sink save/restore callbacks
> always succeed and don't return anything? Seems like this percpu_pm_failed
> stuff is extra complexity for a scenario that doesn't exist? The only thing
> that can fail is saving the source but it doesn't goto sink_failed when that
> happens.

As you suggested, I can simplify the code by assuming the sink ops
always succeed, but we might expect complaints from Sashiko.

> Ideally etm4_cpu_save() wouldn't have a return value either. It would be
> good if we could find a way to skip or ignore the timeouts in there somehow
> because that's the only reason it can fail.

It seems to me that having ETMv4 log the timeout and return an error is
not bad, as it is a cheap way to locate issues (a hang caused by a
hardware lockup can otherwise be very time-consuming to debug).

Thanks,
Leo


Thread overview: 30+ messages
2026-04-05 15:02 [PATCH v10 00/20] CoreSight: Refactor power management for CoreSight path Leo Yan
2026-04-05 15:02 ` [PATCH v10 01/20] coresight: Extract device init into coresight_init_device() Leo Yan
2026-04-05 15:02 ` [PATCH v10 02/20] coresight: Populate CPU ID into coresight_device Leo Yan
2026-04-05 15:02 ` [PATCH v10 03/20] coresight: Remove .cpu_id() callback from source ops Leo Yan
2026-04-05 15:02 ` [PATCH v10 04/20] coresight: Take hotplug lock in enable_source_store() for Sysfs mode Leo Yan
2026-04-05 15:02 ` [PATCH v10 05/20] coresight: etm4x: Set per-CPU path on local CPU Leo Yan
2026-04-05 15:02 ` [PATCH v10 06/20] coresight: etm3x: " Leo Yan
2026-04-05 15:02 ` [PATCH v10 07/20] coresight: Register CPU PM notifier in core layer Leo Yan
2026-04-05 15:02 ` [PATCH v10 08/20] coresight: etm4x: Hook CPU PM callbacks Leo Yan
2026-04-05 15:02 ` [PATCH v10 09/20] coresight: etm4x: Remove redundant checks in PM save and restore Leo Yan
2026-04-05 15:02 ` [PATCH v10 10/20] coresight: syscfg: Use IRQ-safe spinlock to protect active variables Leo Yan
2026-04-05 15:02 ` [PATCH v10 11/20] coresight: Move source helper disabling to coresight_disable_path() Leo Yan
2026-04-05 15:02 ` [PATCH v10 12/20] coresight: Control path with range Leo Yan
2026-04-05 15:02 ` [PATCH v10 13/20] coresight: Use helpers to fetch first and last nodes Leo Yan
2026-04-05 15:02 ` [PATCH v10 14/20] coresight: Introduce coresight_enable_source() helper Leo Yan
2026-04-05 15:02 ` [PATCH v10 15/20] coresight: Control path during CPU idle Leo Yan
2026-04-05 15:02 ` [PATCH v10 16/20] coresight: Add PM callbacks for sink device Leo Yan
2026-04-09 10:52   ` James Clark
2026-04-09 12:54     ` James Clark
2026-04-09 13:14     ` Suzuki K Poulose
2026-04-09 14:30       ` James Clark
2026-04-09 14:31         ` James Clark
2026-04-09 14:49         ` Leo Yan
2026-04-09 15:44     ` Leo Yan [this message]
2026-04-05 15:02 ` [PATCH v10 17/20] coresight: trbe: Save and restore state across CPU low power state Leo Yan
2026-04-09 10:52   ` James Clark
2026-04-09 15:54     ` Leo Yan
2026-04-05 15:02 ` [PATCH v10 18/20] coresight: sysfs: Increment refcount only for system tracers Leo Yan
2026-04-05 15:02 ` [PATCH v10 19/20] coresight: Move CPU hotplug callbacks to core layer Leo Yan
2026-04-05 15:02 ` [PATCH v10 20/20] coresight: sysfs: Validate CPU online status for per-CPU sources Leo Yan
