public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM
@ 2026-01-21  1:54 Kevin Hilman (TI)
  2026-01-21  1:54 ` [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide Kevin Hilman (TI)
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Kevin Hilman (TI) @ 2026-01-21  1:54 UTC (permalink / raw)
  To: Rafael J. Wysocki, Ulf Hansson, linux-pm; +Cc: Dhruva Gole, linux-kernel

Currently QoS resume latencies are only considered for runtime PM
transitions of pmdomains, which remains the default.

In order to also support QoS resume latencies during system-wide PM,
add a new flag to indicate a resume latency should be used for
system-wide PM *instead of* runtime PM.

For example, by doing this:

   # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us

dev0 now has a resume latency of 500000 usec for runtime PM
transitions.

Then, if the new flag is also set:

   # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys

That 500000 usec latency now applies to system-wide PM (and not to
runtime PM).

If a user requires a different latency value for system-wide PM than
for runtime PM, the runtime PM value can be set for normal operation,
the system-wide value (and flag) can be set by userspace before
suspend, and the runtime PM value can be restored after resume.
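
The set/restore sequence above can be sketched as a shell session.
This is only an illustration of the flow described in this cover
letter; the device path and the 100000/500000 usec values are
placeholders:

```shell
# Hypothetical device path; substitute the real device.
DEV=/sys/devices/platform/dev0/power

# Normal operation: resume latency constraint applies to runtime PM.
echo 100000 > $DEV/pm_qos_resume_latency_us

# Before suspend: set a suspend-specific value and mark it
# system-wide only via the new flag.
echo 500000 > $DEV/pm_qos_resume_latency_us
echo 1 > $DEV/pm_qos_latency_sys

# After resume: clear the flag and restore the runtime PM value.
echo 0 > $DEV/pm_qos_latency_sys
echo 100000 > $DEV/pm_qos_resume_latency_us
```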

To: Rafael J. Wysocki <rafael@kernel.org>
To: Ulf Hansson <ulf.hansson@linaro.org>
To: linux-pm@vger.kernel.org

Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
---
Kevin Hilman (TI) (2):
      PM / QoS: add flag to indicate latency applies system-wide
      pmdomain: add support for system-wide resume latency constraints

 drivers/base/power/sysfs.c  | 27 +++++++++++++++++++++++++++
 drivers/pmdomain/governor.c | 42 +++++++++++++++++++++++++++++++++++++++++-
 include/linux/pm_qos.h      |  2 ++
 3 files changed, 70 insertions(+), 1 deletion(-)
---
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
change-id: 20260120-topic-lpm-pmdomain-device-constraints-e5e78ce48502

Best regards,
--  
Kevin Hilman (TI) <khilman@baylibre.com>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide
  2026-01-21  1:54 [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Kevin Hilman (TI)
@ 2026-01-21  1:54 ` Kevin Hilman (TI)
  2026-01-27 16:20   ` Rafael J. Wysocki
  2026-01-28 10:25   ` Ulf Hansson
  2026-01-21  1:54 ` [PATCH 2/2] pmdomain: add support for system-wide resume latency constraints Kevin Hilman (TI)
  2026-01-28 10:33 ` [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Ulf Hansson
  2 siblings, 2 replies; 13+ messages in thread
From: Kevin Hilman (TI) @ 2026-01-21  1:54 UTC (permalink / raw)
  To: Rafael J. Wysocki, Ulf Hansson, linux-pm; +Cc: Dhruva Gole, linux-kernel

By default, the QoS resume latency currently applies only to runtime
PM decisions.

Add a new PM_QOS_FLAG_LATENCY_SYS flag to indicate that the
resume latency QoS constraint should be applied to system-wide
PM, and *not* runtime PM.

Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
---
 drivers/base/power/sysfs.c | 27 +++++++++++++++++++++++++++
 include/linux/pm_qos.h     |  2 ++
 2 files changed, 29 insertions(+)

diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index 13b31a3adc77..9136c13c17e4 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -316,6 +316,32 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
 
 static DEVICE_ATTR_RW(pm_qos_no_power_off);
 
+static ssize_t pm_qos_latency_sys_show(struct device *dev,
+				       struct device_attribute *attr,
+				       char *buf)
+{
+	return sysfs_emit(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
+					  & PM_QOS_FLAG_LATENCY_SYS));
+}
+
+static ssize_t pm_qos_latency_sys_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t n)
+{
+	int ret;
+
+	if (kstrtoint(buf, 0, &ret))
+		return -EINVAL;
+
+	if (ret != 0 && ret != 1)
+		return -EINVAL;
+
+	ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_LATENCY_SYS, ret);
+	return ret < 0 ? ret : n;
+}
+
+static DEVICE_ATTR_RW(pm_qos_latency_sys);
+
 #ifdef CONFIG_PM_SLEEP
 static const char _enabled[] = "enabled";
 static const char _disabled[] = "disabled";
@@ -681,6 +707,7 @@ static const struct attribute_group pm_qos_latency_tolerance_attr_group = {
 
 static struct attribute *pm_qos_flags_attrs[] = {
 	&dev_attr_pm_qos_no_power_off.attr,
+	&dev_attr_pm_qos_latency_sys.attr,
 	NULL,
 };
 static const struct attribute_group pm_qos_flags_attr_group = {
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 6cea4455f867..aededda52b6b 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -37,6 +37,8 @@ enum pm_qos_flags_status {
 #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT	(-1)
 
 #define PM_QOS_FLAG_NO_POWER_OFF	(1 << 0)
+/* latency value applies to system-wide suspend/s2idle */
+#define PM_QOS_FLAG_LATENCY_SYS		(1 << 1)
 
 enum pm_qos_type {
 	PM_QOS_UNITIALIZED,

-- 
2.51.0



* [PATCH 2/2] pmdomain: add support for system-wide resume latency constraints
  2026-01-21  1:54 [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Kevin Hilman (TI)
  2026-01-21  1:54 ` [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide Kevin Hilman (TI)
@ 2026-01-21  1:54 ` Kevin Hilman (TI)
  2026-02-02 11:49   ` Ulf Hansson
  2026-01-28 10:33 ` [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Ulf Hansson
  2 siblings, 1 reply; 13+ messages in thread
From: Kevin Hilman (TI) @ 2026-01-21  1:54 UTC (permalink / raw)
  To: Rafael J. Wysocki, Ulf Hansson, linux-pm; +Cc: Dhruva Gole, linux-kernel

In addition to checking CPU latency constraints when deciding whether
it is OK to power down a domain, also check the QoS latency
constraints of all devices in the domain and use them when determining
the final latency constraint for the domain.

Since cpu_system_power_down_ok() is used for system-wide suspend, the
per-device constraints are only relevant if the LATENCY_SYS QoS flag
is set.

Because this flag implies the latency constraint only applies to
system-wide suspend, also check the flag in
dev_update_qos_constraint(). If it is set, then the constraint is not
relevant for runtime PM decisions.

Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
---
 drivers/pmdomain/governor.c | 42 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 41 insertions(+), 1 deletion(-)

diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c
index 96737abbb496..03802a859a78 100644
--- a/drivers/pmdomain/governor.c
+++ b/drivers/pmdomain/governor.c
@@ -31,6 +31,8 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
 		constraint_ns = td ? td->effective_constraint_ns :
 				PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
 	} else {
+		enum pm_qos_flags_status flag_status;
+
 		/*
 		 * The child is not in a domain and there's no info on its
 		 * suspend/resume latencies, so assume them to be negligible and
@@ -38,7 +40,14 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
 		 * known at this point anyway).
 		 */
 		constraint_ns = dev_pm_qos_read_value(dev, DEV_PM_QOS_RESUME_LATENCY);
-		constraint_ns *= NSEC_PER_USEC;
+		flag_status = dev_pm_qos_flags(dev, PM_QOS_FLAG_LATENCY_SYS);
+		if ((constraint_ns != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
+			    (flag_status == PM_QOS_FLAGS_ALL)) {
+			dev_dbg_once(dev, "resume-latency only for system-wide.  Ignoring.\n");
+			constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
+		} else {
+			constraint_ns *= NSEC_PER_USEC;
+		}
 	}
 
 	if (constraint_ns < *constraint_ns_p)
@@ -430,12 +439,43 @@ static bool cpu_system_power_down_ok(struct dev_pm_domain *pd)
 	s64 constraint_ns = cpu_wakeup_latency_qos_limit() * NSEC_PER_USEC;
 	struct generic_pm_domain *genpd = pd_to_genpd(pd);
 	int state_idx = genpd->state_count - 1;
+	struct pm_domain_data *pdd;
+	s32 min_dev_latency = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
+	s64 min_dev_latency_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
+	struct gpd_link *link;
 
 	if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) {
 		genpd->state_idx = state_idx;
 		return true;
 	}
 
+	list_for_each_entry(link, &genpd->parent_links, parent_node) {
+		struct generic_pm_domain *child_pd = link->child;
+
+		list_for_each_entry(pdd, &child_pd->dev_list, list_node) {
+			enum pm_qos_flags_status flag_status;
+			s32 dev_latency;
+
+			dev_latency = dev_pm_qos_read_value(pdd->dev, DEV_PM_QOS_RESUME_LATENCY);
+			flag_status = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_LATENCY_SYS);
+			if ((dev_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
+			    (flag_status == PM_QOS_FLAGS_ALL)) {
+				dev_dbg(pdd->dev,
+					"in domain %s, has QoS system-wide resume latency=%d\n",
+					child_pd->name, dev_latency);
+				if (dev_latency < min_dev_latency)
+					min_dev_latency = dev_latency;
+			}
+		}
+	}
+
+	/* If device latency < CPU wakeup latency, use it instead */
+	if (min_dev_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) {
+		min_dev_latency_ns = min_dev_latency * NSEC_PER_USEC;
+		if (min_dev_latency_ns < constraint_ns)
+			constraint_ns = min_dev_latency_ns;
+	}
+
 	/* Find the deepest state for the latency constraint. */
 	while (state_idx >= 0) {
 		s64 latency_ns = genpd->states[state_idx].power_off_latency_ns +

-- 
2.51.0



* Re: [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide
  2026-01-21  1:54 ` [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide Kevin Hilman (TI)
@ 2026-01-27 16:20   ` Rafael J. Wysocki
  2026-01-28 10:25   ` Ulf Hansson
  1 sibling, 0 replies; 13+ messages in thread
From: Rafael J. Wysocki @ 2026-01-27 16:20 UTC (permalink / raw)
  To: Kevin Hilman (TI)
  Cc: Rafael J. Wysocki, Ulf Hansson, linux-pm, Dhruva Gole,
	linux-kernel

On Wed, Jan 21, 2026 at 2:54 AM Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>
> By default, the QoS resume latency currenly only applied to runtime PM
> decisions.
>
> Add new PM_QOS_FLAG_LATENCY_SYS flag to indicate that the
> resume latency QoS constraint should be applied to system-wide
> PM, and *not* runtime PM.
>
> Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>

This is fine with me, so

Acked-by: Rafael J. Wysocki (Intel) <rafael@kernel.org>

Or do you want me to pick it up?

> ---
>  drivers/base/power/sysfs.c | 27 +++++++++++++++++++++++++++
>  include/linux/pm_qos.h     |  2 ++
>  2 files changed, 29 insertions(+)
>
> diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
> index 13b31a3adc77..9136c13c17e4 100644
> --- a/drivers/base/power/sysfs.c
> +++ b/drivers/base/power/sysfs.c
> @@ -316,6 +316,32 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
>
>  static DEVICE_ATTR_RW(pm_qos_no_power_off);
>
> +static ssize_t pm_qos_latency_sys_show(struct device *dev,
> +                                      struct device_attribute *attr,
> +                                      char *buf)
> +{
> +       return sysfs_emit(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
> +                                         & PM_QOS_FLAG_LATENCY_SYS));
> +}
> +
> +static ssize_t pm_qos_latency_sys_store(struct device *dev,
> +                                        struct device_attribute *attr,
> +                                        const char *buf, size_t n)
> +{
> +       int ret;
> +
> +       if (kstrtoint(buf, 0, &ret))
> +               return -EINVAL;
> +
> +       if (ret != 0 && ret != 1)
> +               return -EINVAL;
> +
> +       ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_LATENCY_SYS, ret);
> +       return ret < 0 ? ret : n;
> +}
> +
> +static DEVICE_ATTR_RW(pm_qos_latency_sys);
> +
>  #ifdef CONFIG_PM_SLEEP
>  static const char _enabled[] = "enabled";
>  static const char _disabled[] = "disabled";
> @@ -681,6 +707,7 @@ static const struct attribute_group pm_qos_latency_tolerance_attr_group = {
>
>  static struct attribute *pm_qos_flags_attrs[] = {
>         &dev_attr_pm_qos_no_power_off.attr,
> +       &dev_attr_pm_qos_latency_sys.attr,
>         NULL,
>  };
>  static const struct attribute_group pm_qos_flags_attr_group = {
> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
> index 6cea4455f867..aededda52b6b 100644
> --- a/include/linux/pm_qos.h
> +++ b/include/linux/pm_qos.h
> @@ -37,6 +37,8 @@ enum pm_qos_flags_status {
>  #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1)
>
>  #define PM_QOS_FLAG_NO_POWER_OFF       (1 << 0)
> +/* latency value applies to system-wide suspend/s2idle */
> +#define PM_QOS_FLAG_LATENCY_SYS                (2 << 0)
>
>  enum pm_qos_type {
>         PM_QOS_UNITIALIZED,
>
> --
> 2.51.0
>


* Re: [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide
  2026-01-21  1:54 ` [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide Kevin Hilman (TI)
  2026-01-27 16:20   ` Rafael J. Wysocki
@ 2026-01-28 10:25   ` Ulf Hansson
  1 sibling, 0 replies; 13+ messages in thread
From: Ulf Hansson @ 2026-01-28 10:25 UTC (permalink / raw)
  To: Kevin Hilman (TI); +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>
> By default, the QoS resume latency currenly only applied to runtime PM
> decisions.
>
> Add new PM_QOS_FLAG_LATENCY_SYS flag to indicate that the
> resume latency QoS constraint should be applied to system-wide
> PM, and *not* runtime PM.

What if we need latency constraints to be applied for both runtime PM
and system-wide PM? Can that be done?

>
> Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
> ---
>  drivers/base/power/sysfs.c | 27 +++++++++++++++++++++++++++
>  include/linux/pm_qos.h     |  2 ++
>  2 files changed, 29 insertions(+)
>
> diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
> index 13b31a3adc77..9136c13c17e4 100644
> --- a/drivers/base/power/sysfs.c
> +++ b/drivers/base/power/sysfs.c
> @@ -316,6 +316,32 @@ static ssize_t pm_qos_no_power_off_store(struct device *dev,
>
>  static DEVICE_ATTR_RW(pm_qos_no_power_off);
>
> +static ssize_t pm_qos_latency_sys_show(struct device *dev,
> +                                      struct device_attribute *attr,
> +                                      char *buf)
> +{
> +       return sysfs_emit(buf, "%d\n", !!(dev_pm_qos_requested_flags(dev)
> +                                         & PM_QOS_FLAG_LATENCY_SYS));
> +}
> +
> +static ssize_t pm_qos_latency_sys_store(struct device *dev,
> +                                        struct device_attribute *attr,
> +                                        const char *buf, size_t n)
> +{
> +       int ret;
> +
> +       if (kstrtoint(buf, 0, &ret))
> +               return -EINVAL;
> +
> +       if (ret != 0 && ret != 1)
> +               return -EINVAL;
> +
> +       ret = dev_pm_qos_update_flags(dev, PM_QOS_FLAG_LATENCY_SYS, ret);
> +       return ret < 0 ? ret : n;
> +}
> +
> +static DEVICE_ATTR_RW(pm_qos_latency_sys);

Shouldn't this code be moved below, inside the #ifdef CONFIG_PM_SLEEP?

I also wonder if "pm_qos_latency_sys" may be slightly too short to
allow the user to understand what it's used for. Perhaps we should
rename it to something along the lines of
"pm_qos_latency_wakeup_enabled". Just a thought, no strong opinions.

> +
>  #ifdef CONFIG_PM_SLEEP
>  static const char _enabled[] = "enabled";
>  static const char _disabled[] = "disabled";
> @@ -681,6 +707,7 @@ static const struct attribute_group pm_qos_latency_tolerance_attr_group = {
>
>  static struct attribute *pm_qos_flags_attrs[] = {
>         &dev_attr_pm_qos_no_power_off.attr,
> +       &dev_attr_pm_qos_latency_sys.attr,
>         NULL,
>  };
>  static const struct attribute_group pm_qos_flags_attr_group = {
> diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
> index 6cea4455f867..aededda52b6b 100644
> --- a/include/linux/pm_qos.h
> +++ b/include/linux/pm_qos.h
> @@ -37,6 +37,8 @@ enum pm_qos_flags_status {
>  #define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT (-1)
>
>  #define PM_QOS_FLAG_NO_POWER_OFF       (1 << 0)
> +/* latency value applies to system-wide suspend/s2idle */
> +#define PM_QOS_FLAG_LATENCY_SYS                (2 << 0)
>
>  enum pm_qos_type {
>         PM_QOS_UNITIALIZED,
>
> --
> 2.51.0
>

What about documentation?

Kind regards
Uffe


* Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM
  2026-01-21  1:54 [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Kevin Hilman (TI)
  2026-01-21  1:54 ` [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide Kevin Hilman (TI)
  2026-01-21  1:54 ` [PATCH 2/2] pmdomain: add support for system-wide resume latency constraints Kevin Hilman (TI)
@ 2026-01-28 10:33 ` Ulf Hansson
  2026-01-28 23:51   ` Kevin Hilman
  2 siblings, 1 reply; 13+ messages in thread
From: Ulf Hansson @ 2026-01-28 10:33 UTC (permalink / raw)
  To: Kevin Hilman (TI); +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>
> Currently QoS resume latencies are only considered for runtime PM
> transitions of pmdomains, which remains the default.
>
> In order to also support QoS resume latencies during system-wide PM,
> add a new flag to indicate a resume latency should be used for
> system-wide PM *instead of* runtime PM.
>
> For example, by doing this:
>
>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
>
> dev0 now has a resume latency of 500000 usec for runtime PM
> transitions.
>
> Then, if the new flag is also set:
>
>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
>
> That 500000 usec delay now applies to system-wide PM (and not to
> runtime PM).
>
> If a user requires a different latency value for system-wide PM
> compared to runtime PM, then the runtime PM value can be set for
> normal operations, and the system-wide value (and flag) can be set by
> userspace before suspend, and the runtime PM value can be restored
> after resume.

That sounds complicated for user-space to manage - and causes churn
during every suspend/resume cycle. Why don't we just add a new latency
value instead, one that applies to both runtime PM and system-wide PM,
similar and consistent to what we did for CPU QoS?

Kind regards
Uffe

>
> To: Rafael J. Wysocki <rafael@kernel.org>
> To: Ulf Hansson <ulf.hansson@linaro.org>
> To: linux-pm@vger.kernel.org
>
> Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
> ---
> Kevin Hilman (TI) (2):
>       PM / QoS: add flag to indicate latency applies system-wide
>       pmdommain: add support system-wide resume latency constraints
>
>  drivers/base/power/sysfs.c  | 27 +++++++++++++++++++++++++++
>  drivers/pmdomain/governor.c | 42 +++++++++++++++++++++++++++++++++++++++++-
>  include/linux/pm_qos.h      |  2 ++
>  3 files changed, 70 insertions(+), 1 deletion(-)
> ---
> base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
> change-id: 20260120-topic-lpm-pmdomain-device-constraints-e5e78ce48502
>
> Best regards,
> --
> Kevin Hilman (TI) <khilman@baylibre.com>
>


* Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM
  2026-01-28 10:33 ` [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Ulf Hansson
@ 2026-01-28 23:51   ` Kevin Hilman
  2026-01-29 11:04     ` Ulf Hansson
  0 siblings, 1 reply; 13+ messages in thread
From: Kevin Hilman @ 2026-01-28 23:51 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

Hi Ulf,

Ulf Hansson <ulf.hansson@linaro.org> writes:

> On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>>
>> Currently QoS resume latencies are only considered for runtime PM
>> transitions of pmdomains, which remains the default.
>>
>> In order to also support QoS resume latencies during system-wide PM,
>> add a new flag to indicate a resume latency should be used for
>> system-wide PM *instead of* runtime PM.
>>
>> For example, by doing this:
>>
>>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
>>
>> dev0 now has a resume latency of 500000 usec for runtime PM
>> transitions.
>>
>> Then, if the new flag is also set:
>>
>>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
>>
>> That 500000 usec delay now applies to system-wide PM (and not to
>> runtime PM).
>>
>> If a user requires a different latency value for system-wide PM
>> compared to runtime PM, then the runtime PM value can be set for
>> normal operations, and the system-wide value (and flag) can be set by
>> userspace before suspend, and the runtime PM value can be restored
>> after resume.
>
> That's sounds complicated for user-space to manage - and causes churns
> during every suspend/resume cycle. Why don't we just add a new latency
> value instead, that applies both to runtime PM and system-wide PM,
> similar and consistent to what we did for CPU QoS?

First, I don't think it will be very common to have different *device*
latency values between runtime PM and system PM, because the reasons for
device-specific wakeup latency will likely be the same in both cases, at
least for all the use cases I've thought about.  The only real
distinction is whether the latency should be applied to runtime or system-wide
PM, which the new flag provides.

Second, this doesn't have to be in userspace at all; that's just the
example I used to illustrate.  In fact, today not many latency
constraints are exposed to userspace, so this can be achieved via the
kernel API for setting latency values & flags, which I think is the more
likely use case anyway.  For example, a driver that is managing a
wakeup latency constraint could update its own constraint and set
the flag in its ->prepare() and ->complete() hooks if it needs separate
values for system-wide vs. runtime PM.
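
A minimal sketch of that driver-side flow, using the existing
dev_pm_qos_update_flags()/dev_pm_qos_update_request() APIs together
with the flag proposed in this series. The "my_drv" names, the
latency values, and the assumption that the request was added with
dev_pm_qos_add_request() at probe time are all illustrative, not part
of the posted patches:

```c
#include <linux/device.h>
#include <linux/pm_qos.h>

struct my_drv {
	struct device *dev;
	/* Added at probe with dev_pm_qos_add_request(dev, &req,
	 * DEV_PM_QOS_RESUME_LATENCY, 100000); holds the runtime PM value.
	 */
	struct dev_pm_qos_request req;
};

static int my_drv_prepare(struct device *dev)
{
	struct my_drv *drv = dev_get_drvdata(dev);

	/* Before suspend: switch to a suspend-specific latency and mark
	 * the constraint as applying to system-wide PM only.
	 */
	dev_pm_qos_update_request(&drv->req, 500000);
	return dev_pm_qos_update_flags(dev, PM_QOS_FLAG_LATENCY_SYS, true);
}

static void my_drv_complete(struct device *dev)
{
	struct my_drv *drv = dev_get_drvdata(dev);

	/* After resume: clear the flag and restore the runtime PM value. */
	dev_pm_qos_update_flags(dev, PM_QOS_FLAG_LATENCY_SYS, false);
	dev_pm_qos_update_request(&drv->req, 100000);
}
```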

Third, adding a new QoS value for this involves a bunch of new code that
is basically copy/paste of the current latency code.  That includes APIs
for

  - sysfs interface
  - notifiers (add, remove)
  - read/add/update value adds a new type
  - expose value to userspace (becomes ABI)
  - tolerance
  
I actually went down this route first, and realized it would be lots
of duplicated code for a use case that we're not even sure exists, so I
found the flag approach to be much more straightforward for the
use cases at hand.

Kevin


* Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM
  2026-01-28 23:51   ` Kevin Hilman
@ 2026-01-29 11:04     ` Ulf Hansson
  2026-01-29 18:29       ` Kevin Hilman
  0 siblings, 1 reply; 13+ messages in thread
From: Ulf Hansson @ 2026-01-29 11:04 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

On Thu, 29 Jan 2026 at 00:51, Kevin Hilman <khilman@baylibre.com> wrote:
>
> Hi Ulf,
>
> Ulf Hansson <ulf.hansson@linaro.org> writes:
>
> > On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
> >>
> >> Currently QoS resume latencies are only considered for runtime PM
> >> transitions of pmdomains, which remains the default.
> >>
> >> In order to also support QoS resume latencies during system-wide PM,
> >> add a new flag to indicate a resume latency should be used for
> >> system-wide PM *instead of* runtime PM.
> >>
> >> For example, by doing this:
> >>
> >>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
> >>
> >> dev0 now has a resume latency of 500000 usec for runtime PM
> >> transitions.
> >>
> >> Then, if the new flag is also set:
> >>
> >>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
> >>
> >> That 500000 usec delay now applies to system-wide PM (and not to
> >> runtime PM).
> >>
> >> If a user requires a different latency value for system-wide PM
> >> compared to runtime PM, then the runtime PM value can be set for
> >> normal operations, and the system-wide value (and flag) can be set by
> >> userspace before suspend, and the runtime PM value can be restored
> >> after resume.
> >
> > That's sounds complicated for user-space to manage - and causes churns
> > during every suspend/resume cycle. Why don't we just add a new latency
> > value instead, that applies both to runtime PM and system-wide PM,
> > similar and consistent to what we did for CPU QoS?
>
> First, I don't think it will be very common to have different *device*
> latency values between runtime PM and system PM, because the reasons for
> device-specific wakeup latency will likely be the same in both cases, at
> least for all the usecases I've thought about.  The only real distiction
> being whether the latency should be applied to runtime or system-wide
> PM, which the new flag provides.
>
> Second, this doesn't have to be in userspace at all, that's just the
> example I used to illustrate.  In fact, today not many latency
> constraints are exposed to userspace, so this can be acheived by the
> kernel API for setting latency values & flags, which I think is the more
> likely usecase anyways.  For example, for a driver that is managing a
> wakeup latency constraint, it could update it's own constraint and set
> the flag in it's ->prepare() and ->complete() hook if it needs separate
> values for system-wide vs. runtime PM.

Right, as long as the use cases can be managed by the kernel itself,
this should be fine. So, I guess the question is whether we should
consider use cases that require user-space involvement at this point.
Also note, patch1 does expose a new QoS sysfs file, to allow user
space to manage the new QoS flag - so this becomes ABI.

>
> Third, adding a new QoS value for this involves a bunch of new code that
> is basically copy/paste of the current latency code.  That includes APIs
> for
>
>   - sysfs interface
>   - notifiers (add, remove)
>   - read/add/update value adds a new type
>   - expose value to userspace (becomes ABI)
>   - tolerance
>
> I actually went down this route first, and realized this would be lots
> of duplicated code for a usecase that we're not even sure exists, so I
> found the flag approach to be much more straight forward for the
> usecases at hand.

I understand your concern and I agree!

However, my main issue is the user-space ABI part. Is the QoS flag
that patch1 exposes future-proof enough when considering use cases
that need to be managed by user space? In my opinion, it is not.

Kind regards
Uffe


* Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM
  2026-01-29 11:04     ` Ulf Hansson
@ 2026-01-29 18:29       ` Kevin Hilman
  2026-01-30 11:10         ` Ulf Hansson
  0 siblings, 1 reply; 13+ messages in thread
From: Kevin Hilman @ 2026-01-29 18:29 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

Ulf Hansson <ulf.hansson@linaro.org> writes:

> On Thu, 29 Jan 2026 at 00:51, Kevin Hilman <khilman@baylibre.com> wrote:
>>
>> Hi Ulf,
>>
>> Ulf Hansson <ulf.hansson@linaro.org> writes:
>>
>> > On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>> >>
>> >> Currently QoS resume latencies are only considered for runtime PM
>> >> transitions of pmdomains, which remains the default.
>> >>
>> >> In order to also support QoS resume latencies during system-wide PM,
>> >> add a new flag to indicate a resume latency should be used for
>> >> system-wide PM *instead of* runtime PM.
>> >>
>> >> For example, by doing this:
>> >>
>> >>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
>> >>
>> >> dev0 now has a resume latency of 500000 usec for runtime PM
>> >> transitions.
>> >>
>> >> Then, if the new flag is also set:
>> >>
>> >>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
>> >>
>> >> That 500000 usec delay now applies to system-wide PM (and not to
>> >> runtime PM).
>> >>
>> >> If a user requires a different latency value for system-wide PM
>> >> compared to runtime PM, then the runtime PM value can be set for
>> >> normal operations, and the system-wide value (and flag) can be set by
>> >> userspace before suspend, and the runtime PM value can be restored
>> >> after resume.
>> >
>> > That's sounds complicated for user-space to manage - and causes churns
>> > during every suspend/resume cycle. Why don't we just add a new latency
>> > value instead, that applies both to runtime PM and system-wide PM,
>> > similar and consistent to what we did for CPU QoS?
>>
>> First, I don't think it will be very common to have different *device*
>> latency values between runtime PM and system PM, because the reasons for
>> device-specific wakeup latency will likely be the same in both cases, at
>> least for all the usecases I've thought about.  The only real distiction
>> being whether the latency should be applied to runtime or system-wide
>> PM, which the new flag provides.
>>
>> Second, this doesn't have to be in userspace at all, that's just the
>> example I used to illustrate.  In fact, today not many latency
>> constraints are exposed to userspace, so this can be acheived by the
>> kernel API for setting latency values & flags, which I think is the more
>> likely usecase anyways.  For example, for a driver that is managing a
>> wakeup latency constraint, it could update it's own constraint and set
>> the flag in it's ->prepare() and ->complete() hook if it needs separate
>> values for system-wide vs. runtime PM.
>
> Right, as long as the use cases can be managed by the kernel itself,
> then this should be fine. So, I guess the question is, if we should
> consider use-cases that requires user space involvement at this point?
>
> Also note, patch1 do exposes a new QoS sysfs file, to allow user space
> to manage the new QoS flag - so this becomes ABI.
>
>>
>> Third, adding a new QoS value for this involves a bunch of new code that
>> is basically copy/paste of the current latency code.  That includes APIs
>> for
>>
>>   - sysfs interface
>>   - notifiers (add, remove)
>>   - read/add/update value adds a new type
>>   - expose value to userspace (becomes ABI)
>>   - tolerance
>>
>> I actually went down this route first, and realized this would be lots
>> of duplicated code for a usecase that we're not even sure exists, so I
>> found the flag approach to be much more straight forward for the
>> usecases at hand.
>
> I understand your concern and I agree!
>
> However, my main issue is the user space ABI part. Is the QoS flag,
> that patch1 exposes, future proof enough when considering user cases
> that needs to be managed by user space? In my opinion, I don't think
> so.

Well, I think we have the same problem with the alternative proposal
(creating a new QoS value), because we simply don't know the use cases
that need to be managed by userspace.

That being said, I'm fine with dropping [PATCH 1/2] for now and
managing this from the kernel API while we start using this feature,
and seeing whether it needs to be used from userspace at all.

Kevin



* Re: [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM
  2026-01-29 18:29       ` Kevin Hilman
@ 2026-01-30 11:10         ` Ulf Hansson
  0 siblings, 0 replies; 13+ messages in thread
From: Ulf Hansson @ 2026-01-30 11:10 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

On Thu, 29 Jan 2026 at 19:29, Kevin Hilman <khilman@baylibre.com> wrote:
>
> Ulf Hansson <ulf.hansson@linaro.org> writes:
>
> > On Thu, 29 Jan 2026 at 00:51, Kevin Hilman <khilman@baylibre.com> wrote:
> >>
> >> Hi Ulf,
> >>
> >> Ulf Hansson <ulf.hansson@linaro.org> writes:
> >>
> >> > On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
> >> >>
> >> >> Currently QoS resume latencies are only considered for runtime PM
> >> >> transitions of pmdomains, which remains the default.
> >> >>
> >> >> In order to also support QoS resume latencies during system-wide PM,
> >> >> add a new flag to indicate a resume latency should be used for
> >> >> system-wide PM *instead of* runtime PM.
> >> >>
> >> >> For example, by doing this:
> >> >>
> >> >>    # echo 500000 > /sys/devices/.../<dev0>/power/pm_qos_resume_latency_us
> >> >>
> >> >> dev0 now has a resume latency of 500000 usec for runtime PM
> >> >> transitions.
> >> >>
> >> >> Then, if the new flag is also set:
> >> >>
> >> >>    # echo 1 > /sys/devices/.../<dev0>/power/pm_qos_latency_sys
> >> >>
> >> >> That 500000 usec delay now applies to system-wide PM (and not to
> >> >> runtime PM).
> >> >>
> >> >> If a user requires a different latency value for system-wide PM
> >> >> compared to runtime PM, then the runtime PM value can be set for
> >> >> normal operations, and the system-wide value (and flag) can be set by
> >> >> userspace before suspend, and the runtime PM value can be restored
> >> >> after resume.
> >> >
> >> > That sounds complicated for user-space to manage - and causes churn
> >> > during every suspend/resume cycle. Why don't we just add a new latency
> >> > value instead, that applies both to runtime PM and system-wide PM,
> >> > similar to and consistent with what we did for CPU QoS?
> >>
> >> First, I don't think it will be very common to have different *device*
> >> latency values between runtime PM and system PM, because the reasons for
> >> device-specific wakeup latency will likely be the same in both cases, at
> >> least for all the usecases I've thought about.  The only real distinction
> >> being whether the latency should be applied to runtime or system-wide
> >> PM, which the new flag provides.
> >>
> >> Second, this doesn't have to be in userspace at all, that's just the
> >> example I used to illustrate.  In fact, today not many latency
> >> constraints are exposed to userspace, so this can be achieved by the
> >> kernel API for setting latency values & flags, which I think is the more
> >> likely usecase anyways.  For example, for a driver that is managing a
> >> wakeup latency constraint, it could update its own constraint and set
> >> the flag in its ->prepare() and ->complete() hooks if it needs separate
> >> values for system-wide vs. runtime PM.
> >
> > Right, as long as the use cases can be managed by the kernel itself,
> > then this should be fine. So, I guess the question is whether we should
> > consider use-cases that require user space involvement at this point?
> >
> > Also note, patch1 does expose a new QoS sysfs file, to allow user space
> > to manage the new QoS flag - so this becomes ABI.
> >
> >>
> >> Third, adding a new QoS value for this involves a bunch of new code that
> >> is basically copy/paste of the current latency code.  That includes APIs
> >> for
> >>
> >>   - sysfs interface
> >>   - notifiers (add, remove)
> >>   - read/add/update value adds a new type
> >>   - expose value to userspace (becomes ABI)
> >>   - tolerance
> >>
> >> I actually went down this route first, and realized this would be lots
> >> of duplicated code for a usecase that we're not even sure exists, so I
> >> found the flag approach to be much more straightforward for the
> >> usecases at hand.
> >
> > I understand your concern and I agree!
> >
> > However, my main issue is the user space ABI part. Is the QoS flag
> > that patch1 exposes future-proof enough when considering use cases
> > that need to be managed by user space? In my opinion, I don't think
> > so.
>
> Well, I think we have the same problem with the alternative proposal
> (creating a new QoS value) because we simply don't know the use-cases
> that need to be managed by userspace.

I agree!

>
> That being said, I'm fine if we drop [PATCH 1/2] for now, and just manage
> this from the kernel API while we start using this feature and see
> whether it needs to be used from userspace at all.

Seems reasonable to me too; let's revisit the user space part later on.

Still, we need the QoS flag to be defined in some patch, but I guess it
can be part of patch2 as well. Anyway, I will have a look at patch2
shortly and let you know what I think.

Kind regards
Uffe


* Re: [PATCH 2/2] pmdommain: add support system-wide resume latency constraints
  2026-01-21  1:54 ` [PATCH 2/2] pmdommain: add support system-wide resume latency constraints Kevin Hilman (TI)
@ 2026-02-02 11:49   ` Ulf Hansson
  2026-02-03 23:19     ` Kevin Hilman
  0 siblings, 1 reply; 13+ messages in thread
From: Ulf Hansson @ 2026-02-02 11:49 UTC (permalink / raw)
  To: Kevin Hilman (TI); +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>
> In addition to checking for CPU latency constraints when deciding if
> it is OK to power down a domain, also check for QoS latency constraints in
> all devices of a domain and use that in determining the final latency
> constraint to use for the domain.
>
> Since cpu_system_power_down_ok() is used for system-wide suspend, the
> per-device constraints are only relevant if the LATENCY_SYS QoS flag
> is set.
>
> Because this flag implies the latency constraint only applies to
> system-wide suspend, also check the flag in
> dev_update_qos_constraint(). If it is set, then the constraint is not
> relevant for runtime PM decisions.
>
> Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
> ---
>  drivers/pmdomain/governor.c | 42 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 41 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c
> index 96737abbb496..03802a859a78 100644
> --- a/drivers/pmdomain/governor.c
> +++ b/drivers/pmdomain/governor.c
> @@ -31,6 +31,8 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
>                 constraint_ns = td ? td->effective_constraint_ns :
>                                 PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
>         } else {
> +               enum pm_qos_flags_status flag_status;
> +
>                 /*
>                  * The child is not in a domain and there's no info on its
>                  * suspend/resume latencies, so assume them to be negligible and
> @@ -38,7 +40,14 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
>                  * known at this point anyway).
>                  */
>                 constraint_ns = dev_pm_qos_read_value(dev, DEV_PM_QOS_RESUME_LATENCY);
> -               constraint_ns *= NSEC_PER_USEC;
> +               flag_status = dev_pm_qos_flags(dev, PM_QOS_FLAG_LATENCY_SYS);
> +               if ((constraint_ns != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
> +                           (flag_status == PM_QOS_FLAGS_ALL)) {
> +                       dev_dbg_once(dev, "resume-latency only for system-wide.  Ignoring.\n");
> +                       constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
> +               } else {
> +                       constraint_ns *= NSEC_PER_USEC;
> +               }
>         }

dev_update_qos_constraint() is called only to take into account the
QoS constraints for the device's *children*.

It looks like we should also be checking the PM_QOS_FLAG_LATENCY_SYS
flag in default_suspend_ok() for the device in question.

That said, there seem to be more places in the kernel where we should
check the PM_QOS_FLAG_LATENCY_SYS flag, like in cpu_power_down_ok(),
cpuidle_governor_latency_req(), etc.

>
>         if (constraint_ns < *constraint_ns_p)
> @@ -430,12 +439,43 @@ static bool cpu_system_power_down_ok(struct dev_pm_domain *pd)
>         s64 constraint_ns = cpu_wakeup_latency_qos_limit() * NSEC_PER_USEC;
>         struct generic_pm_domain *genpd = pd_to_genpd(pd);
>         int state_idx = genpd->state_count - 1;
> +       struct pm_domain_data *pdd;
> +       s32 min_dev_latency = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
> +       s64 min_dev_latency_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
> +       struct gpd_link *link;
>
>         if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) {
>                 genpd->state_idx = state_idx;
>                 return true;
>         }
>
> +       list_for_each_entry(link, &genpd->parent_links, parent_node) {
> +               struct generic_pm_domain *child_pd = link->child;
> +
> +               list_for_each_entry(pdd, &child_pd->dev_list, list_node) {
> +                       enum pm_qos_flags_status flag_status;
> +                       s32 dev_latency;
> +
> +                       dev_latency = dev_pm_qos_read_value(pdd->dev, DEV_PM_QOS_RESUME_LATENCY);
> +                       flag_status = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_LATENCY_SYS);
> +                       if ((dev_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
> +                           (flag_status == PM_QOS_FLAGS_ALL)) {
> +                               dev_dbg(pdd->dev,
> +                                       "in domain %s, has QoS system-wide resume latency=%d\n",
> +                                       child_pd->name, dev_latency);
> +                               if (dev_latency < min_dev_latency)
> +                                       min_dev_latency = dev_latency;
> +                       }
> +               }

cpu_system_power_down_ok() is at the moment only used for CPU PM
domains. If the intent is to take into account QoS constraints for
CPUs, we should check the QoS value for CPU devices as well (by using
get_cpu_device(); see cpu_power_down_ok()). For non-CPU devices
something along the lines of the above makes sense to me.

Although, please note, the above code is just walking through the
devices in the child-domains; there is nothing checking the devices
that belong to the current/parent-domain. Nor are we taking
child-devices into account.

> +       }
> +
> +       /* If device latency < CPU wakeup latency, use it instead */
> +       if (min_dev_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) {
> +               min_dev_latency_ns = min_dev_latency * NSEC_PER_USEC;
> +               if (min_dev_latency_ns < constraint_ns)
> +                       constraint_ns = min_dev_latency_ns;
> +       }
> +
>         /* Find the deepest state for the latency constraint. */
>         while (state_idx >= 0) {
>                 s64 latency_ns = genpd->states[state_idx].power_off_latency_ns +
>
> --
> 2.51.0
>

Kind regards
Uffe


* Re: [PATCH 2/2] pmdommain: add support system-wide resume latency constraints
  2026-02-02 11:49   ` Ulf Hansson
@ 2026-02-03 23:19     ` Kevin Hilman
  2026-02-04 13:33       ` Ulf Hansson
  0 siblings, 1 reply; 13+ messages in thread
From: Kevin Hilman @ 2026-02-03 23:19 UTC (permalink / raw)
  To: Ulf Hansson; +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

Ulf Hansson <ulf.hansson@linaro.org> writes:

> On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
>>
>> In addition to checking for CPU latency constraints when deciding if
>> it is OK to power down a domain, also check for QoS latency constraints in
>> all devices of a domain and use that in determining the final latency
>> constraint to use for the domain.
>>
>> Since cpu_system_power_down_ok() is used for system-wide suspend, the
>> per-device constraints are only relevant if the LATENCY_SYS QoS flag
>> is set.
>>
>> Because this flag implies the latency constraint only applies to
>> system-wide suspend, also check the flag in
>> dev_update_qos_constraint(). If it is set, then the constraint is not
>> relevant for runtime PM decisions.
>>
>> Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
>> ---
>>  drivers/pmdomain/governor.c | 42 +++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 41 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c
>> index 96737abbb496..03802a859a78 100644
>> --- a/drivers/pmdomain/governor.c
>> +++ b/drivers/pmdomain/governor.c
>> @@ -31,6 +31,8 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
>>                 constraint_ns = td ? td->effective_constraint_ns :
>>                                 PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
>>         } else {
>> +               enum pm_qos_flags_status flag_status;
>> +
>>                 /*
>>                  * The child is not in a domain and there's no info on its
>>                  * suspend/resume latencies, so assume them to be negligible and
>> @@ -38,7 +40,14 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
>>                  * known at this point anyway).
>>                  */
>>                 constraint_ns = dev_pm_qos_read_value(dev, DEV_PM_QOS_RESUME_LATENCY);
>> -               constraint_ns *= NSEC_PER_USEC;
>> +               flag_status = dev_pm_qos_flags(dev, PM_QOS_FLAG_LATENCY_SYS);
>> +               if ((constraint_ns != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
>> +                           (flag_status == PM_QOS_FLAGS_ALL)) {
>> +                       dev_dbg_once(dev, "resume-latency only for system-wide.  Ignoring.\n");
>> +                       constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
>> +               } else {
>> +                       constraint_ns *= NSEC_PER_USEC;
>> +               }
>>         }
>
> dev_update_qos_constraint() is called only to take into account the
> QoS constraints for the device's *children*.
>
> It looks like we should also be checking the PM_QOS_FLAG_LATENCY_SYS
> flag in default_suspend_ok() for the device in question.
>
> That said, there seem to be more places in the kernel where we should
> check the PM_QOS_FLAG_LATENCY_SYS flag, like in cpu_power_down_ok(),
> cpuidle_governor_latency_req(), etc.

OK.  But now that we've agreed to drop the userspace interface for this,
I wonder whether the better approach is to consider the flag to mean that
the latency applies to runtime PM *and* system-wide PM.  Then, without
the flag set, the latency applies *only* to runtime PM.

That approach would keep the current default behavior, avoid adding
checks for this flag throughout the runtime PM code, and only require
checking for the flag in the system-wide PM paths.

>>         if (constraint_ns < *constraint_ns_p)
>> @@ -430,12 +439,43 @@ static bool cpu_system_power_down_ok(struct dev_pm_domain *pd)
>>         s64 constraint_ns = cpu_wakeup_latency_qos_limit() * NSEC_PER_USEC;
>>         struct generic_pm_domain *genpd = pd_to_genpd(pd);
>>         int state_idx = genpd->state_count - 1;
>> +       struct pm_domain_data *pdd;
>> +       s32 min_dev_latency = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
>> +       s64 min_dev_latency_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
>> +       struct gpd_link *link;
>>
>>         if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) {
>>                 genpd->state_idx = state_idx;
>>                 return true;
>>         }
>>
>> +       list_for_each_entry(link, &genpd->parent_links, parent_node) {
>> +               struct generic_pm_domain *child_pd = link->child;
>> +
>> +               list_for_each_entry(pdd, &child_pd->dev_list, list_node) {
>> +                       enum pm_qos_flags_status flag_status;
>> +                       s32 dev_latency;
>> +
>> +                       dev_latency = dev_pm_qos_read_value(pdd->dev, DEV_PM_QOS_RESUME_LATENCY);
>> +                       flag_status = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_LATENCY_SYS);
>> +                       if ((dev_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
>> +                           (flag_status == PM_QOS_FLAGS_ALL)) {
>> +                               dev_dbg(pdd->dev,
>> +                                       "in domain %s, has QoS system-wide resume latency=%d\n",
>> +                                       child_pd->name, dev_latency);
>> +                               if (dev_latency < min_dev_latency)
>> +                                       min_dev_latency = dev_latency;
>> +                       }
>> +               }
>
> cpu_system_power_down_ok() is at the moment only used for CPU PM
> domains. If the intent is to take into account QoS constraints for
> CPUs, we should check the QoS value for CPU devices as well (by using
> get_cpu_device(); see cpu_power_down_ok()). For non-CPU devices
> something along the lines of the above makes sense to me.
>
> Although, please note, the above code is just walking through the
> devices in the child-domains; there is nothing checking the devices
> that belong to the current/parent-domain.

Oops, yeah.  Good catch.

> Nor are we taking child-devices into account.

Indeed... double oops.  

This makes me wonder whether we have any helpers to iterate over every
device (and its children) in a domain (and its subdomains).

Kevin


* Re: [PATCH 2/2] pmdommain: add support system-wide resume latency constraints
  2026-02-03 23:19     ` Kevin Hilman
@ 2026-02-04 13:33       ` Ulf Hansson
  0 siblings, 0 replies; 13+ messages in thread
From: Ulf Hansson @ 2026-02-04 13:33 UTC (permalink / raw)
  To: Kevin Hilman; +Cc: Rafael J. Wysocki, linux-pm, Dhruva Gole, linux-kernel

On Wed, 4 Feb 2026 at 00:19, Kevin Hilman <khilman@baylibre.com> wrote:
>
> Ulf Hansson <ulf.hansson@linaro.org> writes:
>
> > On Wed, 21 Jan 2026 at 02:54, Kevin Hilman (TI) <khilman@baylibre.com> wrote:
> >>
> >> In addition to checking for CPU latency constraints when deciding if
> >> it is OK to power down a domain, also check for QoS latency constraints in
> >> all devices of a domain and use that in determining the final latency
> >> constraint to use for the domain.
> >>
> >> Since cpu_system_power_down_ok() is used for system-wide suspend, the
> >> per-device constraints are only relevant if the LATENCY_SYS QoS flag
> >> is set.
> >>
> >> Because this flag implies the latency constraint only applies to
> >> system-wide suspend, also check the flag in
> >> dev_update_qos_constraint(). If it is set, then the constraint is not
> >> relevant for runtime PM decisions.
> >>
> >> Signed-off-by: Kevin Hilman (TI) <khilman@baylibre.com>
> >> ---
> >>  drivers/pmdomain/governor.c | 42 +++++++++++++++++++++++++++++++++++++++++-
> >>  1 file changed, 41 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/pmdomain/governor.c b/drivers/pmdomain/governor.c
> >> index 96737abbb496..03802a859a78 100644
> >> --- a/drivers/pmdomain/governor.c
> >> +++ b/drivers/pmdomain/governor.c
> >> @@ -31,6 +31,8 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
> >>                 constraint_ns = td ? td->effective_constraint_ns :
> >>                                 PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
> >>         } else {
> >> +               enum pm_qos_flags_status flag_status;
> >> +
> >>                 /*
> >>                  * The child is not in a domain and there's no info on its
> >>                  * suspend/resume latencies, so assume them to be negligible and
> >> @@ -38,7 +40,14 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
> >>                  * known at this point anyway).
> >>                  */
> >>                 constraint_ns = dev_pm_qos_read_value(dev, DEV_PM_QOS_RESUME_LATENCY);
> >> -               constraint_ns *= NSEC_PER_USEC;
> >> +               flag_status = dev_pm_qos_flags(dev, PM_QOS_FLAG_LATENCY_SYS);
> >> +               if ((constraint_ns != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
> >> +                           (flag_status == PM_QOS_FLAGS_ALL)) {
> >> +                       dev_dbg_once(dev, "resume-latency only for system-wide.  Ignoring.\n");
> >> +                       constraint_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
> >> +               } else {
> >> +                       constraint_ns *= NSEC_PER_USEC;
> >> +               }
> >>         }
> >
> > dev_update_qos_constraint() is called only to take into account the
> > QoS constraints for the device's *children*.
> >
> > It looks like we should also be checking the PM_QOS_FLAG_LATENCY_SYS
> > flag in default_suspend_ok() for the device in question.
> >
> > That said, there seem to be more places in the kernel where we should
> > check the PM_QOS_FLAG_LATENCY_SYS flag, like in cpu_power_down_ok(),
> > cpuidle_governor_latency_req(), etc.
>
> OK.  But now that we've agreed to drop the userspace interface for this,
> I wonder whether the better approach is to consider the flag to mean that
> the latency applies to runtime PM *and* system-wide PM.  Then, without
> the flag set, the latency applies *only* to runtime PM.
>
> That approach would keep the current default behavior, avoid adding
> checks for this flag throughout the runtime PM code, and only require
> checking for the flag in the system-wide PM paths.

I agree with all of the above!

It would certainly make this less intrusive and it would also be more
consistent with what we did for CPU QoS.

>
> >>         if (constraint_ns < *constraint_ns_p)
> >> @@ -430,12 +439,43 @@ static bool cpu_system_power_down_ok(struct dev_pm_domain *pd)
> >>         s64 constraint_ns = cpu_wakeup_latency_qos_limit() * NSEC_PER_USEC;
> >>         struct generic_pm_domain *genpd = pd_to_genpd(pd);
> >>         int state_idx = genpd->state_count - 1;
> >> +       struct pm_domain_data *pdd;
> >> +       s32 min_dev_latency = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT;
> >> +       s64 min_dev_latency_ns = PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS;
> >> +       struct gpd_link *link;
> >>
> >>         if (!(genpd->flags & GENPD_FLAG_CPU_DOMAIN)) {
> >>                 genpd->state_idx = state_idx;
> >>                 return true;
> >>         }
> >>
> >> +       list_for_each_entry(link, &genpd->parent_links, parent_node) {
> >> +               struct generic_pm_domain *child_pd = link->child;
> >> +
> >> +               list_for_each_entry(pdd, &child_pd->dev_list, list_node) {
> >> +                       enum pm_qos_flags_status flag_status;
> >> +                       s32 dev_latency;
> >> +
> >> +                       dev_latency = dev_pm_qos_read_value(pdd->dev, DEV_PM_QOS_RESUME_LATENCY);
> >> +                       flag_status = dev_pm_qos_flags(pdd->dev, PM_QOS_FLAG_LATENCY_SYS);
> >> +                       if ((dev_latency != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) &&
> >> +                           (flag_status == PM_QOS_FLAGS_ALL)) {
> >> +                               dev_dbg(pdd->dev,
> >> +                                       "in domain %s, has QoS system-wide resume latency=%d\n",
> >> +                                       child_pd->name, dev_latency);
> >> +                               if (dev_latency < min_dev_latency)
> >> +                                       min_dev_latency = dev_latency;
> >> +                       }
> >> +               }
> >
> > cpu_system_power_down_ok() is at the moment only used for CPU PM
> > domains. If the intent is to take into account QoS constraints for
> > CPUs, we should check the QoS value for CPU devices as well (by using
> > get_cpu_device(); see cpu_power_down_ok()). For non-CPU devices
> > something along the lines of the above makes sense to me.
> >
> > Although, please note, the above code is just walking through the
> > devices in the child-domains; there is nothing checking the devices
> > that belong to the current/parent-domain.
>
> Oops, yeah.  Good catch.
>
> > Nor are we taking child-devices into account.
>
> Indeed... double oops.
>
> This makes me wonder whether we have any helpers to iterate over every
> device (and its children) in a domain (and its subdomains).

Unfortunately there aren't, but I think it's a good idea.

If you decide to add helpers for this, please define them in a new
header file internal to genpd, drivers/pmdomain/core.h, so they
don't become publicly available via include/linux/pm_domain.h.

Kind regards
Uffe


end of thread, other threads:[~2026-02-04 13:33 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-21  1:54 [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Kevin Hilman (TI)
2026-01-21  1:54 ` [PATCH 1/2] PM / QoS: add flag to indicate latency applies system-wide Kevin Hilman (TI)
2026-01-27 16:20   ` Rafael J. Wysocki
2026-01-28 10:25   ` Ulf Hansson
2026-01-21  1:54 ` [PATCH 2/2] pmdommain: add support system-wide resume latency constraints Kevin Hilman (TI)
2026-02-02 11:49   ` Ulf Hansson
2026-02-03 23:19     ` Kevin Hilman
2026-02-04 13:33       ` Ulf Hansson
2026-01-28 10:33 ` [PATCH 0/2] PM: QoS/pmdomains: support resume latencies for system-wide PM Ulf Hansson
2026-01-28 23:51   ` Kevin Hilman
2026-01-29 11:04     ` Ulf Hansson
2026-01-29 18:29       ` Kevin Hilman
2026-01-30 11:10         ` Ulf Hansson

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox