linux-pm.vger.kernel.org archive mirror
* [PATCH v4 0/2] Avoid spurious freezer wakeups
@ 2023-09-08 22:49 Elliot Berman
  2023-09-08 22:49 ` [PATCH v4 1/2] sched/core: Remove ifdeffery for saved_state Elliot Berman
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Elliot Berman @ 2023-09-08 22:49 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Rafael J. Wysocki, Pavel Machek
  Cc: Thomas Gleixner, kernel, linux-arm-msm, linux-kernel, linux-pm,
	Elliot Berman, Prakash Viswalingam

After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN are
always woken up on the thaw path. Prior to that commit, tasks could ask
the freezer to consider them "frozen enough" via freezer_do_not_count().
The commit replaced freezer_do_not_count() with a TASK_FREEZABLE state,
which allows the freezer to immediately mark the task as TASK_FROZEN
without waking it up. This is efficient for the suspend path, but on the
thaw path the task is always woken up, even if it didn't need to wake up
and goes straight back to its TASK_(UN)INTERRUPTIBLE state. Although
these tasks are capable of handling the wakeup, we can observe a
power/perf impact from the extra wakeup.

On Android, we observed that many tasks wait in the TASK_FREEZABLE state
(particularly because many of them are binder clients). We observed
nearly 4x the number of tasks woken and a corresponding linear increase
in latency and power consumption when thawing the system. The latency
increased from ~15ms to ~50ms.

Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
If the task was running before entering the TASK_FROZEN state
(__refrigerator()), or if the task received a wakeup for the saved
state, then the task is woken on thaw. The saved_state field from
PREEMPT_RT locks can be re-used because the freezer would not stomp on
the rtlock wait flow: TASK_RTLOCK_WAIT isn't considered freezable.

For testing purposes, these commands can help show how many tasks were
woken during thawing:

1. Setup:
   mkdir /sys/kernel/tracing/instances/freezer
   cd /sys/kernel/tracing/instances/freezer 
   echo 0 > tracing_on ; echo > trace
   echo power:suspend_resume > set_event
   echo 'enable_event:sched:sched_wakeup if action == "thaw_processes" && start == 1' > events/power/suspend_resume/trigger
   echo 'traceoff if action == "thaw_processes" && start == 0' > events/power/suspend_resume/trigger
   echo 1 > tracing_on

2. Let kernel go to suspend

3. After kernel's back up:
   cat /sys/kernel/tracing/instances/freezer/trace | grep sched_wakeup | grep -o "pid=[0-9]*" | sort -u | wc -l
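
The setup and counting steps above can also be wrapped into a small
script (a sketch; it assumes the same tracing instance path used in the
commands above and a kernel with event triggers enabled):

```shell
#!/bin/sh
# Sketch: create a tracing instance that records sched_wakeup events
# between the "thaw_processes" start/stop tracepoints, then count the
# unique PIDs woken during thaw. Run setup before suspending, then
# count_wakeups after resume.
T=/sys/kernel/tracing/instances/freezer

setup() {
	mkdir -p "$T"
	cd "$T" || exit 1
	echo 0 > tracing_on ; echo > trace
	echo power:suspend_resume > set_event
	echo 'enable_event:sched:sched_wakeup if action == "thaw_processes" && start == 1' \
		> events/power/suspend_resume/trigger
	echo 'traceoff if action == "thaw_processes" && start == 0' \
		> events/power/suspend_resume/trigger
	echo 1 > tracing_on
}

count_wakeups() {
	grep sched_wakeup "$T/trace" | grep -o "pid=[0-9]*" | sort -u | wc -l
}
```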

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
Changes in v4:
- Split into 2 commits
- Link to v3: https://lore.kernel.org/r/20230908-avoid-spurious-freezer-wakeups-v3-1-d49821fda04d@quicinc.com

Changes in v3:
- Remove #ifdeffery for saved_state in kernel/sched/core.c
- Link to v2: https://lore.kernel.org/r/20230830-avoid-spurious-freezer-wakeups-v2-1-8877245cdbdc@quicinc.com

Changes in v2:
- Rebase to v6.5 which includes commit 1c06918788e8 ("sched: Consider task_struct::saved_state in wait_task_inactive()")
  This allows us to remove the special jobctl handling on thaw.
- Link to v1: https://lore.kernel.org/r/20230828-avoid-spurious-freezer-wakeups-v1-1-8be8cf761472@quicinc.com

---
Elliot Berman (2):
      sched/core: Remove ifdeffery for saved_state
      freezer,sched: Use saved_state to reduce some spurious wakeups

 include/linux/sched.h |  2 --
 kernel/freezer.c      | 41 +++++++++++++++++++----------------------
 kernel/sched/core.c   | 40 +++++++++++++++++-----------------------
 3 files changed, 36 insertions(+), 47 deletions(-)
---
base-commit: 2dde18cd1d8fac735875f2e4987f11817cc0bc2c
change-id: 20230817-avoid-spurious-freezer-wakeups-9f8619680b3a

Best regards,
-- 
Elliot Berman <quic_eberman@quicinc.com>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v4 1/2] sched/core: Remove ifdeffery for saved_state
  2023-09-08 22:49 [PATCH v4 0/2] Avoid spurious freezer wakeups Elliot Berman
@ 2023-09-08 22:49 ` Elliot Berman
  2023-09-08 22:49 ` [PATCH v4 2/2] freezer,sched: Use saved_state to reduce some spurious wakeups Elliot Berman
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Elliot Berman @ 2023-09-08 22:49 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Rafael J. Wysocki, Pavel Machek
  Cc: Thomas Gleixner, kernel, linux-arm-msm, linux-kernel, linux-pm,
	Elliot Berman

In preparation for the freezer also using saved_state, remove the
CONFIG_PREEMPT_RT compilation guard around saved_state.

On the arm64 platform I tested, which did not have CONFIG_PREEMPT_RT,
there was no statistically significant deviation from applying this
patch.

Test methodology:

perf bench sched message -g 40 -l 40

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
 include/linux/sched.h |  2 --
 kernel/sched/core.c   | 17 +++--------------
 2 files changed, 3 insertions(+), 16 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 609bde814cb0..14194a62294f 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -745,10 +745,8 @@ struct task_struct {
 #endif
 	unsigned int			__state;
 
-#ifdef CONFIG_PREEMPT_RT
 	/* saved state for "spinlock sleepers" */
 	unsigned int			saved_state;
-#endif
 
 	/*
 	 * This begins the randomizable portion of task_struct. Only
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c52c2eba7c73..58921ab7442b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2219,31 +2219,21 @@ int __task_state_match(struct task_struct *p, unsigned int state)
 	if (READ_ONCE(p->__state) & state)
 		return 1;
 
-#ifdef CONFIG_PREEMPT_RT
 	if (READ_ONCE(p->saved_state) & state)
 		return -1;
-#endif
+
 	return 0;
 }
 
 static __always_inline
 int task_state_match(struct task_struct *p, unsigned int state)
 {
-#ifdef CONFIG_PREEMPT_RT
-	int match;
-
 	/*
 	 * Serialize against current_save_and_set_rtlock_wait_state() and
 	 * current_restore_rtlock_saved_state().
 	 */
-	raw_spin_lock_irq(&p->pi_lock);
-	match = __task_state_match(p, state);
-	raw_spin_unlock_irq(&p->pi_lock);
-
-	return match;
-#else
+	guard(raw_spinlock_irq)(&p->pi_lock);
 	return __task_state_match(p, state);
-#endif
 }
 
 /*
@@ -4053,7 +4043,6 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	*success = !!(match = __task_state_match(p, state));
 
-#ifdef CONFIG_PREEMPT_RT
 	/*
 	 * Saved state preserves the task state across blocking on
 	 * an RT lock.  If the state matches, set p::saved_state to
@@ -4069,7 +4058,7 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 	 */
 	if (match < 0)
 		p->saved_state = TASK_RUNNING;
-#endif
+
 	return match > 0;
 }
 

-- 
2.41.0



* [PATCH v4 2/2] freezer,sched: Use saved_state to reduce some spurious wakeups
  2023-09-08 22:49 [PATCH v4 0/2] Avoid spurious freezer wakeups Elliot Berman
  2023-09-08 22:49 ` [PATCH v4 1/2] sched/core: Remove ifdeffery for saved_state Elliot Berman
@ 2023-09-08 22:49 ` Elliot Berman
  2023-09-11 15:49 ` [PATCH v4 0/2] Avoid spurious freezer wakeups Peter Zijlstra
  2023-09-26 16:17 ` Carlos Llamas
  3 siblings, 0 replies; 10+ messages in thread
From: Elliot Berman @ 2023-09-08 22:49 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Rafael J. Wysocki, Pavel Machek
  Cc: Thomas Gleixner, kernel, linux-arm-msm, linux-kernel, linux-pm,
	Elliot Berman, Prakash Viswalingam

After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN are
always woken up on the thaw path. Prior to that commit, tasks could ask
the freezer to consider them "frozen enough" via freezer_do_not_count().
The commit replaced freezer_do_not_count() with a TASK_FREEZABLE state,
which allows the freezer to immediately mark the task as TASK_FROZEN
without waking it up. This is efficient for the suspend path, but on the
thaw path the task is always woken up, even if it didn't need to wake up
and goes straight back to its TASK_(UN)INTERRUPTIBLE state. Although
these tasks are capable of handling the wakeup, we can observe a
power/perf impact from the extra wakeup.

On Android, we observed that many tasks wait in the TASK_FREEZABLE state
(particularly because many of them are binder clients). We observed
nearly 4x the number of tasks woken and a corresponding linear increase
in latency and power consumption when thawing the system. The latency
increased from ~15ms to ~50ms.

Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
If the task was running before entering the TASK_FROZEN state
(__refrigerator()), or if the task received a wakeup for the saved
state, then the task is woken on thaw. The saved_state field from
PREEMPT_RT locks can be re-used because the freezer would not stomp on
the rtlock wait flow: TASK_RTLOCK_WAIT isn't considered freezable.

Reported-by: Prakash Viswalingam <quic_prakashv@quicinc.com>
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
 kernel/freezer.c    | 41 +++++++++++++++++++----------------------
 kernel/sched/core.c | 23 ++++++++++++++---------
 2 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/kernel/freezer.c b/kernel/freezer.c
index 4fad0e6fca64..c450fa8b8b5e 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -71,7 +71,11 @@ bool __refrigerator(bool check_kthr_stop)
 	for (;;) {
 		bool freeze;
 
+		raw_spin_lock_irq(&current->pi_lock);
 		set_current_state(TASK_FROZEN);
+		/* unstale saved_state so that __thaw_task() will wake us up */
+		current->saved_state = TASK_RUNNING;
+		raw_spin_unlock_irq(&current->pi_lock);
 
 		spin_lock_irq(&freezer_lock);
 		freeze = freezing(current) && !(check_kthr_stop && kthread_should_stop());
@@ -129,6 +133,7 @@ static int __set_task_frozen(struct task_struct *p, void *arg)
 		WARN_ON_ONCE(debug_locks && p->lockdep_depth);
 #endif
 
+	p->saved_state = p->__state;
 	WRITE_ONCE(p->__state, TASK_FROZEN);
 	return TASK_FROZEN;
 }
@@ -170,42 +175,34 @@ bool freeze_task(struct task_struct *p)
 }
 
 /*
- * The special task states (TASK_STOPPED, TASK_TRACED) keep their canonical
- * state in p->jobctl. If either of them got a wakeup that was missed because
- * TASK_FROZEN, then their canonical state reflects that and the below will
- * refuse to restore the special state and instead issue the wakeup.
+ * Restore the saved_state before the task entered freezer. For typical task
+ * in the __refrigerator(), saved_state == TASK_RUNNING so nothing happens
+ * here. For tasks which were TASK_NORMAL | TASK_FREEZABLE, their initial state
+ * is restored unless they got an expected wakeup (see ttwu_state_match()).
+ * Returns 1 if the task state was restored.
  */
-static int __set_task_special(struct task_struct *p, void *arg)
+static int __restore_freezer_state(struct task_struct *p, void *arg)
 {
-	unsigned int state = 0;
+	unsigned int state = p->saved_state;
 
-	if (p->jobctl & JOBCTL_TRACED)
-		state = TASK_TRACED;
-
-	else if (p->jobctl & JOBCTL_STOPPED)
-		state = TASK_STOPPED;
-
-	if (state)
+	if (state != TASK_RUNNING) {
 		WRITE_ONCE(p->__state, state);
+		return 1;
+	}
 
-	return state;
+	return 0;
 }
 
 void __thaw_task(struct task_struct *p)
 {
-	unsigned long flags, flags2;
+	unsigned long flags;
 
 	spin_lock_irqsave(&freezer_lock, flags);
 	if (WARN_ON_ONCE(freezing(p)))
 		goto unlock;
 
-	if (lock_task_sighand(p, &flags2)) {
-		/* TASK_FROZEN -> TASK_{STOPPED,TRACED} */
-		bool ret = task_call_func(p, __set_task_special, NULL);
-		unlock_task_sighand(p, &flags2);
-		if (ret)
-			goto unlock;
-	}
+	if (task_call_func(p, __restore_freezer_state, NULL))
+		goto unlock;
 
 	wake_up_state(p, TASK_FROZEN);
 unlock:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 58921ab7442b..12f3c500622a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2229,8 +2229,8 @@ static __always_inline
 int task_state_match(struct task_struct *p, unsigned int state)
 {
 	/*
-	 * Serialize against current_save_and_set_rtlock_wait_state() and
-	 * current_restore_rtlock_saved_state().
+	 * Serialize against current_save_and_set_rtlock_wait_state(),
+	 * current_restore_rtlock_saved_state(), and __refrigerator().
 	 */
 	guard(raw_spinlock_irq)(&p->pi_lock);
 	return __task_state_match(p, state);
@@ -4023,13 +4023,17 @@ static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
  * The caller holds p::pi_lock if p != current or has preemption
  * disabled when p == current.
  *
- * The rules of PREEMPT_RT saved_state:
+ * The rules of saved_state:
  *
  *   The related locking code always holds p::pi_lock when updating
  *   p::saved_state, which means the code is fully serialized in both cases.
  *
- *   The lock wait and lock wakeups happen via TASK_RTLOCK_WAIT. No other
- *   bits set. This allows to distinguish all wakeup scenarios.
+ *   For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
+ *   No other bits set. This allows to distinguish all wakeup scenarios.
+ *
+ *   For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
+ *   allows us to prevent early wakeup of tasks before they can be run on
+ *   asymmetric ISA architectures (eg ARMv9).
  */
 static __always_inline
 bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
@@ -4045,10 +4049,11 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	/*
 	 * Saved state preserves the task state across blocking on
-	 * an RT lock.  If the state matches, set p::saved_state to
-	 * TASK_RUNNING, but do not wake the task because it waits
-	 * for a lock wakeup. Also indicate success because from
-	 * the regular waker's point of view this has succeeded.
+	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
+	 * set p::saved_state to TASK_RUNNING, but do not wake the task
+	 * because it waits for a lock wakeup or __thaw_task(). Also
+	 * indicate success because from the regular waker's point of
+	 * view this has succeeded.
 	 *
 	 * After acquiring the lock the task will restore p::__state
 	 * from p::saved_state which ensures that the regular

-- 
2.41.0



* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-08 22:49 [PATCH v4 0/2] Avoid spurious freezer wakeups Elliot Berman
  2023-09-08 22:49 ` [PATCH v4 1/2] sched/core: Remove ifdeffery for saved_state Elliot Berman
  2023-09-08 22:49 ` [PATCH v4 2/2] freezer,sched: Use saved_state to reduce some spurious wakeups Elliot Berman
@ 2023-09-11 15:49 ` Peter Zijlstra
  2023-09-26 16:17 ` Carlos Llamas
  3 siblings, 0 replies; 10+ messages in thread
From: Peter Zijlstra @ 2023-09-11 15:49 UTC (permalink / raw)
  To: Elliot Berman
  Cc: Ingo Molnar, Rafael J. Wysocki, Pavel Machek, Thomas Gleixner,
	kernel, linux-arm-msm, linux-kernel, linux-pm,
	Prakash Viswalingam

On Fri, Sep 08, 2023 at 03:49:14PM -0700, Elliot Berman wrote:

> Elliot Berman (2):
>       sched/core: Remove ifdeffery for saved_state
>       freezer,sched: Use saved_state to reduce some spurious wakeups

Thanks!


* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-08 22:49 [PATCH v4 0/2] Avoid spurious freezer wakeups Elliot Berman
                   ` (2 preceding siblings ...)
  2023-09-11 15:49 ` [PATCH v4 0/2] Avoid spurious freezer wakeups Peter Zijlstra
@ 2023-09-26 16:17 ` Carlos Llamas
  2023-09-26 20:02   ` Peter Zijlstra
  3 siblings, 1 reply; 10+ messages in thread
From: Carlos Llamas @ 2023-09-26 16:17 UTC (permalink / raw)
  To: Elliot Berman
  Cc: Ingo Molnar, Peter Zijlstra, Rafael J. Wysocki, Pavel Machek,
	Thomas Gleixner, kernel, linux-arm-msm, linux-kernel, linux-pm,
	Prakash Viswalingam

On Fri, Sep 08, 2023 at 03:49:14PM -0700, Elliot Berman wrote:
> After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
> tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN  are
> always woken up on the thaw path. Prior to that commit, tasks could ask
> freezer to consider them "frozen enough" via freezer_do_not_count(). The
> commit replaced freezer_do_not_count() with a TASK_FREEZABLE state which
> allows freezer to immediately mark the task as TASK_FROZEN without
> waking up the task.  This is efficient for the suspend path, but on the
> thaw path, the task is always woken up even if the task didn't need to
> wake up and goes back to its TASK_(UN)INTERRUPTIBLE state. Although
> these tasks are capable of handling of the wakeup, we can observe a
> power/perf impact from the extra wakeup.

This issue is hurting the performance of our stable 6.1 releases. Does
it make sense to backport these patches into stable branches once they
land in mainline? I would assume we want to fix the perf regression
there too?

--
Carlos Llamas


* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-26 16:17 ` Carlos Llamas
@ 2023-09-26 20:02   ` Peter Zijlstra
  2023-09-26 20:56     ` Carlos Llamas
  2023-09-28 16:24     ` Elliot Berman
  0 siblings, 2 replies; 10+ messages in thread
From: Peter Zijlstra @ 2023-09-26 20:02 UTC (permalink / raw)
  To: Carlos Llamas
  Cc: Elliot Berman, Ingo Molnar, Rafael J. Wysocki, Pavel Machek,
	Thomas Gleixner, kernel, linux-arm-msm, linux-kernel, linux-pm,
	Prakash Viswalingam

On Tue, Sep 26, 2023 at 04:17:33PM +0000, Carlos Llamas wrote:
> On Fri, Sep 08, 2023 at 03:49:14PM -0700, Elliot Berman wrote:
> > After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
> > tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN  are
> > always woken up on the thaw path. Prior to that commit, tasks could ask
> > freezer to consider them "frozen enough" via freezer_do_not_count(). The
> > commit replaced freezer_do_not_count() with a TASK_FREEZABLE state which
> > allows freezer to immediately mark the task as TASK_FROZEN without
> > waking up the task.  This is efficient for the suspend path, but on the
> > thaw path, the task is always woken up even if the task didn't need to
> > wake up and goes back to its TASK_(UN)INTERRUPTIBLE state. Although
> > these tasks are capable of handling of the wakeup, we can observe a
> > power/perf impact from the extra wakeup.
> 
> This issue is hurting the performance of our stable 6.1 releases. Does
> it make sense to backport these patches into stable branches once they
> land in mainline? I would assume we want to fix the perf regression
> there too?

Note that these patches are in tip/sched/core, slated for the next merge
window.


* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-26 20:02   ` Peter Zijlstra
@ 2023-09-26 20:56     ` Carlos Llamas
  2023-09-26 22:04       ` Elliot Berman
  2023-09-28 16:24     ` Elliot Berman
  1 sibling, 1 reply; 10+ messages in thread
From: Carlos Llamas @ 2023-09-26 20:56 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Elliot Berman, Ingo Molnar, Rafael J. Wysocki, Pavel Machek,
	Thomas Gleixner, kernel, linux-arm-msm, linux-kernel, linux-pm,
	Prakash Viswalingam, Greg Kroah-Hartman, stable

On Tue, Sep 26, 2023 at 10:02:38PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 26, 2023 at 04:17:33PM +0000, Carlos Llamas wrote:
> > 
> > This issue is hurting the performance of our stable 6.1 releases. Does
> > it make sense to backport these patches into stable branches once they
> > land in mainline? I would assume we want to fix the perf regression
> > there too?
> 
> Note that these patches are in tip/sched/core, slated for the next merge
> window.

We can wait, no problem. I just wanted to make sure we also patch stable
if needed. Elliot, would you be able to send a backport of your patches
to stable once they land in mainline on the next merge window?

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org

--
Carlos Llamas


* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-26 20:56     ` Carlos Llamas
@ 2023-09-26 22:04       ` Elliot Berman
  0 siblings, 0 replies; 10+ messages in thread
From: Elliot Berman @ 2023-09-26 22:04 UTC (permalink / raw)
  To: Carlos Llamas, Peter Zijlstra
  Cc: Ingo Molnar, Rafael J. Wysocki, Pavel Machek, Thomas Gleixner,
	kernel, linux-arm-msm, linux-kernel, linux-pm,
	Prakash Viswalingam, Greg Kroah-Hartman, stable



On 9/26/2023 1:56 PM, Carlos Llamas wrote:
> On Tue, Sep 26, 2023 at 10:02:38PM +0200, Peter Zijlstra wrote:
>> On Tue, Sep 26, 2023 at 04:17:33PM +0000, Carlos Llamas wrote:
>>>
>>> This issue is hurting the performance of our stable 6.1 releases. Does
>>> it make sense to backport these patches into stable branches once they
>>> land in mainline? I would assume we want to fix the perf regression
>>> there too?
>>
>> Note that these patches are in tip/sched/core, slated for the next merge
>> window.
> 
> We can wait, no problem. I just wanted to make sure we also patch stable
> if needed. Elliot, would you be able to send a backport of your patches
> to stable once they land in mainline on the next merge window?

Yep, happy to send it. There's a trivial conflict to resolve w/older
kernels not having the new guard(...)(...) macros.

> 
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Cc: stable@vger.kernel.org
> 
> --
> Carlos Llamas


* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-26 20:02   ` Peter Zijlstra
  2023-09-26 20:56     ` Carlos Llamas
@ 2023-09-28 16:24     ` Elliot Berman
  2023-10-03 13:01       ` Ingo Molnar
  1 sibling, 1 reply; 10+ messages in thread
From: Elliot Berman @ 2023-09-28 16:24 UTC (permalink / raw)
  To: Peter Zijlstra, Carlos Llamas, Greg Kroah-Hartman
  Cc: Ingo Molnar, Rafael J. Wysocki, Pavel Machek, Thomas Gleixner,
	kernel, linux-arm-msm, linux-kernel, linux-pm,
	Prakash Viswalingam

Hi Peter,

On 9/26/2023 1:02 PM, Peter Zijlstra wrote:
> On Tue, Sep 26, 2023 at 04:17:33PM +0000, Carlos Llamas wrote:
>> On Fri, Sep 08, 2023 at 03:49:14PM -0700, Elliot Berman wrote:
>>> After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
>>> tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN  are
>>> always woken up on the thaw path. Prior to that commit, tasks could ask
>>> freezer to consider them "frozen enough" via freezer_do_not_count(). The
>>> commit replaced freezer_do_not_count() with a TASK_FREEZABLE state which
>>> allows freezer to immediately mark the task as TASK_FROZEN without
>>> waking up the task.  This is efficient for the suspend path, but on the
>>> thaw path, the task is always woken up even if the task didn't need to
>>> wake up and goes back to its TASK_(UN)INTERRUPTIBLE state. Although
>>> these tasks are capable of handling of the wakeup, we can observe a
>>> power/perf impact from the extra wakeup.
>>
>> This issue is hurting the performance of our stable 6.1 releases. Does
>> it make sense to backport these patches into stable branches once they
>> land in mainline? I would assume we want to fix the perf regression
>> there too?
> 
> Note that these patches are in tip/sched/core, slated for the next merge
> window.

Can the changes be scheduled for the next 6.6-rc? I'd like to get the
changes backported to stable sooner since we observed the regression on
real systems.

Thanks,
Elliot


* Re: [PATCH v4 0/2] Avoid spurious freezer wakeups
  2023-09-28 16:24     ` Elliot Berman
@ 2023-10-03 13:01       ` Ingo Molnar
  0 siblings, 0 replies; 10+ messages in thread
From: Ingo Molnar @ 2023-10-03 13:01 UTC (permalink / raw)
  To: Elliot Berman
  Cc: Peter Zijlstra, Carlos Llamas, Greg Kroah-Hartman, Ingo Molnar,
	Rafael J. Wysocki, Pavel Machek, Thomas Gleixner, kernel,
	linux-arm-msm, linux-kernel, linux-pm, Prakash Viswalingam


* Elliot Berman <quic_eberman@quicinc.com> wrote:

> Hi Peter,
> 
> On 9/26/2023 1:02 PM, Peter Zijlstra wrote:
> > On Tue, Sep 26, 2023 at 04:17:33PM +0000, Carlos Llamas wrote:
> >> On Fri, Sep 08, 2023 at 03:49:14PM -0700, Elliot Berman wrote:
> >>> After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
> >>> tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN  are
> >>> always woken up on the thaw path. Prior to that commit, tasks could ask
> >>> freezer to consider them "frozen enough" via freezer_do_not_count(). The
> >>> commit replaced freezer_do_not_count() with a TASK_FREEZABLE state which
> >>> allows freezer to immediately mark the task as TASK_FROZEN without
> >>> waking up the task.  This is efficient for the suspend path, but on the
> >>> thaw path, the task is always woken up even if the task didn't need to
> >>> wake up and goes back to its TASK_(UN)INTERRUPTIBLE state. Although
> >>> these tasks are capable of handling of the wakeup, we can observe a
> >>> power/perf impact from the extra wakeup.
> >>
> >> This issue is hurting the performance of our stable 6.1 releases. Does
> >> it make sense to backport these patches into stable branches once they
> >> land in mainline? I would assume we want to fix the perf regression
> >> there too?
> > 
> > Note that these patches are in tip/sched/core, slated for the next merge
> > window.
> 
> Can the changes be scheduled for the next 6.6-rc? I'd like to get the
> changes backported to stable sooner since we observed the regression on
> real systems.

These are pretty risky and go beyond fixes for regressions introduced
recently: the original commit is more than a year old.

But I agree with having the fixes in stable once it hits upstream in the v6.7 
merge window - the difference would only be a couple of days vs. -final.

Thanks,

	Ingo

