linux-pm.vger.kernel.org archive mirror
* [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task
       [not found] <2025072421-deviate-skintight-bbd5@gregkh>
@ 2025-07-28  2:41 ` Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 1/5] sched/core: Remove ifdeffery for saved_state Chen Ridong
                     ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  2:41 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong

From: Chen Ridong <chenridong@huawei.com>

To fix the issue reported in [1], the following commits need to be
backported:
9beb8c5e77dc ("sched,freezer: Remove unnecessary warning...")
14a67b42cb6f ("Revert 'cgroup_freezer: cgroup freezing: Check if not...'").

This series backports 9beb8c5e77dc. To avoid conflicts, the missing
prerequisite patches[2] are backported as well.

[1] https://lore.kernel.org/lkml/20250717085550.3828781-1-chenridong@huaweicloud.com/
[2] https://lore.kernel.org/stable/2025072421-deviate-skintight-bbd5@gregkh/

Chen Ridong (1):
  sched,freezer: Remove unnecessary warning in __thaw_task

Elliot Berman (4):
  sched/core: Remove ifdeffery for saved_state
  freezer,sched: Use saved_state to reduce some spurious wakeups
  freezer,sched: Do not restore saved_state of a thawed task
  freezer,sched: Clean saved_state when restoring it during thaw

 include/linux/sched.h |  2 --
 kernel/freezer.c      | 51 +++++++++++++++++--------------------------
 kernel/sched/core.c   | 31 +++++++++++++-------------
 3 files changed, 35 insertions(+), 49 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 6.6 1/5] sched/core: Remove ifdeffery for saved_state
  2025-07-28  2:41 ` [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
@ 2025-07-28  2:41   ` Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 2/5] freezer,sched: Use saved_state to reduce some spurious wakeups Chen Ridong
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  2:41 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong

From: Elliot Berman <quic_eberman@quicinc.com>

[ Upstream commit fbaa6a181a4b1886cbf4214abdf9a2df68471510 ]

In preparation for freezer to also use saved_state, remove the
CONFIG_PREEMPT_RT compilation guard around saved_state.

On the arm64 platform I tested, which did not have CONFIG_PREEMPT_RT
enabled, applying this patch caused no statistically significant
performance deviation.

Test methodology:

perf bench sched message -g 40 -l 40

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 include/linux/sched.h |  2 --
 kernel/sched/core.c   | 10 ++--------
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 393c300347de..cb38eee732fd 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -753,10 +753,8 @@ struct task_struct {
 #endif
 	unsigned int			__state;
 
-#ifdef CONFIG_PREEMPT_RT
 	/* saved state for "spinlock sleepers" */
 	unsigned int			saved_state;
-#endif
 
 	/*
 	 * This begins the randomizable portion of task_struct. Only
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 760a6c3781cb..ab6550fadecd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2238,17 +2238,15 @@ int __task_state_match(struct task_struct *p, unsigned int state)
 	if (READ_ONCE(p->__state) & state)
 		return 1;
 
-#ifdef CONFIG_PREEMPT_RT
 	if (READ_ONCE(p->saved_state) & state)
 		return -1;
-#endif
+
 	return 0;
 }
 
 static __always_inline
 int task_state_match(struct task_struct *p, unsigned int state)
 {
-#ifdef CONFIG_PREEMPT_RT
 	int match;
 
 	/*
@@ -2260,9 +2258,6 @@ int task_state_match(struct task_struct *p, unsigned int state)
 	raw_spin_unlock_irq(&p->pi_lock);
 
 	return match;
-#else
-	return __task_state_match(p, state);
-#endif
 }
 
 /*
@@ -4059,7 +4054,6 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	*success = !!(match = __task_state_match(p, state));
 
-#ifdef CONFIG_PREEMPT_RT
 	/*
 	 * Saved state preserves the task state across blocking on
 	 * an RT lock.  If the state matches, set p::saved_state to
@@ -4075,7 +4069,7 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 	 */
 	if (match < 0)
 		p->saved_state = TASK_RUNNING;
-#endif
+
 	return match > 0;
 }
 
-- 
2.34.1



* [PATCH 6.6 2/5] freezer,sched: Use saved_state to reduce some spurious wakeups
  2025-07-28  2:41 ` [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 1/5] sched/core: Remove ifdeffery for saved_state Chen Ridong
@ 2025-07-28  2:41   ` Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 3/5] freezer,sched: Do not restore saved_state of a thawed task Chen Ridong
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  2:41 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong

From: Elliot Berman <quic_eberman@quicinc.com>

[ Upstream commit 8f0eed4a78a81668bc78923ea09f51a7a663c2b0 ]

After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN are
always woken up on the thaw path. Prior to that commit, tasks could ask
the freezer to consider them "frozen enough" via freezer_do_not_count(). The
commit replaced freezer_do_not_count() with a TASK_FREEZABLE state which
allows the freezer to immediately mark the task as TASK_FROZEN without
waking it up. This is efficient for the suspend path, but on the
thaw path the task is always woken up, even when it didn't need to
wake up and simply returns to its TASK_(UN)INTERRUPTIBLE state. Although
these tasks are capable of handling the wakeup, we can observe a
power/perf impact from the extra wakeup.

On Android we observed that many tasks wait in the TASK_FREEZABLE state
(particularly because many of them are binder clients). We observed
nearly 4x the number of such tasks and a corresponding linear increase in
latency and power consumption when thawing the system. The latency
increased from ~15ms to ~50ms.

Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
If the task was running before entering TASK_FROZEN state
(__refrigerator()) or if the task received a wake up for the saved
state, then the task is woken on thaw. saved_state from PREEMPT_RT locks
can be re-used because freezer would not stomp on the rtlock wait flow:
TASK_RTLOCK_WAIT isn't considered freezable.

Reported-by: Prakash Viswalingam <quic_prakashv@quicinc.com>
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/freezer.c    | 41 +++++++++++++++++++----------------------
 kernel/sched/core.c | 21 +++++++++++++--------
 2 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/kernel/freezer.c b/kernel/freezer.c
index 4fad0e6fca64..c450fa8b8b5e 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -71,7 +71,11 @@ bool __refrigerator(bool check_kthr_stop)
 	for (;;) {
 		bool freeze;
 
+		raw_spin_lock_irq(&current->pi_lock);
 		set_current_state(TASK_FROZEN);
+		/* unstale saved_state so that __thaw_task() will wake us up */
+		current->saved_state = TASK_RUNNING;
+		raw_spin_unlock_irq(&current->pi_lock);
 
 		spin_lock_irq(&freezer_lock);
 		freeze = freezing(current) && !(check_kthr_stop && kthread_should_stop());
@@ -129,6 +133,7 @@ static int __set_task_frozen(struct task_struct *p, void *arg)
 		WARN_ON_ONCE(debug_locks && p->lockdep_depth);
 #endif
 
+	p->saved_state = p->__state;
 	WRITE_ONCE(p->__state, TASK_FROZEN);
 	return TASK_FROZEN;
 }
@@ -170,42 +175,34 @@ bool freeze_task(struct task_struct *p)
 }
 
 /*
- * The special task states (TASK_STOPPED, TASK_TRACED) keep their canonical
- * state in p->jobctl. If either of them got a wakeup that was missed because
- * TASK_FROZEN, then their canonical state reflects that and the below will
- * refuse to restore the special state and instead issue the wakeup.
+ * Restore the saved_state before the task entered freezer. For typical task
+ * in the __refrigerator(), saved_state == TASK_RUNNING so nothing happens
+ * here. For tasks which were TASK_NORMAL | TASK_FREEZABLE, their initial state
+ * is restored unless they got an expected wakeup (see ttwu_state_match()).
+ * Returns 1 if the task state was restored.
  */
-static int __set_task_special(struct task_struct *p, void *arg)
+static int __restore_freezer_state(struct task_struct *p, void *arg)
 {
-	unsigned int state = 0;
+	unsigned int state = p->saved_state;
 
-	if (p->jobctl & JOBCTL_TRACED)
-		state = TASK_TRACED;
-
-	else if (p->jobctl & JOBCTL_STOPPED)
-		state = TASK_STOPPED;
-
-	if (state)
+	if (state != TASK_RUNNING) {
 		WRITE_ONCE(p->__state, state);
+		return 1;
+	}
 
-	return state;
+	return 0;
 }
 
 void __thaw_task(struct task_struct *p)
 {
-	unsigned long flags, flags2;
+	unsigned long flags;
 
 	spin_lock_irqsave(&freezer_lock, flags);
 	if (WARN_ON_ONCE(freezing(p)))
 		goto unlock;
 
-	if (lock_task_sighand(p, &flags2)) {
-		/* TASK_FROZEN -> TASK_{STOPPED,TRACED} */
-		bool ret = task_call_func(p, __set_task_special, NULL);
-		unlock_task_sighand(p, &flags2);
-		if (ret)
-			goto unlock;
-	}
+	if (task_call_func(p, __restore_freezer_state, NULL))
+		goto unlock;
 
 	wake_up_state(p, TASK_FROZEN);
 unlock:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ab6550fadecd..1b5e4389f788 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2251,7 +2251,7 @@ int task_state_match(struct task_struct *p, unsigned int state)
 
 	/*
 	 * Serialize against current_save_and_set_rtlock_wait_state() and
-	 * current_restore_rtlock_saved_state().
+	 * current_restore_rtlock_saved_state(), and __refrigerator().
 	 */
 	raw_spin_lock_irq(&p->pi_lock);
 	match = __task_state_match(p, state);
@@ -4034,13 +4034,17 @@ static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
  * The caller holds p::pi_lock if p != current or has preemption
  * disabled when p == current.
  *
- * The rules of PREEMPT_RT saved_state:
+ * The rules of saved_state:
  *
  *   The related locking code always holds p::pi_lock when updating
  *   p::saved_state, which means the code is fully serialized in both cases.
  *
- *   The lock wait and lock wakeups happen via TASK_RTLOCK_WAIT. No other
- *   bits set. This allows to distinguish all wakeup scenarios.
+ *   For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
+ *   No other bits set. This allows to distinguish all wakeup scenarios.
+ *
+ *   For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
+ *   allows us to prevent early wakeup of tasks before they can be run on
+ *   asymmetric ISA architectures (eg ARMv9).
  */
 static __always_inline
 bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
@@ -4056,10 +4060,11 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	/*
 	 * Saved state preserves the task state across blocking on
-	 * an RT lock.  If the state matches, set p::saved_state to
-	 * TASK_RUNNING, but do not wake the task because it waits
-	 * for a lock wakeup. Also indicate success because from
-	 * the regular waker's point of view this has succeeded.
+	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
+	 * set p::saved_state to TASK_RUNNING, but do not wake the task
+	 * because it waits for a lock wakeup or __thaw_task(). Also
+	 * indicate success because from the regular waker's point of
+	 * view this has succeeded.
 	 *
 	 * After acquiring the lock the task will restore p::__state
 	 * from p::saved_state which ensures that the regular
-- 
2.34.1



* [PATCH 6.6 3/5] freezer,sched: Do not restore saved_state of a thawed task
  2025-07-28  2:41 ` [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 1/5] sched/core: Remove ifdeffery for saved_state Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 2/5] freezer,sched: Use saved_state to reduce some spurious wakeups Chen Ridong
@ 2025-07-28  2:41   ` Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 4/5] freezer,sched: Clean saved_state when restoring it during thaw Chen Ridong
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  2:41 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong

From: Elliot Berman <quic_eberman@quicinc.com>

[ Upstream commit 23ab79e8e469e2605beec2e3ccb40d19c68dd2e0 ]

It is possible for a task to be thawed multiple times when mixing the
*legacy* cgroup freezer and the system-wide freezer: freeze the
cgroup, do a system-wide freeze/thaw, then thaw the cgroup. When this
happens, a stale saved_state can be written to the task's state,
causing the task to hang indefinitely. Fix this by only trying to thaw
tasks that are actually frozen.

This change also has the marginal benefit of avoiding an unnecessary
wake_up_state(p, TASK_FROZEN) when we know the task is already thawed.
There is no possibility of a time-of-check/time-of-use race when we skip
the wake_up_state() because entering/exiting TASK_FROZEN is guarded by
freezer_lock.

Fixes: 8f0eed4a78a8 ("freezer,sched: Use saved_state to reduce some spurious wakeups")
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Abhijeet Dharmapurikar <quic_adharmap@quicinc.com>
Link: https://lore.kernel.org/r/20231120-freezer-state-multiple-thaws-v1-1-f2e1dd7ce5a2@quicinc.com
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/freezer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/freezer.c b/kernel/freezer.c
index c450fa8b8b5e..759006a9a910 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -201,7 +201,7 @@ void __thaw_task(struct task_struct *p)
 	if (WARN_ON_ONCE(freezing(p)))
 		goto unlock;
 
-	if (task_call_func(p, __restore_freezer_state, NULL))
+	if (!frozen(p) || task_call_func(p, __restore_freezer_state, NULL))
 		goto unlock;
 
 	wake_up_state(p, TASK_FROZEN);
-- 
2.34.1



* [PATCH 6.6 4/5] freezer,sched: Clean saved_state when restoring it during thaw
  2025-07-28  2:41 ` [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
                     ` (2 preceding siblings ...)
  2025-07-28  2:41   ` [PATCH 6.6 3/5] freezer,sched: Do not restore saved_state of a thawed task Chen Ridong
@ 2025-07-28  2:41   ` Chen Ridong
  2025-07-28  2:41   ` [PATCH 6.6 5/5] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
  2025-07-28  3:11   ` [PATCH 6.6 0/5] [Backport] " Chen Ridong
  5 siblings, 0 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  2:41 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong

From: Elliot Berman <quic_eberman@quicinc.com>

[ Upstream commit 418146e39891ef1fb2284dee4cabbfe616cd21cf ]

Clean saved_state after using it during thaw. Cleaning saved_state
allows us to avoid some unnecessary branches in ttwu_state_match().

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20231120-freezer-state-multiple-thaws-v1-2-f2e1dd7ce5a2@quicinc.com
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/freezer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/freezer.c b/kernel/freezer.c
index 759006a9a910..f57aaf96b829 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -187,6 +187,7 @@ static int __restore_freezer_state(struct task_struct *p, void *arg)
 
 	if (state != TASK_RUNNING) {
 		WRITE_ONCE(p->__state, state);
+		p->saved_state = TASK_RUNNING;
 		return 1;
 	}
 
-- 
2.34.1



* [PATCH 6.6 5/5] sched,freezer: Remove unnecessary warning in __thaw_task
  2025-07-28  2:41 ` [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
                     ` (3 preceding siblings ...)
  2025-07-28  2:41   ` [PATCH 6.6 4/5] freezer,sched: Clean saved_state when restoring it during thaw Chen Ridong
@ 2025-07-28  2:41   ` Chen Ridong
  2025-07-28  3:11   ` [PATCH 6.6 0/5] [Backport] " Chen Ridong
  5 siblings, 0 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  2:41 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong

From: Chen Ridong <chenridong@huawei.com>

[ Upstream commit 9beb8c5e77dc10e3889ff5f967eeffba78617a88 ]

Commit cff5f49d433f ("cgroup_freezer: cgroup_freezing: Check if not
frozen") modified the cgroup_freezing() logic to verify that the FROZEN
flag is not set, affecting the return value of the freezing() function,
in order to address a warning in __thaw_task.

A race condition exists that may allow tasks to escape being frozen. The
following scenario demonstrates this issue:

CPU 0 (get_signal path)		CPU 1 (freezer.state reader)
try_to_freeze			read freezer.state
__refrigerator			freezer_read
				update_if_frozen
WRITE_ONCE(current->__state, TASK_FROZEN);
				...
				/* Task is now marked frozen */
				/* frozen(task) == true */
				/* Assuming other tasks are frozen */
				freezer->state |= CGROUP_FROZEN;
/* freezing(current) returns false */
/* because cgroup is frozen (not freezing) */
break out
__set_current_state(TASK_RUNNING);
/* Bug: Task resumes running when it should remain frozen */

The existing !frozen(p) check in __thaw_task makes the
WARN_ON_ONCE(freezing(p)) warning redundant. Removing this warning enables
reverting commit cff5f49d433f ("cgroup_freezer: cgroup_freezing: Check if
not frozen") to resolve the issue.

This patch removes the warning from __thaw_task. A subsequent patch will
revert commit cff5f49d433f ("cgroup_freezer: cgroup_freezing: Check if
not frozen") to complete the fix.

Reported-by: Zhong Jiawei <zhongjiawei1@huawei.com>
Signed-off-by: Chen Ridong <chenridong@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/freezer.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/kernel/freezer.c b/kernel/freezer.c
index f57aaf96b829..d8db479af478 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -196,18 +196,9 @@ static int __restore_freezer_state(struct task_struct *p, void *arg)
 
 void __thaw_task(struct task_struct *p)
 {
-	unsigned long flags;
-
-	spin_lock_irqsave(&freezer_lock, flags);
-	if (WARN_ON_ONCE(freezing(p)))
-		goto unlock;
-
-	if (!frozen(p) || task_call_func(p, __restore_freezer_state, NULL))
-		goto unlock;
-
-	wake_up_state(p, TASK_FROZEN);
-unlock:
-	spin_unlock_irqrestore(&freezer_lock, flags);
+	guard(spinlock_irqsave)(&freezer_lock);
+	if (frozen(p) && !task_call_func(p, __restore_freezer_state, NULL))
+		wake_up_state(p, TASK_FROZEN);
 }
 
 /**
-- 
2.34.1



* Re: [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task
  2025-07-28  2:41 ` [PATCH 6.6 0/5] [Backport] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
                     ` (4 preceding siblings ...)
  2025-07-28  2:41   ` [PATCH 6.6 5/5] sched,freezer: Remove unnecessary warning in __thaw_task Chen Ridong
@ 2025-07-28  3:11   ` Chen Ridong
  5 siblings, 0 replies; 7+ messages in thread
From: Chen Ridong @ 2025-07-28  3:11 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, vincent.guittot, dietmar.eggemann,
	rostedt, bsegall, mgorman, bristot, vschneid, rafael, pavel
  Cc: linux-kernel, linux-pm, lujialin4, chenridong



On 2025/7/28 10:41, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
> 
> To fix the issue reported in [1], the following commits need to be
> backported:
> 9beb8c5e77dc ("sched,freezer: Remove unnecessary warning...")
> 14a67b42cb6f ("Revert 'cgroup_freezer: cgroup freezing: Check if not...'").
> 
> This series backports 9beb8c5e77dc. To avoid conflicts, the missing
> prerequisite patches[2] are backported as well.
> 
> [1] https://lore.kernel.org/lkml/20250717085550.3828781-1-chenridong@huaweicloud.com/
> [2] https://lore.kernel.org/stable/2025072421-deviate-skintight-bbd5@gregkh/
> 
> Chen Ridong (1):
>   sched,freezer: Remove unnecessary warning in __thaw_task
> 
> Elliot Berman (4):
>   sched/core: Remove ifdeffery for saved_state
>   freezer,sched: Use saved_state to reduce some spurious wakeups
>   freezer,sched: Do not restore saved_state of a thawed task
>   freezer,sched: Clean saved_state when restoring it during thaw
> 
>  include/linux/sched.h |  2 --
>  kernel/freezer.c      | 51 +++++++++++++++++--------------------------
>  kernel/sched/core.c   | 31 +++++++++++++-------------
>  3 files changed, 35 insertions(+), 49 deletions(-)
> 

Please ignore this series; I should send these patches to the stable branch.

Best regards,
Ridong


