* [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
@ 2025-08-05 14:41 xupengbo
2025-08-05 16:10 ` Vincent Guittot
0 siblings, 1 reply; 9+ messages in thread
From: xupengbo @ 2025-08-05 14:41 UTC (permalink / raw)
To: ziqianlu, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Aaron Lu, Mathieu Desnoyers,
linux-kernel
Cc: xupengbo, cgroups
When a task is migrated out, there is a probability that the tg->load_avg
value will become abnormal. The reason is as follows.

1. Due to the 1ms update period limitation in update_tg_load_avg(), the
reduced load_avg may not be propagated to tg->load_avg when a task
migrates out.

2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
function cfs_rq_is_decayed() does not check whether
cfs_rq->tg_load_avg_contrib is null. Consequently, in some cases,
__update_blocked_fair() removes cfs_rqs whose avg.load_avg has not yet
been propagated to tg->load_avg.
I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
which blocks case (2.) above. I follow the condition used in
update_tg_load_avg() instead of directly checking whether
cfs_rq->tg_load_avg_contrib is null. I think it is necessary to keep the
condition consistent in both places, otherwise unexpected problems may
occur.
Thanks for your comments,
Xu Pengbo
Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
Signed-off-by: xupengbo <xupengbo@oppo.com>
---
Changes:
v1 -> v2:
- Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
- Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
kernel/sched/fair.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b173a059315c..a35083a2d006 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
 	if (child_cfs_rq_on_list(cfs_rq))
 		return false;
 
+	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
+
+	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
+		return false;
+
 	return true;
 }
--
2.43.0
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-05 14:41 [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out xupengbo
@ 2025-08-05 16:10 ` Vincent Guittot
0 siblings, 0 replies; 9+ messages in thread
From: Vincent Guittot @ 2025-08-05 16:10 UTC (permalink / raw)
To: xupengbo
Cc: ziqianlu, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Aaron Lu, Mathieu Desnoyers, linux-kernel,
cgroups
On Tue, 5 Aug 2025 at 16:42, xupengbo <xupengbo@oppo.com> wrote:
>
> When a task is migrated out, there is a probability that the tg->load_avg
> value will become abnormal. The reason is as follows.
>
> 1. Due to the 1ms update period limitation in update_tg_load_avg(), there
> is a possibility that the reduced load_avg is not updated to tg->load_avg
> when a task migrates out.
> 2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
> calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
> function cfs_rq_is_decayed() does not check whether
> cfs->tg_load_avg_contrib is null. Consequently, in some cases,
> __update_blocked_fair() removes cfs_rqs whose avg.load_avg has not been
> updated to tg->load_avg.
>
> I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
> which blocks the case (2.) mentioned above. I follow the condition in
> update_tg_load_avg() instead of directly checking if
> cfs_rq->tg_load_avg_contrib is null. I think it's necessary to keep the
> condition consistent in both places, otherwise unexpected problems may
> occur.
>
> Thanks for your comments,
> Xu Pengbo
>
> Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
> Signed-off-by: xupengbo <xupengbo@oppo.com>
> ---
> Changes:
> v1 -> v2:
> - Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
> cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
> - Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
>
> kernel/sched/fair.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b173a059315c..a35083a2d006 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> if (child_cfs_rq_on_list(cfs_rq))
> return false;
>
> + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> +
> + if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
I don't understand why you use the above condition instead of
if (!cfs_rq->tg_load_avg_contrib). Can you elaborate?

Strictly speaking, we want to keep the cfs_rq in the list if
(cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg), and
cfs_rq->avg.load_avg == 0 when we test this condition.
> + return false;
> +
> return true;
> }
>
> --
> 2.43.0
>
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-05 9:17 [PATCH] " Vincent Guittot
@ 2025-08-06 6:31 ` xupengbo
2025-08-06 7:33 ` Aaron Lu
0 siblings, 1 reply; 9+ messages in thread
From: xupengbo @ 2025-08-06 6:31 UTC (permalink / raw)
To: vincent.guittot
Cc: bsegall, cgroups, dietmar.eggemann, juri.lelli, linux-kernel,
mgorman, mingo, peterz, rostedt, vschneid, xupengbo, ziqianlu
> >On Tue, 5 Aug 2025 at 16:42, xupengbo <xupengbo@oppo.com> wrote:
> >
> > When a task is migrated out, there is a probability that the tg->load_avg
> > value will become abnormal. The reason is as follows.
> >
> > 1. Due to the 1ms update period limitation in update_tg_load_avg(), there
> > is a possibility that the reduced load_avg is not updated to tg->load_avg
> > when a task migrates out.
> > 2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
> > calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
> > function cfs_rq_is_decayed() does not check whether
> > cfs->tg_load_avg_contrib is null. Consequently, in some cases,
> > __update_blocked_fair() removes cfs_rqs whose avg.load_avg has not been
> > updated to tg->load_avg.
> >
> > I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
> > which blocks the case (2.) mentioned above. I follow the condition in
> > update_tg_load_avg() instead of directly checking if
> > cfs_rq->tg_load_avg_contrib is null. I think it's necessary to keep the
> > condition consistent in both places, otherwise unexpected problems may
> > occur.
> >
> > Thanks for your comments,
> > Xu Pengbo
> >
> > Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
> > Signed-off-by: xupengbo <xupengbo@oppo.com>
> > ---
> > Changes:
> > v1 -> v2:
> > - Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
> > cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
> > - Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
> >
> > kernel/sched/fair.c | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index b173a059315c..a35083a2d006 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> > if (child_cfs_rq_on_list(cfs_rq))
> > return false;
> >
> > + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> > +
> > + if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
>
>I don't understand why you use the above condition instead of if
>(!cfs_rq->tg_load_avg_contrib). Can you elaborate ?
>
>strictly speaking we want to keep the cfs_rq in the list if
>(cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg) and
>cfs_rq->avg.load_avg == 0 when we test this condition
I use this condition primarily based on the function update_tg_load_avg().
I want to absolutely avoid a situation where cfs_rq_is_decayed() returns
false but update_tg_load_avg() cannot update its value due to the delta
check, which may cause the cfs_rq to remain on the list permanently.
Honestly, I am not sure whether this can happen, so I took this
conservative approach.
In fact, in the second if-condition of cfs_rq_is_decayed(), the comment in
the load_avg_is_decayed() function states: "_avg must be null when _sum is
null because _avg = _sum / divider". Therefore, when we check this newly
added condition, cfs_rq->avg.load_avg should already be 0, right?
After reading your comments, I carefully considered the differences
between these two approaches. My condition is similar to
cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg, but weaker. In fact,
when cfs_rq->avg.load_avg is already 0,

	abs(delta) > cfs_rq->tg_load_avg_contrib / 64

is equivalent to

	cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64,

which further reduces to cfs_rq->tg_load_avg_contrib > 0. However, if
cfs_rq->avg.load_avg is not necessarily 0 at this point, then the
condition you propose is obviously more accurate, simpler than the
delta check, and requires fewer calculations.
I think our perspectives differ. From the perspective of
update_tg_load_avg(), the semantics of this condition are: if there were
no 1ms update limit, update_tg_load_avg() would pass the delta check and
continue updating, so cfs_rq_is_decayed() should return false to keep the
cfs_rq in the list for subsequent updates. As mentioned in the first
paragraph, this avoids that tricky situation. From the perspective of
cfs_rq_is_decayed(), the semantics of the condition you proposed are: if
cfs_rq->avg.load_avg is already 0, the cfs_rq cannot be removed from the
list before all of its load_avg has been propagated to the tg. That makes
sense to me, but I still feel there is a little bit of risk. Am I being
paranoid?
How do you view these two lines of thinking?
It's a pleasure to discuss this with you,
xupengbo.
> > + return false;
> > +
> > return true;
> > }
> >
> > --
> > 2.43.0
> >
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-06 6:31 ` [PATCH v2] " xupengbo
@ 2025-08-06 7:33 ` Aaron Lu
2025-08-06 7:58 ` xupengbo
2025-08-06 8:38 ` xupengbo
0 siblings, 2 replies; 9+ messages in thread
From: Aaron Lu @ 2025-08-06 7:33 UTC (permalink / raw)
To: xupengbo
Cc: vincent.guittot, bsegall, cgroups, dietmar.eggemann, juri.lelli,
linux-kernel, mgorman, mingo, peterz, rostedt, vschneid
On Wed, Aug 06, 2025 at 02:31:58PM +0800, xupengbo wrote:
> > >On Tue, 5 Aug 2025 at 16:42, xupengbo <xupengbo@oppo.com> wrote:
> > >
> > > When a task is migrated out, there is a probability that the tg->load_avg
> > > value will become abnormal. The reason is as follows.
> > >
> > > 1. Due to the 1ms update period limitation in update_tg_load_avg(), there
> > > is a possibility that the reduced load_avg is not updated to tg->load_avg
> > > when a task migrates out.
> > > 2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
> > > calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
> > > function cfs_rq_is_decayed() does not check whether
> > > cfs->tg_load_avg_contrib is null. Consequently, in some cases,
> > > __update_blocked_fair() removes cfs_rqs whose avg.load_avg has not been
> > > updated to tg->load_avg.
> > >
> > > I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
> > > which blocks the case (2.) mentioned above. I follow the condition in
> > > update_tg_load_avg() instead of directly checking if
> > > cfs_rq->tg_load_avg_contrib is null. I think it's necessary to keep the
> > > condition consistent in both places, otherwise unexpected problems may
> > > occur.
> > >
> > > Thanks for your comments,
> > > Xu Pengbo
> > >
> > > Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
> > > Signed-off-by: xupengbo <xupengbo@oppo.com>
> > > ---
> > > Changes:
> > > v1 -> v2:
> > > - Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
> > > cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
> > > - Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
> > >
> > > kernel/sched/fair.c | 5 +++++
> > > 1 file changed, 5 insertions(+)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index b173a059315c..a35083a2d006 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> > > if (child_cfs_rq_on_list(cfs_rq))
> > > return false;
> > >
> > > + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> > > +
> > > + if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> >
> >I don't understand why you use the above condition instead of if
> >(!cfs_rq->tg_load_avg_contrib). Can you elaborate ?
> >
> >strictly speaking we want to keep the cfs_rq in the list if
> >(cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg) and
> >cfs_rq->avg.load_avg == 0 when we test this condition
>
>
> I use this condition primarily based on the function update_tg_load_avg().
> I want to absolutely avoid a situation where cfs_rq_is_decay() returns
> false but update_tg_load_avg() cannot update its value due to the delta
> check, which may cause the cfs_rq to remain on the list permanently.
> Honestly, I am not sure if this will happen, so I took this conservative
> approach.
Hmm... it doesn't seem we need to worry about this situation.
Because when cfs_rq->load_avg is 0, abs(delta) will be
cfs_rq->tg_load_avg_contrib and the following condition:
if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
becomes:
if (cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64)
which should always be true, right?
Thanks,
Aaron
>
> In fact, in the second if-condition of cfs_rq_is_decay(), the comment in
> the load_avg_is_decayed() function states:"_avg must be null when _sum is
> null because _avg = _sum / divider". Therefore, when we check this newly
> added condition, cfs_rq->avg.load_avg should already be 0, right?
>
> After reading your comments, I carefully considered the differences
> between these two approaches. Here, my condition is similar
> to cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg but weaker. In
> fact, when cfs_rq->avg.load_avg is already 0,
> abs(delta) > cfs_rq->tg_load_avg_contrib / 64 is equivalent to
> cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64,
> Further reasoning leads to the condition cfs_rq->tg_load_avg_contrib > 0.
> However if cfs_rq->avg.load_avg is not necessarily 0 at this point, then
> the condition you propose is obviously more accurate, simpler than the
> delta check, and requires fewer calculations.
>
> I think our perspectives differ. From the perspective of
> update_tg_load_avg(), the semantics of this condition are as follows: if
> there is no 1ms update limit, and update_tg_load_avg() can continue
> updating after checking the delta, then in cfs_rq_is_decayed() we should
> return false to keep the cfs_rq in the list for subsequent updates. As
> mentioned in the first paragraph, this avoids that tricky situation. From
> the perspective of cfs_rq_is_decayed(), the semantics of the condition you
> proposed are that if cfs_rq->avg.load_avg is already 0, then it cannot be
> removed from the list before all load_avg are updated to tg. That makes
> sense to me, but I still feel like there's a little bit of a risk. Am I
> being paranoid?
>
> How do you view these two lines of thinking?
>
> It's a pleasure to discuss this with you,
> xupengbo.
>
> > > + return false;
> > > +
> > > return true;
> > > }
> > >
> > > --
> > > 2.43.0
> > >
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-06 7:33 ` Aaron Lu
@ 2025-08-06 7:58 ` xupengbo
2025-08-06 8:38 ` xupengbo
1 sibling, 0 replies; 9+ messages in thread
From: xupengbo @ 2025-08-06 7:58 UTC (permalink / raw)
To: ziqianlu
Cc: bsegall, cgroups, dietmar.eggemann, juri.lelli, linux-kernel,
mgorman, mingo, peterz, rostedt, vincent.guittot, vschneid,
xupengbo
On Wed, Aug 06, 2025 at 02:31:58PM +0800, xupengbo wrote:
> > >On Tue, 5 Aug 2025 at 16:42, xupengbo <xupengbo@oppo.com> wrote:
> > > >
> > > > When a task is migrated out, there is a probability that the tg->load_avg
> > > > value will become abnormal. The reason is as follows.
> > > >
> > > > 1. Due to the 1ms update period limitation in update_tg_load_avg(), there
> > > > is a possibility that the reduced load_avg is not updated to tg->load_avg
> > > > when a task migrates out.
> > > > 2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
> > > > calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
> > > > function cfs_rq_is_decayed() does not check whether
> > > > cfs->tg_load_avg_contrib is null. Consequently, in some cases,
> > > > __update_blocked_fair() removes cfs_rqs whose avg.load_avg has not been
> > > > updated to tg->load_avg.
> > > >
> > > > I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
> > > > which blocks the case (2.) mentioned above. I follow the condition in
> > > > update_tg_load_avg() instead of directly checking if
> > > > cfs_rq->tg_load_avg_contrib is null. I think it's necessary to keep the
> > > > condition consistent in both places, otherwise unexpected problems may
> > > > occur.
> > > >
> > > > Thanks for your comments,
> > > > Xu Pengbo
> > > >
> > > > Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
> > > > Signed-off-by: xupengbo <xupengbo@oppo.com>
> > > > ---
> > > > Changes:
> > > > v1 -> v2:
> > > > - Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
> > > > cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
> > > > - Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
> > > >
> > > > kernel/sched/fair.c | 5 +++++
> > > > 1 file changed, 5 insertions(+)
> > > >
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index b173a059315c..a35083a2d006 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> > > > if (child_cfs_rq_on_list(cfs_rq))
> > > > return false;
> > > >
> > > > + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> > > > +
> > > > + if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> > >
> > >I don't understand why you use the above condition instead of if
> > >(!cfs_rq->tg_load_avg_contrib). Can you elaborate ?
> > >
> > >strictly speaking we want to keep the cfs_rq in the list if
> > >(cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg) and
> > >cfs_rq->avg.load_avg == 0 when we test this condition
> >
> >
> > I use this condition primarily based on the function update_tg_load_avg().
> > I want to absolutely avoid a situation where cfs_rq_is_decay() returns
> > false but update_tg_load_avg() cannot update its value due to the delta
> > check, which may cause the cfs_rq to remain on the list permanently.
> > Honestly, I am not sure if this will happen, so I took this conservative
> > approach.
>
> Hmm...it doesn't seem we need worry about this situation.
Yeah, I am worried about this situation, but I can't find any evidence
that it exists.
> Because when cfs_rq->load_avg is 0, abs(delta) will be
> cfs_rq->tg_load_avg_contrib and the following condition:
>
> if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> becomes:
> if (cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64)
>
> which should always be true, right?
It actually becomes:

	if (cfs_rq->tg_load_avg_contrib > 0)

If cfs_rq->tg_load_avg_contrib == 0, it will be false. As it is an
unsigned long, this condition is equivalent to:

	if (!cfs_rq->tg_load_avg_contrib)
Thanks,
Xupengbo
> Thanks,
> Aaron
>
> >
> > In fact, in the second if-condition of cfs_rq_is_decay(), the comment in
> > the load_avg_is_decayed() function states:"_avg must be null when _sum is
> > null because _avg = _sum / divider". Therefore, when we check this newly
> > added condition, cfs_rq->avg.load_avg should already be 0, right?
> >
> > After reading your comments, I carefully considered the differences
> > between these two approaches. Here, my condition is similar
> > to cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg but weaker. In
> > fact, when cfs_rq->avg.load_avg is already 0,
> > abs(delta) > cfs_rq->tg_load_avg_contrib / 64 is equivalent to
> > cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64,
> > Further reasoning leads to the condition cfs_rq->tg_load_avg_contrib > 0.
> > However if cfs_rq->avg.load_avg is not necessarily 0 at this point, then
> > the condition you propose is obviously more accurate, simpler than the
> > delta check, and requires fewer calculations.
> >
> > I think our perspectives differ. From the perspective of
> > update_tg_load_avg(), the semantics of this condition are as follows: if
> > there is no 1ms update limit, and update_tg_load_avg() can continue
> > updating after checking the delta, then in cfs_rq_is_decayed() we should
> > return false to keep the cfs_rq in the list for subsequent updates. As
> > mentioned in the first paragraph, this avoids that tricky situation. From
> > the perspective of cfs_rq_is_decayed(), the semantics of the condition you
> > proposed are that if cfs_rq->avg.load_avg is already 0, then it cannot be
> > removed from the list before all load_avg are updated to tg. That makes
> > sense to me, but I still feel like there's a little bit of a risk. Am I
> > being paranoid?
> >
> > How do you view these two lines of thinking?
> >
> > It's a pleasure to discuss this with you,
> > xupengbo.
> >
> > > > + return false;
> > > > +
> > > > return true;
> > > > }
> > > >
> > > > --
> > > > 2.43.0
> > > >
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-06 7:33 ` Aaron Lu
2025-08-06 7:58 ` xupengbo
@ 2025-08-06 8:38 ` xupengbo
2025-08-06 9:22 ` Vincent Guittot
2025-08-25 9:50 ` Aaron Lu
1 sibling, 2 replies; 9+ messages in thread
From: xupengbo @ 2025-08-06 8:38 UTC (permalink / raw)
To: ziqianlu
Cc: bsegall, cgroups, dietmar.eggemann, juri.lelli, linux-kernel,
mgorman, mingo, peterz, rostedt, vincent.guittot, vschneid,
xupengbo
On Wed, Aug 06, 2025 at 02:31:58PM +0800, xupengbo wrote:
> > >On Tue, 5 Aug 2025 at 16:42, xupengbo <xupengbo@oppo.com> wrote:
> > > >
> > > > When a task is migrated out, there is a probability that the tg->load_avg
> > > > value will become abnormal. The reason is as follows.
> > > >
> > > > 1. Due to the 1ms update period limitation in update_tg_load_avg(), there
> > > > is a possibility that the reduced load_avg is not updated to tg->load_avg
> > > > when a task migrates out.
> > > > 2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
> > > > calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
> > > > function cfs_rq_is_decayed() does not check whether
> > > > cfs->tg_load_avg_contrib is null. Consequently, in some cases,
> > > > __update_blocked_fair() removes cfs_rqs whose avg.load_avg has not been
> > > > updated to tg->load_avg.
> > > >
> > > > I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
> > > > which blocks the case (2.) mentioned above. I follow the condition in
> > > > update_tg_load_avg() instead of directly checking if
> > > > cfs_rq->tg_load_avg_contrib is null. I think it's necessary to keep the
> > > > condition consistent in both places, otherwise unexpected problems may
> > > > occur.
> > > >
> > > > Thanks for your comments,
> > > > Xu Pengbo
> > > >
> > > > Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
> > > > Signed-off-by: xupengbo <xupengbo@oppo.com>
> > > > ---
> > > > Changes:
> > > > v1 -> v2:
> > > > - Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
> > > > cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
> > > > - Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
> > > >
> > > > kernel/sched/fair.c | 5 +++++
> > > > 1 file changed, 5 insertions(+)
> > > >
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index b173a059315c..a35083a2d006 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> > > > if (child_cfs_rq_on_list(cfs_rq))
> > > > return false;
> > > >
> > > > + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> > > > +
> > > > + if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> > >
> > >I don't understand why you use the above condition instead of if
> > >(!cfs_rq->tg_load_avg_contrib). Can you elaborate ?
Sorry, I was misled here; I think it should be if (cfs_rq->tg_load_avg_contrib != 0)
> > >
> > >strictly speaking we want to keep the cfs_rq in the list if
> > >(cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg) and
> > >cfs_rq->avg.load_avg == 0 when we test this condition
> >
> >
> > I use this condition primarily based on the function update_tg_load_avg().
> > I want to absolutely avoid a situation where cfs_rq_is_decay() returns
> > false but update_tg_load_avg() cannot update its value due to the delta
> > check, which may cause the cfs_rq to remain on the list permanently.
> > Honestly, I am not sure if this will happen, so I took this conservative
> > approach.
>
> Hmm...it doesn't seem we need worry about this situation.
Yeah, I am worried about this situation, but I can't find any evidence
that it exists.
> Because when cfs_rq->load_avg is 0, abs(delta) will be
> cfs_rq->tg_load_avg_contrib and the following condition:
>
> if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> becomes:
> if (cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64)
>
> which should always be true, right?
It actually becomes:

	if (cfs_rq->tg_load_avg_contrib > 0)

If cfs_rq->tg_load_avg_contrib == 0, it will be false. As it is an
unsigned long, this condition is equivalent to:

	if (cfs_rq->tg_load_avg_contrib)

Sorry, I just made a mistake.
Thanks,
Xupengbo
> Thanks,
> Aaron
>
> >
> > In fact, in the second if-condition of cfs_rq_is_decay(), the comment in
> > the load_avg_is_decayed() function states:"_avg must be null when _sum is
> > null because _avg = _sum / divider". Therefore, when we check this newly
> > added condition, cfs_rq->avg.load_avg should already be 0, right?
> >
> > After reading your comments, I carefully considered the differences
> > between these two approaches. Here, my condition is similar
> > to cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg but weaker. In
> > fact, when cfs_rq->avg.load_avg is already 0,
> > abs(delta) > cfs_rq->tg_load_avg_contrib / 64 is equivalent to
> > cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64,
> > Further reasoning leads to the condition cfs_rq->tg_load_avg_contrib > 0.
> > However if cfs_rq->avg.load_avg is not necessarily 0 at this point, then
> > the condition you propose is obviously more accurate, simpler than the
> > delta check, and requires fewer calculations.
> >
> > I think our perspectives differ. From the perspective of
> > update_tg_load_avg(), the semantics of this condition are as follows: if
> > there is no 1ms update limit, and update_tg_load_avg() can continue
> > updating after checking the delta, then in cfs_rq_is_decayed() we should
> > return false to keep the cfs_rq in the list for subsequent updates. As
> > mentioned in the first paragraph, this avoids that tricky situation. From
> > the perspective of cfs_rq_is_decayed(), the semantics of the condition you
> > proposed are that if cfs_rq->avg.load_avg is already 0, then it cannot be
> > removed from the list before all load_avg are updated to tg. That makes
> > sense to me, but I still feel like there's a little bit of a risk. Am I
> > being paranoid?
> >
> > How do you view these two lines of thinking?
> >
> > It's a pleasure to discuss this with you,
> > xupengbo.
> >
> > > > + return false;
> > > > +
> > > > return true;
> > > > }
> > > >
> > > > --
> > > > 2.43.0
> > > >
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-06 8:38 ` xupengbo
@ 2025-08-06 9:22 ` Vincent Guittot
2025-08-25 9:50 ` Aaron Lu
1 sibling, 0 replies; 9+ messages in thread
From: Vincent Guittot @ 2025-08-06 9:22 UTC (permalink / raw)
To: xupengbo
Cc: ziqianlu, bsegall, cgroups, dietmar.eggemann, juri.lelli,
linux-kernel, mgorman, mingo, peterz, rostedt, vschneid
On Wed, 6 Aug 2025 at 10:38, xupengbo <xupengbo@oppo.com> wrote:
>
> On Wed, Aug 06, 2025 at 02:31:58PM +0800, xupengbo wrote:
> > > >On Tue, 5 Aug 2025 at 16:42, xupengbo <xupengbo@oppo.com> wrote:
> > > > >
> > > > > When a task is migrated out, there is a probability that the tg->load_avg
> > > > > value will become abnormal. The reason is as follows.
> > > > >
> > > > > 1. Due to the 1ms update period limitation in update_tg_load_avg(), there
> > > > > is a possibility that the reduced load_avg is not updated to tg->load_avg
> > > > > when a task migrates out.
> > > > > 2. Even though __update_blocked_fair() traverses the leaf_cfs_rq_list and
> > > > > calls update_tg_load_avg() for cfs_rqs that are not fully decayed, the key
> > > > > function cfs_rq_is_decayed() does not check whether
> > > > > cfs->tg_load_avg_contrib is null. Consequently, in some cases,
> > > > > __update_blocked_fair() removes cfs_rqs whose avg.load_avg has not been
> > > > > updated to tg->load_avg.
> > > > >
> > > > > I added a check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed(),
> > > > > which blocks the case (2.) mentioned above. I follow the condition in
> > > > > update_tg_load_avg() instead of directly checking if
> > > > > cfs_rq->tg_load_avg_contrib is null. I think it's necessary to keep the
> > > > > condition consistent in both places, otherwise unexpected problems may
> > > > > occur.
> > > > >
> > > > > Thanks for your comments,
> > > > > Xu Pengbo
> > > > >
> > > > > Fixes: 1528c661c24b ("sched/fair: Ratelimit update to tg->load_avg")
> > > > > Signed-off-by: xupengbo <xupengbo@oppo.com>
> > > > > ---
> > > > > Changes:
> > > > > v1 -> v2:
> > > > > - Another option to fix the bug. Check cfs_rq->tg_load_avg_contrib in
> > > > > cfs_rq_is_decayed() to avoid early removal from the leaf_cfs_rq_list.
> > > > > - Link to v1 : https://lore.kernel.org/cgroups/20250804130326.57523-1-xupengbo@oppo.com/T/#u
> > > > >
> > > > > kernel/sched/fair.c | 5 +++++
> > > > > 1 file changed, 5 insertions(+)
> > > > >
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index b173a059315c..a35083a2d006 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -4062,6 +4062,11 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> > > > > if (child_cfs_rq_on_list(cfs_rq))
> > > > > return false;
> > > > >
> > > > > + long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> > > > > +
> > > > > + if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> > > >
> > > >I don't understand why you use the above condition instead of if
> > > >(!cfs_rq->tg_load_avg_contrib). Can you elaborate?
>
> Sorry, I was misled here. I think it should be if (cfs_rq->tg_load_avg_contrib != 0)
Yes, I made a mistake. It should be
if (cfs_rq->tg_load_avg_contrib != 0)
or, equivalently,
if (cfs_rq->tg_load_avg_contrib)
>
> > > >
> > > >strictly speaking we want to keep the cfs_rq in the list if
> > > >(cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg) and
> > > >cfs_rq->avg.load_avg == 0 when we test this condition
> > >
> > >
> > > I chose this condition primarily to mirror update_tg_load_avg().
> > > I want to absolutely avoid a situation where cfs_rq_is_decayed() returns
> > > false but update_tg_load_avg() cannot update its value due to the delta
> > > check, which may cause the cfs_rq to remain on the list permanently.
> > > Honestly, I am not sure if this will happen, so I took this conservative
> > > approach.
> >
> > Hmm... it doesn't seem we need to worry about this situation.
>
> Yes, I was worried about this situation, but I can't find any evidence
> that it exists.
>
> > Because when cfs_rq->load_avg is 0, abs(delta) will be
> > cfs_rq->tg_load_avg_contrib and the following condition:
> >
> > if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64)
> > becomes:
> > if (cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64)
> >
> > which should always be true, right?
>
>
> It actually becomes:
> if (cfs_rq->tg_load_avg_contrib > 0)
> If cfs_rq->tg_load_avg_contrib == 0, it will be false. As it is an unsigned
> long, this condition is equivalent to:
> if (cfs_rq->tg_load_avg_contrib)
>
> Sorry, I made a mistake earlier.
> Thanks,
> Xupengbo
>
> > Thanks,
> > Aaron
> >
> > >
> > > In fact, in the second if-condition of cfs_rq_is_decayed(), the comment in
> > > the load_avg_is_decayed() function states: "_avg must be null when _sum is
> > > null because _avg = _sum / divider". Therefore, when we check this newly
> > > added condition, cfs_rq->avg.load_avg should already be 0, right?
> > >
> > > After reading your comments, I carefully considered the differences
> > > between these two approaches. Here, my condition is similar
> > > to cfs_rq->tg_load_avg_contrib != cfs_rq->avg.load_avg but weaker. In
> > > fact, when cfs_rq->avg.load_avg is already 0,
> > > abs(delta) > cfs_rq->tg_load_avg_contrib / 64 is equivalent to
> > > cfs_rq->tg_load_avg_contrib > cfs_rq->tg_load_avg_contrib / 64,
> > > Further reasoning leads to the condition cfs_rq->tg_load_avg_contrib > 0.
> > > However, if cfs_rq->avg.load_avg is not necessarily 0 at this point, then
> > > the condition you propose is clearly more accurate, simpler than the
> > > delta check, and requires fewer calculations.
> > >
> > > I think our perspectives differ. From the perspective of
> > > update_tg_load_avg(), the semantics of this condition are as follows: if
> > > there is no 1ms update limit, and update_tg_load_avg() can continue
> > > updating after checking the delta, then in cfs_rq_is_decayed() we should
> > > return false to keep the cfs_rq in the list for subsequent updates. As
> > > mentioned in the first paragraph, this avoids that tricky situation. From
> > > the perspective of cfs_rq_is_decayed(), the semantics of the condition you
> > > proposed are that if cfs_rq->avg.load_avg is already 0, then it cannot be
> > > removed from the list before all load_avg are updated to tg. That makes
> > > sense to me, but I still feel like there's a little bit of a risk. Am I
> > > being paranoid?
> > >
> > > How do you view these two lines of thinking?
> > >
> > > It's a pleasure to discuss this with you,
> > > xupengbo.
> > >
> > > > > + return false;
> > > > > +
> > > > > return true;
> > > > > }
> > > > >
> > > > > --
> > > > > 2.43.0
> > > > >
>
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-06 8:38 ` xupengbo
2025-08-06 9:22 ` Vincent Guittot
@ 2025-08-25 9:50 ` Aaron Lu
1 sibling, 0 replies; 9+ messages in thread
From: Aaron Lu @ 2025-08-25 9:50 UTC (permalink / raw)
To: xupengbo
Cc: bsegall, cgroups, dietmar.eggemann, juri.lelli, linux-kernel,
mgorman, mingo, peterz, rostedt, vincent.guittot, vschneid
Hi xupengbo,
On Wed, Aug 06, 2025 at 04:38:10PM +0800, xupengbo wrote:
... ...
>
> It actually becomes:
> if (cfs_rq->tg_load_avg_contrib > 0)
> If cfs_rq->tg_load_avg_contrib == 0, it will be false. As it is an unsigned
> long, this condition is equivalent to:
> if (cfs_rq->tg_load_avg_contrib)
I suppose we have reached a conclusion that the right fix is to add a
check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed()? Something
like below:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index af33d107d8034..3ebcb683063f0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4056,6 +4056,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
if (child_cfs_rq_on_list(cfs_rq))
return false;
+ if (cfs_rq->tg_load_avg_contrib)
+ return false;
+
return true;
}
If you also agree, can you send an updated patch to fix this problem?
Thanks.
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out.
2025-08-26 7:57 [PATCH v3] " xupengbo
@ 2025-08-26 8:17 ` xupengbo
0 siblings, 0 replies; 9+ messages in thread
From: xupengbo @ 2025-08-26 8:17 UTC (permalink / raw)
To: xupengbo
Cc: aaron.lu, bsegall, cgroups, dietmar.eggemann, juri.lelli,
linux-kernel, mgorman, mingo, peterz, rostedt, vincent.guittot,
void, vschneid, xupengbo1029, ziqianlu
>Hi xupengbo,
>
>On Wed, Aug 06, 2025 at 04:38:10PM +0800, xupengbo wrote:
>... ...
>>
>> It actually becomes:
>> if (cfs_rq->tg_load_avg_contrib > 0)
>> If cfs_rq->tg_load_avg_contrib == 0, it will be false. As it is an unsigned
>> long, this condition is equivalent to:
>> if (cfs_rq->tg_load_avg_contrib)
>
>I suppose we have reached a conclusion that the right fix is to add a
>check of cfs_rq->tg_load_avg_contrib in cfs_rq_is_decayed()? Something
>like below:
>
>diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>index af33d107d8034..3ebcb683063f0 100644
>--- a/kernel/sched/fair.c
>+++ b/kernel/sched/fair.c
>@@ -4056,6 +4056,9 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> if (child_cfs_rq_on_list(cfs_rq))
> return false;
>
>+ if (cfs_rq->tg_load_avg_contrib)
>+ return false;
>+
> return true;
> }
>
>If you also agree, can you send an updated patch to fix this problem?
>Thanks.
I have already sent the updated patch as v3.
Link: https://lore.kernel.org/cgroups/20250826075743.19106-1-xupengbo@oppo.com/
Thanks,
xupengbo
^ permalink raw reply [flat|nested] 9+ messages in thread
End of thread [newest: 2025-08-26 8:17 UTC]
Thread overview: 9+ messages
2025-08-05 14:41 [PATCH v2] sched/fair: Fix unfairness caused by stalled tg_load_avg_contrib when the last task migrates out xupengbo
2025-08-05 16:10 ` Vincent Guittot
-- strict thread matches above, loose matches on Subject: below --
2025-08-26 7:57 [PATCH v3] " xupengbo
2025-08-26 8:17 ` [PATCH v2] " xupengbo
2025-08-05 9:17 [PATCH] " Vincent Guittot
2025-08-06 6:31 ` [PATCH v2] " xupengbo
2025-08-06 7:33 ` Aaron Lu
2025-08-06 7:58 ` xupengbo
2025-08-06 8:38 ` xupengbo
2025-08-06 9:22 ` Vincent Guittot
2025-08-25 9:50 ` Aaron Lu