* [PATCH v3 1/5] sched/fair: Clean up attach_entity_load_avg()
2016-06-01 3:41 [PATCH v3 0/5] sched/fair: Fix attach and detach sched avgs for task group change or sched class change Yuyang Du
@ 2016-06-01 3:41 ` Yuyang Du
2016-06-01 3:41 ` [PATCH v3 2/5] sched/fair: Skip detach and attach new group task Yuyang Du
` (3 subsequent siblings)
4 siblings, 0 replies; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 3:41 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: umgwanakikbuti, bsegall, pjt, morten.rasmussen, vincent.guittot,
dietmar.eggemann, Yuyang Du
attach_entity_load_avg() is called (indirectly) from:
- switched_to_fair(): switch between classes to fair
- task_move_group_fair(): move between task groups
- enqueue_entity_load_avg(): enqueue entity
Only in switched_to_fair() is it possible that the task's last_update_time
is not 0, and therefore only there does the task need its sched avgs
updated; so move the sched avgs update to switched_to_fair() alone. In
addition, the code is refactored and the code comments are updated.
No functionality change.
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
kernel/sched/fair.c | 43 ++++++++++++++++++++-----------------------
1 file changed, 20 insertions(+), 23 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 218f8e8..3270598 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2961,24 +2961,6 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
- if (!sched_feat(ATTACH_AGE_LOAD))
- goto skip_aging;
-
- /*
- * If we got migrated (either between CPUs or between cgroups) we'll
- * have aged the average right before clearing @last_update_time.
- */
- if (se->avg.last_update_time) {
- __update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
- &se->avg, 0, 0, NULL);
-
- /*
- * XXX: we could have just aged the entire load away if we've been
- * absent from the fair class for too long.
- */
- }
-
-skip_aging:
se->avg.last_update_time = cfs_rq->avg.last_update_time;
cfs_rq->avg.load_avg += se->avg.load_avg;
cfs_rq->avg.load_sum += se->avg.load_sum;
@@ -2988,6 +2970,19 @@ skip_aging:
cfs_rq_util_change(cfs_rq);
}
+static inline void attach_age_load_task(struct rq *rq, struct task_struct *p)
+{
+ struct sched_entity *se = &p->se;
+
+ if (!sched_feat(ATTACH_AGE_LOAD))
+ return;
+
+ if (se->avg.last_update_time) {
+ __update_load_avg(cfs_rq_of(se)->avg.last_update_time, cpu_of(rq),
+ &se->avg, 0, 0, NULL);
+ }
+}
+
static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
@@ -3117,6 +3112,7 @@ static inline void
attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
static inline void
detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
+static inline void attach_age_load_task(struct rq *rq, struct task_struct *p) {}
static inline int idle_balance(struct rq *rq)
{
@@ -8418,6 +8414,12 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
static void switched_to_fair(struct rq *rq, struct task_struct *p)
{
+ /*
+ * If we change between classes, age the averages before attaching them.
+ * XXX: we could have just aged the entire load away if we've been
+ * absent from the fair class for too long.
+ */
+ attach_age_load_task(rq, p);
attach_task_cfs_rq(p);
if (task_on_rq_queued(p)) {
@@ -8469,11 +8471,6 @@ static void task_move_group_fair(struct task_struct *p)
{
detach_task_cfs_rq(p);
set_task_rq(p, task_cpu(p));
-
-#ifdef CONFIG_SMP
- /* Tell se's cfs_rq has been changed -- migrated */
- p->se.avg.last_update_time = 0;
-#endif
attach_task_cfs_rq(p);
}
--
1.7.9.5
^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-01 3:41 [PATCH v3 0/5] sched/fair: Fix attach and detach sched avgs for task group change or sched class change Yuyang Du
2016-06-01 3:41 ` [PATCH v3 1/5] sched/fair: Clean up attach_entity_load_avg() Yuyang Du
@ 2016-06-01 3:41 ` Yuyang Du
2016-06-01 12:20 ` Vincent Guittot
2016-06-01 3:41 ` [PATCH v3 3/5] sched/fair: Skip detach sched avgs for new task when changing task groups Yuyang Du
` (2 subsequent siblings)
4 siblings, 1 reply; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 3:41 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: umgwanakikbuti, bsegall, pjt, morten.rasmussen, vincent.guittot,
dietmar.eggemann, Yuyang Du
Vincent reported that the first task to a new task group's cfs_rq will
be attached in attach_task_cfs_rq() and once more when it is enqueued
(see https://lkml.org/lkml/2016/5/25/388).
Actually, it is much worse. The load is currently attached mostly twice
every time when we switch to fair class or change task groups. These two
scenarios are of concern, and each is described below.
1) Switch to fair class:
The sched class change is done like this:
if (queued)
enqueue_task();
check_class_changed()
switched_from()
switched_to()
If the task is on_rq, it should have already been enqueued, which
MAY have attached the load to the cfs_rq. If so, we shouldn't attach
it again in switched_to(); otherwise, we will attach it twice. This is
the current situation.
So to cover both the on_rq and !on_rq cases, whether or not the task
was previously in the fair class, the simplest solution is to reset
the task's last_update_time to 0 when the task is switched from fair.
Then let task enqueue do the load attachment.
2) Change between fair task groups:
The task groups are changed like this:
if (queued)
dequeue_task()
task_move_group()
if (queued)
enqueue_task()
Unlike the switch to fair class, if the task is on_rq, it will be enqueued
after we move task groups, so the simplest solution is to reset the
task's last_update_time when we do task_move_group(), and then let
enqueue_task() do the load attachment.
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
kernel/sched/fair.c | 47 +++++++++++++++++++++--------------------------
1 file changed, 21 insertions(+), 26 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3270598..89513b6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2959,7 +2959,8 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
update_tg_load_avg(cfs_rq, 0);
}
-static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
+/* Virtually synchronize task with its cfs_rq */
+static inline void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
se->avg.last_update_time = cfs_rq->avg.last_update_time;
cfs_rq->avg.load_avg += se->avg.load_avg;
@@ -2970,19 +2971,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
cfs_rq_util_change(cfs_rq);
}
-static inline void attach_age_load_task(struct rq *rq, struct task_struct *p)
-{
- struct sched_entity *se = &p->se;
-
- if (!sched_feat(ATTACH_AGE_LOAD))
- return;
-
- if (se->avg.last_update_time) {
- __update_load_avg(cfs_rq_of(se)->avg.last_update_time, cpu_of(rq),
- &se->avg, 0, 0, NULL);
- }
-}
-
static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
@@ -3057,6 +3045,11 @@ static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
}
#endif
+static inline void reset_task_last_update_time(struct task_struct *p)
+{
+ p->se.avg.last_update_time = 0;
+}
+
/*
* Task first catches up with cfs_rq, and then subtract
* itself from the cfs_rq (task must be off the queue now).
@@ -3109,10 +3102,8 @@ dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
static inline void remove_entity_load_avg(struct sched_entity *se) {}
static inline void
-attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-static inline void
detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
-static inline void attach_age_load_task(struct rq *rq, struct task_struct *p) {}
+static inline void reset_task_last_update_time(struct task_struct *p) {}
static inline int idle_balance(struct rq *rq)
{
@@ -8400,9 +8391,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
se->depth = se->parent ? se->parent->depth + 1 : 0;
#endif
- /* Synchronize task with its cfs_rq */
- attach_entity_load_avg(cfs_rq, se);
-
if (!vruntime_normalized(p))
se->vruntime += cfs_rq->min_vruntime;
}
@@ -8410,16 +8398,18 @@ static void attach_task_cfs_rq(struct task_struct *p)
static void switched_from_fair(struct rq *rq, struct task_struct *p)
{
detach_task_cfs_rq(p);
+ reset_task_last_update_time(p);
+ /*
+ * If we change back to fair class, we will attach the sched
+ * avgs when we are enqueued, which will be done only once. We
+ * won't have the chance to consistently age the avgs before
+ * attaching them, so we have to continue with the last updated
+ * sched avgs when we were detached.
+ */
}
static void switched_to_fair(struct rq *rq, struct task_struct *p)
{
- /*
- * If we change between classes, age the averages before attaching them.
- * XXX: we could have just aged the entire load away if we've been
- * absent from the fair class for too long.
- */
- attach_age_load_task(rq, p);
attach_task_cfs_rq(p);
if (task_on_rq_queued(p)) {
@@ -8472,6 +8462,11 @@ static void task_move_group_fair(struct task_struct *p)
detach_task_cfs_rq(p);
set_task_rq(p, task_cpu(p));
attach_task_cfs_rq(p);
+ /*
+ * This assures we will attach the sched avgs when we are enqueued,
+ * which will be done only once.
+ */
+ reset_task_last_update_time(p);
}
void free_fair_sched_group(struct task_group *tg)
--
1.7.9.5
* Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-01 3:41 ` [PATCH v3 2/5] sched/fair: Skip detach and attach new group task Yuyang Du
@ 2016-06-01 12:20 ` Vincent Guittot
2016-06-01 19:21 ` Yuyang Du
0 siblings, 1 reply; 14+ messages in thread
From: Vincent Guittot @ 2016-06-01 12:20 UTC (permalink / raw)
To: Yuyang Du
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On 1 June 2016 at 05:41, Yuyang Du <yuyang.du@intel.com> wrote:
> Vincent reported that the first task to a new task group's cfs_rq will
> be attached in attach_task_cfs_rq() and once more when it is enqueued
> (see https://lkml.org/lkml/2016/5/25/388).
>
> Actually, it is much worse. The load is currently attached mostly twice
> every time when we switch to fair class or change task groups. These two
> scenarios are concerned, which we will descripbe in the following
> respectively
AFAICT, and according to the tests that I have done around these 2 use
cases, the task is attached only once during a switch to fair and a
sched_move_task. Have you faced such a situation during tests? What is
the sequence that generates this issue?
>
> 1) Switch to fair class:
>
> The sched class change is done like this:
>
> if (queued)
> enqueue_task();
> check_class_changed()
> switched_from()
> switched_to()
>
> If the task is on_rq, it should have already been enqueued, which
> MAY have attached the load to the cfs_rq, if so, we shouldn't attach
No, it can't. The only way to attach the task during enqueue is if
last_update_time has been reset, which is not the case during
switched_to_fair().
> it again in switched_to(), otherwise, we will attach it twice. This is
> what the current situation is.
>
> So to cover both the on_rq and !on_rq cases, as well as both the task
> was switched from fair and otherwise, the simplest solution is to reset
> the task's last_update_time to 0, when the task is switched from fair.
> Then let task enqueue do the load attachment.
>
> 2) Change between fair task groups:
>
> The task groups are changed like this:
>
> if (queued)
> dequeue_task()
> task_move_group()
> if (queued)
> enqueue_task()
>
> Unlike the switch to fair class, if the task is on_rq, it will be enqueued
> after we move task groups, so the simplest solution is to reset the
> task's last_update_time when we do task_move_group(), and then let
> enqueue_task() do the load attachment.
Same for this sequence: the task is explicitly attached only once,
during task_move_group(), and never during the enqueue.
So you want to delay the attach until the enqueue? But what happens
if the task was not enqueued when it was moved between groups?
The load_avg of the task stays frozen during that period because its
last_update_time has been reset.
>
> Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Yuyang Du <yuyang.du@intel.com>
> ---
> kernel/sched/fair.c | 47 +++++++++++++++++++++--------------------------
> 1 file changed, 21 insertions(+), 26 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3270598..89513b6 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2959,7 +2959,8 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
> update_tg_load_avg(cfs_rq, 0);
> }
>
> -static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +/* Virtually synchronize task with its cfs_rq */
> +static inline void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> {
> se->avg.last_update_time = cfs_rq->avg.last_update_time;
> cfs_rq->avg.load_avg += se->avg.load_avg;
> @@ -2970,19 +2971,6 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
> cfs_rq_util_change(cfs_rq);
> }
>
> -static inline void attach_age_load_task(struct rq *rq, struct task_struct *p)
> -{
> - struct sched_entity *se = &p->se;
> -
> - if (!sched_feat(ATTACH_AGE_LOAD))
> - return;
> -
> - if (se->avg.last_update_time) {
> - __update_load_avg(cfs_rq_of(se)->avg.last_update_time, cpu_of(rq),
> - &se->avg, 0, 0, NULL);
> - }
> -}
> -
> static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> {
> __update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
> @@ -3057,6 +3045,11 @@ static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
> }
> #endif
>
> +static inline void reset_task_last_update_time(struct task_struct *p)
> +{
> + p->se.avg.last_update_time = 0;
> +}
> +
> /*
> * Task first catches up with cfs_rq, and then subtract
> * itself from the cfs_rq (task must be off the queue now).
> @@ -3109,10 +3102,8 @@ dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
> static inline void remove_entity_load_avg(struct sched_entity *se) {}
>
> static inline void
> -attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
> -static inline void
> detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) {}
> -static inline void attach_age_load_task(struct rq *rq, struct task_struct *p) {}
> +static inline void reset_task_last_update_time(struct task_struct *p) {}
>
> static inline int idle_balance(struct rq *rq)
> {
> @@ -8400,9 +8391,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
> se->depth = se->parent ? se->parent->depth + 1 : 0;
> #endif
>
> - /* Synchronize task with its cfs_rq */
> - attach_entity_load_avg(cfs_rq, se);
> -
> if (!vruntime_normalized(p))
> se->vruntime += cfs_rq->min_vruntime;
> }
> @@ -8410,16 +8398,18 @@ static void attach_task_cfs_rq(struct task_struct *p)
> static void switched_from_fair(struct rq *rq, struct task_struct *p)
> {
> detach_task_cfs_rq(p);
> + reset_task_last_update_time(p);
> + /*
> + * If we change back to fair class, we will attach the sched
> + * avgs when we are enqueued, which will be done only once. We
> + * won't have the chance to consistently age the avgs before
> + * attaching them, so we have to continue with the last updated
> + * sched avgs when we were detached.
> + */
> }
>
> static void switched_to_fair(struct rq *rq, struct task_struct *p)
> {
> - /*
> - * If we change between classes, age the averages before attaching them.
> - * XXX: we could have just aged the entire load away if we've been
> - * absent from the fair class for too long.
> - */
> - attach_age_load_task(rq, p);
> attach_task_cfs_rq(p);
>
> if (task_on_rq_queued(p)) {
> @@ -8472,6 +8462,11 @@ static void task_move_group_fair(struct task_struct *p)
> detach_task_cfs_rq(p);
> set_task_rq(p, task_cpu(p));
> attach_task_cfs_rq(p);
> + /*
> + * This assures we will attach the sched avgs when we are enqueued,
> + * which will be done only once.
> + */
> + reset_task_last_update_time(p);
> }
>
> void free_fair_sched_group(struct task_group *tg)
> --
> 1.7.9.5
>
* Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-01 12:20 ` Vincent Guittot
@ 2016-06-01 19:21 ` Yuyang Du
2016-06-02 7:29 ` Vincent Guittot
0 siblings, 1 reply; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 19:21 UTC (permalink / raw)
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On Wed, Jun 01, 2016 at 02:20:09PM +0200, Vincent Guittot wrote:
> On 1 June 2016 at 05:41, Yuyang Du <yuyang.du@intel.com> wrote:
> > Vincent reported that the first task to a new task group's cfs_rq will
> > be attached in attach_task_cfs_rq() and once more when it is enqueued
> > (see https://lkml.org/lkml/2016/5/25/388).
> >
> > Actually, it is much worse. The load is currently attached mostly twice
> > every time when we switch to fair class or change task groups. These two
> > scenarios are concerned, which we will descripbe in the following
> > respectively
>
> AFAICT and according to tests that i have done around these 2 use
> cases, the task is attached only once during a switched to fair and a
> sched_move_task. Have you face such situation during tests ? What is
> the sequence that generates this issue ?
>
> >
> > 1) Switch to fair class:
> >
> > The sched class change is done like this:
> >
> > if (queued)
> > enqueue_task();
> > check_class_changed()
> > switched_from()
> > switched_to()
> >
> > If the task is on_rq, it should have already been enqueued, which
> > MAY have attached the load to the cfs_rq, if so, we shouldn't attach
>
> No, it can't. The only way to attach task during enqueue is if
> last_update_time has been reset which is not the case during a
> switched_to_fair
My response to your above two comments:
As I said, there can be four possibilities going through the above sequences:
(1) on_rq, (2) !on_rq, (a) was fair class (representing last_update_time != 0),
(b) never was fair class (representing last_update_time == 0, but may not be
limited to this)
Crossing them, we have (1)(a), (1)(b), (2)(a), and (2)(b).
Some combinations, (1)(b) and (2)(b), will attach twice; the others,
(1)(a) and (2)(a), will attach once. The difficult part is that they can
be attached at different places.
So, the simplest solution is to reset the task's last_update_time to 0
when the task is switched from fair. Then let task enqueue do the load
attachment, only once, at this place, under all circumstances.
> > 2) Change between fair task groups:
> >
> > The task groups are changed like this:
> >
> > if (queued)
> > dequeue_task()
> > task_move_group()
> > if (queued)
> > enqueue_task()
> >
> > Unlike the switch to fair class, if the task is on_rq, it will be enqueued
> > after we move task groups, so the simplest solution is to reset the
> > task's last_update_time when we do task_move_group(), and then let
> > enqueue_task() do the load attachment.
>
> Same for this sequence, the task is explicitly attached only once
> during the task_move_group but never during the enqueue.
Your patch said it can be attached twice, :)
> So you want to delay the attach during the enqueue ?
Yes, delayed or not, the key is to attach only at enqueue();
this is the simplest solution.
> But what happen
> if the task was not enqueue when it has been moved between groups ?
> The load_avg of the task stays frozen during the period because its
> last_update_time is reset
That is the !on_rq case. By "frozen", you mean it won't be decayed, right?
Yes, this is the downside. But what if the task is never enqueued? That
legacy load does not mean anything in that case.
* Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-01 19:21 ` Yuyang Du
@ 2016-06-02 7:29 ` Vincent Guittot
2016-06-01 23:41 ` Yuyang Du
0 siblings, 1 reply; 14+ messages in thread
From: Vincent Guittot @ 2016-06-02 7:29 UTC (permalink / raw)
To: Yuyang Du
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On 1 June 2016 at 21:21, Yuyang Du <yuyang.du@intel.com> wrote:
> On Wed, Jun 01, 2016 at 02:20:09PM +0200, Vincent Guittot wrote:
>> On 1 June 2016 at 05:41, Yuyang Du <yuyang.du@intel.com> wrote:
>> > Vincent reported that the first task to a new task group's cfs_rq will
>> > be attached in attach_task_cfs_rq() and once more when it is enqueued
>> > (see https://lkml.org/lkml/2016/5/25/388).
>> >
>> > Actually, it is much worse. The load is currently attached mostly twice
>> > every time when we switch to fair class or change task groups. These two
>> > scenarios are concerned, which we will descripbe in the following
>> > respectively
>>
>> AFAICT and according to tests that i have done around these 2 use
>> cases, the task is attached only once during a switched to fair and a
>> sched_move_task. Have you face such situation during tests ? What is
>> the sequence that generates this issue ?
>>
>> >
>> > 1) Switch to fair class:
>> >
>> > The sched class change is done like this:
>> >
>> > if (queued)
>> > enqueue_task();
>> > check_class_changed()
>> > switched_from()
>> > switched_to()
>> >
>> > If the task is on_rq, it should have already been enqueued, which
>> > MAY have attached the load to the cfs_rq, if so, we shouldn't attach
>>
>> No, it can't. The only way to attach task during enqueue is if
>> last_update_time has been reset which is not the case during a
>> switched_to_fair
>
> My response to your above two comments:
>
> As I said, there can be four possibilities going through the above sequences:
>
> (1) on_rq, (2) !on_rq, (a) was fair class (representing last_update_time != 0),
> (b) never was fair class (representing last_update_time == 0, but may not be
> limited to this)
>
> Crossing them, we have (1)(a), (1)(b), (2)(a), and (2)(b).
>
> Some will attach twice, which are (1)(b) and (2)(b), the other will attach
> once, which are (1)(a) and (2)(a). The difficult part is they can be attached
> at different places.
OK for (1)(b), but not for (2)(b), and it's far from "attached mostly
twice every time".
The root cause is that last_update_time is initialized to 0, which has
a special meaning for the load_avg. We should rather initialize it to
something different, like for the cfs_rq.
>
> So, the simplest sulution is to reset the task's last_update_time to 0, when
> the task is switched from fair. Then let task enqueue do the load attachment,
> only once at this place under all circumstances.
IMHO, the better solution is to not initialize last_update_time to 0,
which has a special meaning, but to something different.
>
>> > 2) Change between fair task groups:
>> >
>> > The task groups are changed like this:
>> >
>> > if (queued)
>> > dequeue_task()
>> > task_move_group()
>> > if (queued)
>> > enqueue_task()
>> >
>> > Unlike the switch to fair class, if the task is on_rq, it will be enqueued
>> > after we move task groups, so the simplest solution is to reset the
>> > task's last_update_time when we do task_move_group(), and then let
>> > enqueue_task() do the load attachment.
>>
>> Same for this sequence, the task is explicitly attached only once
>> during the task_move_group but never during the enqueue.
>
> Your patch said there can be twice, :)
My patch says the 1st task that is attached to a cfs_rq will be
attached twice, not "The load is currently attached mostly twice
every time when we switch to fair class or change task groups." You
say that it happens mostly every time, and I disagree with that.
For the change between fair task groups, I still don't see how it can
be attached mostly twice every time.
>
>> So you want to delay the attach during the enqueue ?
>
> Yes, despite of delay or not delay, the key is to only attach at enqueue(),
> this is the simplest solution.
>
>> But what happen
>> if the task was not enqueue when it has been moved between groups ?
>> The load_avg of the task stays frozen during the period because its
>> last_update_time is reset
>
> That is the !on_rq case. By "frozen", you mean it won't be decayed, right?
> Yes, this is the downside. But what if the task will never be enqueued,
That's a big downside IMO
> that legacy load does not mean anything in this case.
* Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-02 7:29 ` Vincent Guittot
@ 2016-06-01 23:41 ` Yuyang Du
2016-06-02 7:40 ` Vincent Guittot
0 siblings, 1 reply; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 23:41 UTC (permalink / raw)
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On Thu, Jun 02, 2016 at 09:29:53AM +0200, Vincent Guittot wrote:
> > My response to your above two comments:
> >
> > As I said, there can be four possibilities going through the above sequences:
> >
> > (1) on_rq, (2) !on_rq, (a) was fair class (representing last_update_time != 0),
> > (b) never was fair class (representing last_update_time == 0, but may not be
> > limited to this)
> >
> > Crossing them, we have (1)(a), (1)(b), (2)(a), and (2)(b).
> >
> > Some will attach twice, which are (1)(b) and (2)(b), the other will attach
> > once, which are (1)(a) and (2)(a). The difficult part is they can be attached
> > at different places.
>
> ok for (1)(b) but not for (2)(b) and it's far from "attached mostly
> twice every time"
You are right. That claim is reckless; I will change it to
"sometimes attached twice".
* Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-01 23:41 ` Yuyang Du
@ 2016-06-02 7:40 ` Vincent Guittot
2016-06-01 23:50 ` Yuyang Du
0 siblings, 1 reply; 14+ messages in thread
From: Vincent Guittot @ 2016-06-02 7:40 UTC (permalink / raw)
To: Yuyang Du
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On 2 June 2016 at 01:41, Yuyang Du <yuyang.du@intel.com> wrote:
> On Thu, Jun 02, 2016 at 09:29:53AM +0200, Vincent Guittot wrote:
>> > My response to your above two comments:
>> >
>> > As I said, there can be four possibilities going through the above sequences:
>> >
>> > (1) on_rq, (2) !on_rq, (a) was fair class (representing last_update_time != 0),
>> > (b) never was fair class (representing last_update_time == 0, but may not be
>> > limited to this)
>> >
>> > Crossing them, we have (1)(a), (1)(b), (2)(a), and (2)(b).
>> >
>> > Some will attach twice, which are (1)(b) and (2)(b), the other will attach
>> > once, which are (1)(a) and (2)(a). The difficult part is they can be attached
>> > at different places.
>>
>> ok for (1)(b) but not for (2)(b) and it's far from "attached mostly
>> twice every time"
>
> You are right. That claim is reckless, I will change it to:
> "sometimes attached twice".
Or you can just describe the use case (1)(b), which is the only one AFAICT
* Re: [PATCH v3 2/5] sched/fair: Skip detach and attach new group task
2016-06-02 7:40 ` Vincent Guittot
@ 2016-06-01 23:50 ` Yuyang Du
0 siblings, 0 replies; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 23:50 UTC (permalink / raw)
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On Thu, Jun 02, 2016 at 09:40:18AM +0200, Vincent Guittot wrote:
> On 2 June 2016 at 01:41, Yuyang Du <yuyang.du@intel.com> wrote:
> > On Thu, Jun 02, 2016 at 09:29:53AM +0200, Vincent Guittot wrote:
> >> > My response to your above two comments:
> >> >
> >> > As I said, there can be four possibilities going through the above sequences:
> >> >
> >> > (1) on_rq, (2) !on_rq, (a) was fair class (representing last_update_time != 0),
> >> > (b) never was fair class (representing last_update_time == 0, but may not be
> >> > limited to this)
> >> >
> >> > Crossing them, we have (1)(a), (1)(b), (2)(a), and (2)(b).
> >> >
> >> > Some will attach twice, which are (1)(b) and (2)(b), the other will attach
> >> > once, which are (1)(a) and (2)(a). The difficult part is they can be attached
> >> > at different places.
> >>
> >> ok for (1)(b) but not for (2)(b) and it's far from "attached mostly
> >> twice every time"
> >
> > You are right. That claim is reckless, I will change it to:
> > "sometimes attached twice".
>
> Or you can just describe the used case (1)(b) which is the only one AFAICT
You are right again, ;)
* [PATCH v3 3/5] sched/fair: Skip detach sched avgs for new task when changing task groups
2016-06-01 3:41 [PATCH v3 0/5] sched/fair: Fix attach and detach sched avgs for task group change or sched class change Yuyang Du
2016-06-01 3:41 ` [PATCH v3 1/5] sched/fair: Clean up attach_entity_load_avg() Yuyang Du
2016-06-01 3:41 ` [PATCH v3 2/5] sched/fair: Skip detach and attach new group task Yuyang Du
@ 2016-06-01 3:41 ` Yuyang Du
2016-06-01 3:41 ` [PATCH v3 4/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork() Yuyang Du
2016-06-01 3:41 ` [PATCH v3 5/5] sched/fair: Add inline to detach_entity_load_evg() Yuyang Du
4 siblings, 0 replies; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 3:41 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: umgwanakikbuti, bsegall, pjt, morten.rasmussen, vincent.guittot,
dietmar.eggemann, Yuyang Du
A newly forked task has not been enqueued, so it should not be removed
from any cfs_rq. To achieve this, we need to pass the fork information
all the way from sched_move_task() to task_move_group_fair().
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
kernel/sched/auto_group.c | 2 +-
kernel/sched/core.c | 8 ++++----
kernel/sched/fair.c | 8 ++++++--
kernel/sched/sched.h | 4 ++--
4 files changed, 13 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
index a5d966c..15f2eb7 100644
--- a/kernel/sched/auto_group.c
+++ b/kernel/sched/auto_group.c
@@ -143,7 +143,7 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
goto out;
for_each_thread(p, t)
- sched_move_task(t);
+ sched_move_task(t, false);
out:
unlock_task_sighand(p, &flags);
autogroup_kref_put(prev);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f2cae4..ae5b8a8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7724,7 +7724,7 @@ void sched_offline_group(struct task_group *tg)
* by now. This function just updates tsk->se.cfs_rq and tsk->se.parent to
* reflect its new group.
*/
-void sched_move_task(struct task_struct *tsk)
+void sched_move_task(struct task_struct *tsk, bool fork)
{
struct task_group *tg;
int queued, running;
@@ -7753,7 +7753,7 @@ void sched_move_task(struct task_struct *tsk)
#ifdef CONFIG_FAIR_GROUP_SCHED
if (tsk->sched_class->task_move_group)
- tsk->sched_class->task_move_group(tsk);
+ tsk->sched_class->task_move_group(tsk, fork);
else
#endif
set_task_rq(tsk, task_cpu(tsk));
@@ -8186,7 +8186,7 @@ static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
static void cpu_cgroup_fork(struct task_struct *task)
{
- sched_move_task(task);
+ sched_move_task(task, true);
}
static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
@@ -8213,7 +8213,7 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
struct cgroup_subsys_state *css;
cgroup_taskset_for_each(task, css, tset)
- sched_move_task(task);
+ sched_move_task(task, false);
}
#ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 89513b6..0b4914d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8457,9 +8457,13 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
}
#ifdef CONFIG_FAIR_GROUP_SCHED
-static void task_move_group_fair(struct task_struct *p)
+static void task_move_group_fair(struct task_struct *p, bool fork)
{
- detach_task_cfs_rq(p);
+ /*
+ * Newly forked task should not be removed from any cfs_rq
+ */
+ if (!fork)
+ detach_task_cfs_rq(p);
set_task_rq(p, task_cpu(p));
attach_task_cfs_rq(p);
/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 72f1f30..c139ec4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -343,7 +343,7 @@ extern void sched_online_group(struct task_group *tg,
extern void sched_destroy_group(struct task_group *tg);
extern void sched_offline_group(struct task_group *tg);
-extern void sched_move_task(struct task_struct *tsk);
+extern void sched_move_task(struct task_struct *tsk, bool fork);
#ifdef CONFIG_FAIR_GROUP_SCHED
extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
@@ -1247,7 +1247,7 @@ struct sched_class {
void (*update_curr) (struct rq *rq);
#ifdef CONFIG_FAIR_GROUP_SCHED
- void (*task_move_group) (struct task_struct *p);
+ void (*task_move_group) (struct task_struct *p, bool fork);
#endif
};
--
1.7.9.5
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH v3 4/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork()
2016-06-01 3:41 [PATCH v3 0/5] sched/fair: Fix attach and detach sched avgs for task group change or sched class change Yuyang Du
` (2 preceding siblings ...)
2016-06-01 3:41 ` [PATCH v3 3/5] sched/fair: Skip detach sched avgs for new task when changing task groups Yuyang Du
@ 2016-06-01 3:41 ` Yuyang Du
2016-06-01 12:24 ` Vincent Guittot
2016-06-01 3:41 ` [PATCH v3 5/5] sched/fair: Add inline to detach_entity_load_avg() Yuyang Du
4 siblings, 1 reply; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 3:41 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: umgwanakikbuti, bsegall, pjt, morten.rasmussen, vincent.guittot,
dietmar.eggemann, Yuyang Du
Move the new task's sched avg initialization to sched_fork(). For a task that
starts in a non-fair class, the first switched_to_fair() will do the attach
correctly.
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
kernel/sched/core.c | 5 +++--
kernel/sched/fair.c | 14 +++++---------
kernel/sched/sched.h | 2 +-
3 files changed, 9 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ae5b8a8..77a8a2b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2370,6 +2370,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
if (p->sched_class->task_fork)
p->sched_class->task_fork(p);
+ /* Initialize new task's sched averages */
+ init_entity_sched_avg(&p->se);
+
/*
* The child is not yet in the pid-hash so no cgroup attach races,
* and the cgroup is pinned to this child due to cgroup_fork()
@@ -2510,8 +2513,6 @@ void wake_up_new_task(struct task_struct *p)
struct rq_flags rf;
struct rq *rq;
- /* Initialize new task's runnable average */
- init_entity_runnable_average(&p->se);
raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
#ifdef CONFIG_SMP
/*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b4914d..eb9041c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -668,8 +668,8 @@ static unsigned long task_h_load(struct task_struct *p);
#define LOAD_AVG_MAX 47742 /* maximum possible load avg */
#define LOAD_AVG_MAX_N 345 /* number of full periods to produce LOAD_AVG_MAX */
-/* Give new sched_entity start runnable values to heavy its load in infant time */
-void init_entity_runnable_average(struct sched_entity *se)
+/* Give new sched_entity start load values to heavy its load in infant time */
+void init_entity_sched_avg(struct sched_entity *se)
{
struct sched_avg *sa = &se->avg;
@@ -738,12 +738,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq);
#else
-void init_entity_runnable_average(struct sched_entity *se)
-{
-}
-void post_init_entity_util_avg(struct sched_entity *se)
-{
-}
+void init_entity_sched_avg(struct sched_entity *se) { }
+void post_init_entity_util_avg(struct sched_entity *se) { }
#endif
/*
@@ -8520,7 +8516,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
init_cfs_rq(cfs_rq);
init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
- init_entity_runnable_average(se);
+ init_entity_sched_avg(se);
post_init_entity_util_avg(se);
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c139ec4..bc9c99e 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1321,7 +1321,7 @@ extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
unsigned long to_ratio(u64 period, u64 runtime);
-extern void init_entity_runnable_average(struct sched_entity *se);
+extern void init_entity_sched_avg(struct sched_entity *se);
extern void post_init_entity_util_avg(struct sched_entity *se);
#ifdef CONFIG_NO_HZ_FULL
--
1.7.9.5
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH v3 4/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork()
2016-06-01 3:41 ` [PATCH v3 4/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork() Yuyang Du
@ 2016-06-01 12:24 ` Vincent Guittot
2016-06-01 18:58 ` Yuyang Du
0 siblings, 1 reply; 14+ messages in thread
From: Vincent Guittot @ 2016-06-01 12:24 UTC (permalink / raw)
To: Yuyang Du
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On 1 June 2016 at 05:41, Yuyang Du <yuyang.du@intel.com> wrote:
> Move the new task's sched avg initialization to sched_fork(). For a task that
> starts in a non-fair class, the first switched_to_fair() will do the attach correctly.
I'm not sure I follow the explanation: you have only moved and renamed
init_entity_runnable_average(), but you speak about the initial non-fair
class task.
>
> Suggested-by: Peter Zijlstra <peterz@infradead.org>
> Signed-off-by: Yuyang Du <yuyang.du@intel.com>
> ---
> kernel/sched/core.c | 5 +++--
> kernel/sched/fair.c | 14 +++++---------
> kernel/sched/sched.h | 2 +-
> 3 files changed, 9 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ae5b8a8..77a8a2b 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2370,6 +2370,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
> if (p->sched_class->task_fork)
> p->sched_class->task_fork(p);
>
> + /* Initialize new task's sched averages */
> + init_entity_sched_avg(&p->se);
> +
> /*
> * The child is not yet in the pid-hash so no cgroup attach races,
> * and the cgroup is pinned to this child due to cgroup_fork()
> @@ -2510,8 +2513,6 @@ void wake_up_new_task(struct task_struct *p)
> struct rq_flags rf;
> struct rq *rq;
>
> - /* Initialize new task's runnable average */
> - init_entity_runnable_average(&p->se);
> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
> #ifdef CONFIG_SMP
> /*
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0b4914d..eb9041c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -668,8 +668,8 @@ static unsigned long task_h_load(struct task_struct *p);
> #define LOAD_AVG_MAX 47742 /* maximum possible load avg */
> #define LOAD_AVG_MAX_N 345 /* number of full periods to produce LOAD_AVG_MAX */
>
> -/* Give new sched_entity start runnable values to heavy its load in infant time */
> -void init_entity_runnable_average(struct sched_entity *se)
> +/* Give new sched_entity start load values to heavy its load in infant time */
> +void init_entity_sched_avg(struct sched_entity *se)
> {
> struct sched_avg *sa = &se->avg;
>
> @@ -738,12 +738,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
> static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
> static inline unsigned long cfs_rq_load_avg(struct cfs_rq *cfs_rq);
> #else
> -void init_entity_runnable_average(struct sched_entity *se)
> -{
> -}
> -void post_init_entity_util_avg(struct sched_entity *se)
> -{
> -}
> +void init_entity_sched_avg(struct sched_entity *se) { }
> +void post_init_entity_util_avg(struct sched_entity *se) { }
> #endif
>
> /*
> @@ -8520,7 +8516,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
>
> init_cfs_rq(cfs_rq);
> init_tg_cfs_entry(tg, cfs_rq, se, i, parent->se[i]);
> - init_entity_runnable_average(se);
> + init_entity_sched_avg(se);
> post_init_entity_util_avg(se);
> }
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index c139ec4..bc9c99e 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1321,7 +1321,7 @@ extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
>
> unsigned long to_ratio(u64 period, u64 runtime);
>
> -extern void init_entity_runnable_average(struct sched_entity *se);
> +extern void init_entity_sched_avg(struct sched_entity *se);
> extern void post_init_entity_util_avg(struct sched_entity *se);
>
> #ifdef CONFIG_NO_HZ_FULL
> --
> 1.7.9.5
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 4/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork()
2016-06-01 12:24 ` Vincent Guittot
@ 2016-06-01 18:58 ` Yuyang Du
0 siblings, 0 replies; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 18:58 UTC (permalink / raw)
To: Vincent Guittot
Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Mike Galbraith,
Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann
On Wed, Jun 01, 2016 at 02:24:29PM +0200, Vincent Guittot wrote:
> On 1 June 2016 at 05:41, Yuyang Du <yuyang.du@intel.com> wrote:
> > Move the new task's sched avg initialization to sched_fork(). For a task that
> > starts in a non-fair class, the first switched_to_fair() will do the attach correctly.
>
> Not sure to catch the explanation. you have only moved and renamed
> init_entity_runnable_average but you speak about initial non-fair
> class
The mention of the non-fair class task is because I paused to consider
whether to call only sched_fork_fair(), but sched classes can obviously
change, so the initialization should happen in sched_fork().
Also, this patch is amenable to the previous fix to switched_to().
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v3 5/5] sched/fair: Add inline to detach_entity_load_avg()
2016-06-01 3:41 [PATCH v3 0/5] sched/fair: Fix attach and detach sched avgs for task group change or sched class change Yuyang Du
` (3 preceding siblings ...)
2016-06-01 3:41 ` [PATCH v3 4/5] sched/fair: Move load and util avgs from wake_up_new_task() to sched_fork() Yuyang Du
@ 2016-06-01 3:41 ` Yuyang Du
4 siblings, 0 replies; 14+ messages in thread
From: Yuyang Du @ 2016-06-01 3:41 UTC (permalink / raw)
To: peterz, mingo, linux-kernel
Cc: umgwanakikbuti, bsegall, pjt, morten.rasmussen, vincent.guittot,
dietmar.eggemann, Yuyang Du
detach_entity_load_avg() is only called by detach_task_cfs_rq(), so
explicitly add the inline attribute to it.
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index eb9041c..77428c4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2967,7 +2967,8 @@ static inline void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_en
cfs_rq_util_change(cfs_rq);
}
-static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
+/* Catch up with the cfs_rq and remove our load when we leave */
+static inline void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
&se->avg, se->on_rq * scale_load_down(se->load.weight),
@@ -8370,7 +8371,6 @@ static void detach_task_cfs_rq(struct task_struct *p)
se->vruntime -= cfs_rq->min_vruntime;
}
- /* Catch up with the cfs_rq and remove our load when we leave */
detach_entity_load_avg(cfs_rq, se);
}
--
1.7.9.5
^ permalink raw reply related [flat|nested] 14+ messages in thread