public inbox for linux-kernel@vger.kernel.org
* [PATCH] sched: fix loss of fair sleeper bonus in switch_to_fair()
@ 2015-09-07  9:45 Wanpeng Li
  2015-09-07 14:02 ` Peter Zijlstra
  2015-09-08  2:10 ` Byungchul Park
  0 siblings, 2 replies; 25+ messages in thread
From: Wanpeng Li @ 2015-09-07  9:45 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra; +Cc: linux-kernel, Wanpeng Li

A sleeping task has its vruntime normalized when it is moved away from 
fair_sched_class, so that the vruntime can be adjusted correctly when the 
task is moved back, whether it is running or sleeping at that point. The 
normalization in switch_to_fair() for a sleeping task loses the fair 
sleeper bonus in place_entity() whenever vruntime - cfs_rq->min_vruntime 
was large when the task left fair_sched_class.

This patch fixes it by adjusting vruntime only in the migration path, as 
the original code did, since the task's vruntime has usually NOT been 
normalized in that case.

Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
---
 kernel/sched/fair.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d26d3b7..eb9aa35 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8005,9 +8005,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
 
 	/* Synchronize task with its cfs_rq */
 	attach_entity_load_avg(cfs_rq, se);
-
-	if (!vruntime_normalized(p))
-		se->vruntime += cfs_rq->min_vruntime;
 }
 
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
@@ -8066,14 +8063,20 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void task_move_group_fair(struct task_struct *p)
 {
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
 
 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
-	p->se.avg.last_update_time = 0;
+	se->avg.last_update_time = 0;
 #endif
 	attach_task_cfs_rq(p);
+
+	if (!vruntime_normalized(p))
+		se->vruntime += cfs_rq->min_vruntime;
 }
 
 void free_fair_sched_group(struct task_group *tg)
-- 
1.7.1



end of thread, other threads:[~2015-09-08 11:22 UTC | newest]

Thread overview: 25+ messages
-- links below jump to the message on this page --
2015-09-07  9:45 [PATCH] sched: fix loss of fair sleeper bonus in switch_to_fair() Wanpeng Li
2015-09-07 14:02 ` Peter Zijlstra
2015-09-08  3:46   ` Wanpeng Li
2015-09-08  5:28     ` Byungchul Park
2015-09-08  5:38       ` Wanpeng Li
2015-09-08  6:14         ` Byungchul Park
2015-09-08  6:23           ` Wanpeng Li
2015-09-08  6:45             ` Byungchul Park
2015-09-08  6:32           ` Byungchul Park
2015-09-08  6:42             ` Wanpeng Li
2015-09-08  7:11               ` Byungchul Park
2015-09-08  7:30                 ` Wanpeng Li
2015-09-08  7:57                   ` Byungchul Park
2015-09-08  8:04                     ` Wanpeng Li
2015-09-08  8:22                       ` Byungchul Park
2015-09-08  8:38                         ` Wanpeng Li
2015-09-08  8:45                           ` Wanpeng Li
2015-09-08  8:55                             ` byungchul.park
2015-09-08  9:17                             ` Byungchul Park
2015-09-08 11:22                               ` Peter Zijlstra
2015-09-08  8:48                           ` byungchul.park
2015-09-08  6:43       ` Byungchul Park
     [not found]   ` <BLU437-SMTP75CDA80FC247CB1C9E5A2480540@phx.gbl>
2015-09-08  9:49     ` Peter Zijlstra
2015-09-08  2:10 ` Byungchul Park
2015-09-08  6:55   ` Wanpeng Li
