public inbox for linux-kernel@vger.kernel.org
* [PATCH] sched: fix the calculation of __sched_period in sched_slice()
@ 2016-05-09  2:45 Zhou Chengming
  2016-05-09  7:05 ` Peter Zijlstra
  0 siblings, 1 reply; 2+ messages in thread
From: Zhou Chengming @ 2016-05-09  2:45 UTC (permalink / raw)
  To: mingo, peterz; +Cc: linux-kernel, guohanjun, huawei.libin

When we compute the sched_slice of a sched_entity, we use cfs_rq->nr_running
to calculate the whole __sched_period. But cfs_rq->nr_running is only the
number of sched_entities queued on that cfs_rq, while rq->nr_running is the
number of all runnable tasks that are not throttled. So we should use
rq->nr_running to calculate the whole __sched_period value.

Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
 kernel/sched/fair.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fe30e6..59c9378 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -625,7 +625,7 @@ static u64 __sched_period(unsigned long nr_running)
  */
 static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
+	u64 slice = __sched_period(rq_of(cfs_rq)->nr_running + !se->on_rq);
 
 	for_each_sched_entity(se) {
 		struct load_weight *load;
-- 
1.7.7


