From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: stable@vger.kernel.org, Greg KH <gregkh@linuxfoundation.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>,
Peter Zijlstra <peterz@infradead.org>,
Mike Galbraith <efault@gmx.de>,
Thomas Gleixner <tglx@linutronix.de>
Subject: sched/fair: Fix fixed point arithmetic width for shares and effective load
Date: Sat, 7 Jan 2017 21:38:36 +0100 [thread overview]
Message-ID: <20170107203836.GJ2641@var.home> (raw)
Hello,
Please backport
commit ab522e33f91799661aad47bebb691f241a9f6bb8
('sched/fair: Fix fixed point arithmetic width for shares and effective load')
to 4.8.
It has apparently not been backported as of 4.8.16, even though it
fixes a huge performance regression in our tests; see the graphs
between 19320.5 and 19451.5 at
http://starpu.gforge.inria.fr/testing/trunk/benchmarks/tasks_size_overhead_total_lws-200.png
which were produced with a kernel lacking this fix.
FTR, here is the patch again.
Samuel
commit ab522e33f91799661aad47bebb691f241a9f6bb8
Author: Dietmar Eggemann <dietmar.eggemann@arm.com>
Date: Mon Aug 22 15:00:41 2016 +0100
sched/fair: Fix fixed point arithmetic width for shares and effective load
Since commit:
2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")
we now have two different fixed point units for load:
- 'shares' in calc_cfs_shares() has 20 bit fixed point unit on 64-bit
kernels. Therefore use scale_load() on MIN_SHARES.
- 'wl' in effective_load() has 10 bit fixed point unit. Therefore use
scale_load_down() on tg->shares which has 20 bit fixed point unit on
64-bit kernels.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1471874441-24701-1-git-send-email-dietmar.eggemann@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8fb4d19..786ef94 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5017,9 +5017,9 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		 * wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * (long)tg->shares) / W;
+			wl = (w * (long)scale_load_down(tg->shares)) / W;
 		else
-			wl = tg->shares;
+			wl = scale_load_down(tg->shares);

 		/*
 		 * Per the above, wl is the new se->load.weight value; since
Thread overview: 6+ messages
2017-01-07 20:38 Samuel Thibault [this message]
2017-01-08 11:29 ` sched/fair: Fix fixed point arithmetic width for shares and effective load Greg KH
2017-01-08 11:32 ` Samuel Thibault
2017-01-08 12:51 ` Greg KH
2017-01-08 12:52 ` Samuel Thibault
-- strict thread matches above, loose matches on Subject: below --
2016-12-19 20:27 Samuel Thibault