From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from hera.aquilenet.fr ([141.255.128.1]:35952 "EHLO hera.aquilenet.fr"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752529AbcLSUfj
	(ORCPT ); Mon, 19 Dec 2016 15:35:39 -0500
Date: Mon, 19 Dec 2016 21:27:54 +0100
From: Samuel Thibault
To: stable@vger.kernel.org
Cc: Dietmar Eggemann, Peter Zijlstra, Mike Galbraith, Thomas Gleixner
Subject: sched/fair: Fix fixed point arithmetic width for shares and effective load
Message-ID: <20161219202754.GA24395@var.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Sender: stable-owner@vger.kernel.org
List-ID:

Hello,

Please backport commit ab522e33f91799661aad47bebb691f241a9f6bb8
('sched/fair: Fix fixed point arithmetic width for shares and effective
load') to 4.7 and 4.8. It has apparently not been backported as of
4.7.10 and 4.8.15, even though it fixes a huge performance regression
in our tests; see the graphs between 19320.5 and 19451.5 on

  http://starpu.gforge.inria.fr/testing/trunk/benchmarks/tasks_size_overhead_total_lws-200.png

which happened to be produced with a kernel lacking this fix.

FTR, here is the patch again. (A small standalone sketch of the two
fixed point units involved follows after the patch.)

Samuel

commit ab522e33f91799661aad47bebb691f241a9f6bb8
Author: Dietmar Eggemann
Date:   Mon Aug 22 15:00:41 2016 +0100

    sched/fair: Fix fixed point arithmetic width for shares and effective load

    Since commit:

      2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")

    we now have two different fixed point units for load:

    - 'shares' in calc_cfs_shares() has a 20 bit fixed point unit on 64-bit
      kernels. Therefore use scale_load() on MIN_SHARES.

    - 'wl' in effective_load() has a 10 bit fixed point unit. Therefore use
      scale_load_down() on tg->shares, which has a 20 bit fixed point unit
      on 64-bit kernels.

    Signed-off-by: Dietmar Eggemann
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1471874441-24701-1-git-send-email-dietmar.eggemann@arm.com
    Signed-off-by: Ingo Molnar

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8fb4d19..786ef94 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5017,9 +5017,9 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		 * wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * (long)tg->shares) / W;
+			wl = (w * (long)scale_load_down(tg->shares)) / W;
 		else
-			wl = tg->shares;
+			wl = scale_load_down(tg->shares);
 
 		/*
 		 * Per the above, wl is the new se->load.weight value; since
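
For reference, a minimal standalone sketch (not kernel code; the macro
names and the shift of 10 mirror the kernel's scale_load() /
scale_load_down() helpers and SCHED_FIXEDPOINT_SHIFT on 64-bit kernels,
while the variable names and example weights are purely illustrative)
of why tg->shares, kept in the 20 bit unit, has to be scaled down
before being mixed with the 10 bit 'wl' unit:

#include <stdio.h>

#define SHIFT 10                           /* SCHED_FIXEDPOINT_SHIFT */
#define scale_load(w)      ((w) << SHIFT)  /* 10 bit unit -> 20 bit unit */
#define scale_load_down(w) ((w) >> SHIFT)  /* 20 bit unit -> 10 bit unit */

int main(void)
{
	long nice_0 = 1024;                    /* nice-0 weight, 10 bit unit */
	long tg_shares = scale_load(nice_0);   /* stored like tg->shares on 64-bit */
	long w = 512, W = 2048;                /* example weights in the 10 bit unit */

	/* mixing the two units: result is inflated by 2^10 */
	long wl_bad  = (w * tg_shares) / W;
	/* converting shares back to the 10 bit unit first */
	long wl_good = (w * scale_load_down(tg_shares)) / W;

	printf("without scaling: %ld, with scaling: %ld\n", wl_bad, wl_good);
	return 0;
}

Built with a plain C compiler this prints 262144 vs. 256, i.e. the
unscaled value is off by a factor of 2^10, which is the mismatch the
patch above corrects in effective_load().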