From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757219AbcEEJmI (ORCPT );
	Thu, 5 May 2016 05:42:08 -0400
Received: from terminus.zytor.com ([198.137.202.10]:44170 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1756908AbcEEJmC (ORCPT );
	Thu, 5 May 2016 05:42:02 -0400
Date: Thu, 5 May 2016 02:41:23 -0700
From: tip-bot for Yuyang Du
Message-ID:
Cc: mingo@kernel.org, peterz@infradead.org, efault@gmx.de, hpa@zytor.com,
    vincent.guittot@linaro.org, yuyang.du@intel.com,
    torvalds@linux-foundation.org, linux-kernel@vger.kernel.org,
    tglx@linutronix.de, morten.rasmussen@arm.com
Reply-To: hpa@zytor.com, efault@gmx.de, peterz@infradead.org,
    vincent.guittot@linaro.org, yuyang.du@intel.com, mingo@kernel.org,
    linux-kernel@vger.kernel.org, tglx@linutronix.de,
    morten.rasmussen@arm.com, torvalds@linux-foundation.org
In-Reply-To: <1462226078-31904-2-git-send-email-yuyang.du@intel.com>
References: <1462226078-31904-2-git-send-email-yuyang.du@intel.com>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:sched/core] sched/fair: Optimize sum computation with a lookup table
Git-Commit-ID: 7b20b916e953cabef569541f991a0a583bc344cb
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  7b20b916e953cabef569541f991a0a583bc344cb
Gitweb:     http://git.kernel.org/tip/7b20b916e953cabef569541f991a0a583bc344cb
Author:     Yuyang Du
AuthorDate: Tue, 3 May 2016 05:54:27 +0800
Committer:  Ingo Molnar
CommitDate: Thu, 5 May 2016 09:41:08 +0200

sched/fair: Optimize sum computation with a lookup table

__compute_runnable_contrib() uses a loop to compute sum, whereas a
table lookup can do it faster in a constant amount of time.
The program to generate the constants is located at:
Documentation/scheduler/sched-avg.txt

Signed-off-by: Yuyang Du
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Morten Rasmussen
Acked-by: Vincent Guittot
Cc: Linus Torvalds
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: juri.lelli@arm.com
Cc: pjt@google.com
Link: http://lkml.kernel.org/r/1462226078-31904-2-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e148571..8c381a6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2603,6 +2603,16 @@ static const u32 runnable_avg_yN_sum[] = {
 };
 
 /*
+ * Precomputed \Sum y^k { 1<=k<=n, where n%32=0). Values are rolled down to
+ * lower integers. See Documentation/scheduler/sched-avg.txt how these
+ * were generated:
+ */
+static const u32 __accumulated_sum_N32[] = {
+	    0, 23371, 35056, 40899, 43820, 45281,
+	46011, 46376, 46559, 46650, 46696, 46719,
+};
+
+/*
  * Approximate:
  *   val * y^n, where y^32 ~= 0.5 (~1 scheduling period)
  */
@@ -2650,14 +2660,9 @@ static u32 __compute_runnable_contrib(u64 n)
 	else if (unlikely(n >= LOAD_AVG_MAX_N))
 		return LOAD_AVG_MAX;
 
-	/* Compute \Sum k^n combining precomputed values for k^i, \Sum k^j */
-	do {
-		contrib /= 2; /* y^LOAD_AVG_PERIOD = 1/2 */
-		contrib += runnable_avg_yN_sum[LOAD_AVG_PERIOD];
-
-		n -= LOAD_AVG_PERIOD;
-	} while (n > LOAD_AVG_PERIOD);
-
+	/* Since n < LOAD_AVG_MAX_N, n/LOAD_AVG_PERIOD < 11 */
+	contrib = __accumulated_sum_N32[n/LOAD_AVG_PERIOD];
+	n %= LOAD_AVG_PERIOD;
 	contrib = decay_load(contrib, n);
 	return contrib + runnable_avg_yN_sum[n];
 }