Subject: Re: [PATCH 0/6] sched/fair: Compute capacity invariant load/utilization tracking
From: Dietmar Eggemann
To: Peter Zijlstra, Morten Rasmussen
Cc: mingo@redhat.com, vincent.guittot@linaro.org, daniel.lezcano@linaro.org,
    yuyang.du@intel.com, mturquette@baylibre.com, rjw@rjwysocki.net,
    Juri Lelli, sgurrappadi@nvidia.com, pang.xunlei@zte.com.cn,
    linux-kernel@vger.kernel.org
Date: Mon, 7 Sep 2015 15:44:57 +0100
Message-ID: <55EDA2E9.8040900@arm.com>
In-Reply-To: <20150907124220.GT18673@twins.programming.kicks-ass.net>
References: <1439569394-11974-1-git-send-email-morten.rasmussen@arm.com>
 <20150831092449.GJ19282@twins.programming.kicks-ass.net>
 <20150907124220.GT18673@twins.programming.kicks-ass.net>

On 07/09/15 13:42, Peter Zijlstra wrote:
> On Mon, Aug 31, 2015 at 11:24:49AM +0200, Peter Zijlstra wrote:
>
>> A quick run here gives:
>>
>> IVB-EP (2*20*2):
>
> As noted by someone; that should be 2*10*2, for a total of 40 cpus in
> this machine.
>
>> perf stat --null --repeat 10 -- perf bench sched messaging -g 50 -l 5000
>>
>> Before:                     After:
>> 5.484170711 ( +- 0.74% )    5.590001145 ( +- 0.45% )
>>
>> Which is an almost 2% slowdown :/
>>
>> I've yet to look at what happens.
>
> OK, so it appears this is link-order nonsense. When I compared profiles
> between the series, the one function that had a significant change was
> skb_release_data(), which doesn't make much sense.
>
> If I do a 'make clean' in front of each build, I get a repeatable
> improvement with this patch set (although how much of that is due to
> the patches themselves and how much to mere code movement is as yet
> undetermined).
>
> I'm of a mind to apply these patches, with two patches on top, which
> I'll post shortly.

-- >8 --
From: Dietmar Eggemann
Date: Mon, 7 Sep 2015 14:57:22 +0100
Subject: [PATCH] sched/fair: Defer calling scaling functions

Do not call the scaling functions if time goes backwards or if the last
update of the sched_avg structure happened less than 1024ns ago.
Signed-off-by: Dietmar Eggemann
---
 kernel/sched/fair.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d6ca8d987a63..3445d2fb38f4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2552,8 +2552,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 	u64 delta, scaled_delta, periods;
 	u32 contrib;
 	unsigned int delta_w, scaled_delta_w, decayed = 0;
-	unsigned long scale_freq = arch_scale_freq_capacity(NULL, cpu);
-	unsigned long scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+	unsigned long scale_freq, scale_cpu;
 
 	delta = now - sa->last_update_time;
 	/*
@@ -2574,6 +2573,9 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		return 0;
 	sa->last_update_time = now;
 
+	scale_freq = arch_scale_freq_capacity(NULL, cpu);
+	scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+
 	/* delta_w is the amount already accumulated against our next period */
 	delta_w = sa->period_contrib;
 	if (delta + delta_w >= 1024) {
-- 
1.9.1
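
[Editor's note: for illustration only, here is a minimal, self-contained
user-space sketch of the pattern the patch applies: deferring the
scaling-factor lookups until after the early bail-out paths that never
use their results. All function and variable names below are hypothetical
stand-ins, not the kernel's, and the 1024ns shift mirrors the commit
message's description rather than reproducing __update_load_avg() itself.]

/*
 * Sketch of "defer expensive lookups past the early returns".
 * Build with any C compiler: cc -o sketch sketch.c
 */
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for arch_scale_freq_capacity()/arch_scale_cpu_capacity(). */
static unsigned long fake_scale_freq_capacity(int cpu) { (void)cpu; return 1024; }
static unsigned long fake_scale_cpu_capacity(int cpu)  { (void)cpu; return 1024; }

static int update_avg(uint64_t now, int cpu, uint64_t *last_update_time)
{
	uint64_t delta = now - *last_update_time;
	unsigned long scale_freq, scale_cpu;

	/* Time went backwards: reset the stamp and bail before any lookup. */
	if ((int64_t)delta < 0) {
		*last_update_time = now;
		return 0;
	}

	/* Less than 1024ns since the last update: nothing to accumulate. */
	delta >>= 10;
	if (!delta)
		return 0;
	*last_update_time = now;

	/* Only the path that actually accumulates pays for the lookups. */
	scale_freq = fake_scale_freq_capacity(cpu);
	scale_cpu = fake_scale_cpu_capacity(cpu);

	printf("accumulate %llu x 1024ns, scaled by freq=%lu cpu=%lu\n",
	       (unsigned long long)delta, scale_freq, scale_cpu);
	return 1;
}

int main(void)
{
	uint64_t last = 0;

	update_avg(512, 0, &last);	/* returns 0: delta < 1024ns, no lookups */
	update_avg(4096, 0, &last);	/* returns 1: accumulates and scales */
	return 0;
}

[The point of the transformation is visible in the two calls above: on the
early-return paths the scaling functions are never invoked, so their cost
is only paid when there is real work to do.]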