From: Peter Zijlstra
To: Yuyang Du
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, bsegall@google.com, pjt@google.com, morten.rasmussen@arm.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, lizefan@huawei.com, umgwanakikbuti@gmail.com
Subject: Re: [PATCH v3 5/6] sched/fair: Rename scale_load() and scale_load_down()
Date: Thu, 28 Apr 2016 11:19:19 +0200
Message-ID: <20160428091919.GW3430@twins.programming.kicks-ass.net>
In-Reply-To: <1459829551-21625-6-git-send-email-yuyang.du@intel.com>

On Tue, Apr 05, 2016 at 12:12:30PM +0800, Yuyang Du wrote:
> Rename scale_load() and scale_load_down() to user_to_kernel_load()
> and kernel_to_user_load() respectively, to allow the names to bear
> what they are really about.
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -189,7 +189,7 @@ static void __update_inv_weight(struct load_weight *lw)
>  	if (likely(lw->inv_weight))
>  		return;
>  
> -	w = scale_load_down(lw->weight);
> +	w = kernel_to_user_load(lw->weight);
>  
>  	if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
>  		lw->inv_weight = 1;
> @@ -213,7 +213,7 @@ static void __update_inv_weight(struct load_weight *lw)
>   */
>  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
>  {
> -	u64 fact = scale_load_down(weight);
> +	u64 fact = kernel_to_user_load(weight);
>  	int shift = WMULT_SHIFT;
>  
>  	__update_inv_weight(lw);
> @@ -6952,10 +6952,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  	 */
>  	if (busiest->group_type == group_overloaded &&
>  	    local->group_type == group_overloaded) {
> +		unsigned long min_cpu_load =
> +			kernel_to_user_load(NICE_0_LOAD) * busiest->group_capacity;
>  		load_above_capacity = busiest->sum_nr_running * NICE_0_LOAD;
> -		if (load_above_capacity > scale_load(busiest->group_capacity))
> -			load_above_capacity -=
> -				scale_load(busiest->group_capacity);
> +		if (load_above_capacity > min_cpu_load)
> +			load_above_capacity -= min_cpu_load;
>  		else
>  			load_above_capacity = ~0UL;
>  	}

Except these 3 really are not about user/kernel visible fixed point ranges _at_all_... :/