From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751020AbcD2EME (ORCPT );
	Fri, 29 Apr 2016 00:12:04 -0400
Received: from mga14.intel.com ([192.55.52.115]:6265 "EHLO mga14.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750718AbcD2EMC (ORCPT );
	Fri, 29 Apr 2016 00:12:02 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.24,549,1455004800"; d="scan'208";a="694096608"
Date: Fri, 29 Apr 2016 04:30:07 +0800
From: Yuyang Du 
To: Peter Zijlstra 
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, bsegall@google.com,
	pjt@google.com, morten.rasmussen@arm.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, lizefan@huawei.com, umgwanakikbuti@gmail.com
Subject: Re: [PATCH v3 5/6] sched/fair: Rename scale_load() and scale_load_down()
Message-ID: <20160428203007.GD16093@intel.com>
References: <1459829551-21625-1-git-send-email-yuyang.du@intel.com>
 <1459829551-21625-6-git-send-email-yuyang.du@intel.com>
 <20160428091919.GW3430@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160428091919.GW3430@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Apr 28, 2016 at 11:19:19AM +0200, Peter Zijlstra wrote:
> On Tue, Apr 05, 2016 at 12:12:30PM +0800, Yuyang Du wrote:
> > Rename scale_load() and scale_load_down() to user_to_kernel_load()
> > and kernel_to_user_load() respectively, to allow the names to bear
> > what they are really about.
> >
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -189,7 +189,7 @@ static void __update_inv_weight(struct load_weight *lw)
> >  	if (likely(lw->inv_weight))
> >  		return;
> >  
> > -	w = scale_load_down(lw->weight);
> > +	w = kernel_to_user_load(lw->weight);
> >  
> >  	if (BITS_PER_LONG > 32 && unlikely(w >= WMULT_CONST))
> >  		lw->inv_weight = 1;
> > @@ -213,7 +213,7 @@ static void __update_inv_weight(struct load_weight *lw)
> >   */
> >  static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
> >  {
> > -	u64 fact = scale_load_down(weight);
> > +	u64 fact = kernel_to_user_load(weight);
> >  	int shift = WMULT_SHIFT;
> >  
> >  	__update_inv_weight(lw);

[snip]

> Except these 3 really are not about user/kernel visible fixed point
> ranges _at_all_... :/

But aren't the above two falling back to user fixed-point precision? And
isn't the reason that we can't efficiently do this multiply/divide with
the increased fixed-point resolution of the kernel load?