From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 24 Sep 2015 08:22:40 +0800
From: Yuyang Du
To: bsegall@google.com
Cc: Peter Zijlstra, Morten Rasmussen, Vincent Guittot, Dietmar Eggemann,
	Steve Muckle, "mingo@redhat.com", "daniel.lezcano@linaro.org",
	"mturquette@baylibre.com", "rjw@rjwysocki.net", Juri Lelli,
	"sgurrappadi@nvidia.com", "pang.xunlei@zte.com.cn",
	"linux-kernel@vger.kernel.org"
Subject: Re: [PATCH 5/6] sched/fair: Get rid of scaling utilization by capacity_orig
Message-ID: <20150924002240.GG11102@intel.com>
References: <20150909094305.GO3644@twins.programming.kicks-ass.net>
 <20150909111309.GD27098@e105550-lin.cambridge.arm.com>
 <20150911172246.GI27098@e105550-lin.cambridge.arm.com>
 <20150917103825.GG3604@twins.programming.kicks-ass.net>
 <20150921011652.GD11102@intel.com>
 <20150921233900.GE11102@intel.com>
 <20150922232222.GF11102@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 23, 2015 at 09:54:08AM -0700, bsegall@google.com wrote:
> > This second thought made a mistake (what was wrong with me). load_avg is
> > for sure no greater than load, with or without blocked load.
> >
> > With that said, it really does not matter what the following numbers are,
> > on a 32-bit or 64-bit machine. What matters is that cfs_rq->load.weight
> > is the one that needs to worry about overflow, not load_avg. It is as
> > simple as that.
> >
> > With that, I think we can and should get rid of the scale_load_down()
> > for load_avg.
>
> load_avg yes is bounded by load.weight, but on 64-bit load_sum is only
> bounded by load.weight * LOAD_AVG_MAX and is the same size as
> load.weight (as I said below). There's still space for anything
> reasonable though with 10 bits of SLR.

You are absolutely right.

> >> > If NICE_0_LOAD is nice-0's load, and if SCHED_LOAD_SHIFT is to say how
> >> > to get nice-0's load, I don't understand why you want to separate them.
> >>
> >> SCHED_LOAD_SHIFT is not how to get nice-0's load, it just happens to
> >> have the same value as NICE_0_SHIFT. (I think anyway, SCHED_LOAD_* is
> >> used in precisely one place other than the newish util_avg, and as I
> >> mentioned it's not remotely clear what compute_imbalance is doing there.)
> >
> > Yes, it is not clear to me either.
> >
> > With the above proposal to get rid of scale_load_down() for load_avg, I
> > think we can now remove SCHED_LOAD_*, rename scale_load() to
> > user_to_kernel_load(), and rename scale_load_down() to
> > kernel_to_user_load().
> >
> > Hmm?
>
> I have no opinion on renaming the scale_load functions, it's certainly
> reasonable, but the scale_load names seem fine too.

Without scale_load_down() in load_avg, it seems the two are only used when
reading/writing load between user space and the kernel. I will ponder it
more, but let's see whether others have an opinion. To make both points
above concrete, two sketches follow.
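
First, to convince myself of the headroom you mention (load_sum bounded by
load.weight * LOAD_AVG_MAX, with the weights keeping 10 bits of SLR), a
quick userspace back-of-the-envelope. The constants are taken from the
current scheduler code (LOAD_AVG_MAX, prio_to_weight[0], the 10-bit
resolution shift); the program itself is only my sketch, not kernel code:

	/*
	 * Standalone back-of-the-envelope, not kernel code: how many
	 * nice -20 tasks fit on one cfs_rq before a u64 load_sum can
	 * overflow, if the weights keep the extra 10 bits of
	 * scale-load resolution?
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define LOAD_AVG_MAX		47742ULL /* max value of the decayed sum */
	#define SCHED_LOAD_RESOLUTION	10       /* the "10 bits of SLR" */
	#define NICE_M20_WEIGHT		88761ULL /* prio_to_weight[0], nice -20 */

	int main(void)
	{
		/* Per-task weight without scale_load_down() */
		uint64_t w = NICE_M20_WEIGHT << SCHED_LOAD_RESOLUTION;

		/* load_sum <= total_weight * LOAD_AVG_MAX, so a u64 holds
		 * up until total_weight exceeds UINT64_MAX / LOAD_AVG_MAX. */
		uint64_t max_total_weight = UINT64_MAX / LOAD_AVG_MAX;

		printf("nice -20 tasks before load_sum overflow: %llu\n",
		       (unsigned long long)(max_total_weight / w));
		return 0;
	}

That prints a bit over four million nice -20 tasks on a single cfs_rq
before overflow, so indeed there is space for anything reasonable.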
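
Second, for concreteness, the renaming I propose would look something like
the following. This is an untested sketch: it keeps today's
scale_load()/scale_load_down() shift unchanged and only changes the names,
and the #ifdef guard is simplified from the real one in
kernel/sched/sched.h:

	/*
	 * Untested sketch of the proposed renaming: same shift as the
	 * current scale_load()/scale_load_down(), only renamed to say
	 * that the conversion is between the user-visible load and the
	 * kernel-internal, higher-resolution load. The real guard in
	 * kernel/sched/sched.h is more involved; simplified here.
	 */
	#ifdef CONFIG_64BIT
	# define SCHED_LOAD_RESOLUTION		10
	# define user_to_kernel_load(w)	((w) << SCHED_LOAD_RESOLUTION)
	# define kernel_to_user_load(w)	((w) >> SCHED_LOAD_RESOLUTION)
	#else
	# define SCHED_LOAD_RESOLUTION		0
	# define user_to_kernel_load(w)	(w)
	# define kernel_to_user_load(w)	(w)
	#endif

Then, if I am reading the remaining call sites right, set_load_weight()
would use user_to_kernel_load() on prio_to_weight[], and the cgroup
cpu.shares read/write paths (cpu_shares_read_u64()/cpu_shares_write_u64())
would use kernel_to_user_load()/user_to_kernel_load(); nothing else should
need to change.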