Date: Wed, 20 Feb 2013 22:33:57 +0800
From: Alex Shi <alex.shi@intel.com>
To: Peter Zijlstra
Cc: torvalds@linux-foundation.org, mingo@redhat.com, tglx@linutronix.de,
	akpm@linux-foundation.org, arjan@linux.intel.com, bp@alien8.de,
	pjt@google.com, namhyung@kernel.org, efault@gmx.de,
	vincent.guittot@linaro.org, gregkh@linuxfoundation.org,
	preeti@linux.vnet.ibm.com, viresh.kumar@linaro.org,
	linux-kernel@vger.kernel.org, morten.rasmussen@arm.com
Subject: Re: [patch v5 06/15] sched: log the cpu utilization at rq
Message-ID: <5124DED5.5050207@intel.com>
References: <1361164062-20111-1-git-send-email-alex.shi@intel.com>
	<1361164062-20111-7-git-send-email-alex.shi@intel.com>
	<1361352643.10155.4.camel@laptop>
In-Reply-To: <1361352643.10155.4.camel@laptop>

On 02/20/2013 05:30 PM, Peter Zijlstra wrote:
> On Mon, 2013-02-18 at 13:07 +0800, Alex Shi wrote:
>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index fcdb21f..b9a34ab 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -1495,8 +1495,12 @@ static void update_cfs_rq_blocked_load(struct cfs_rq *cfs_rq, int force_update)
>>
>>  static inline void update_rq_runnable_avg(struct rq *rq, int runnable)
>>  {
>> +	u32 period;
>>  	__update_entity_runnable_avg(rq->clock_task, &rq->avg, runnable);
>>  	__update_tg_runnable_avg(&rq->avg, &rq->cfs);
>> +
>> +	period = rq->avg.runnable_avg_period ? rq->avg.runnable_avg_period : 1;
>> +	rq->util = rq->avg.runnable_avg_sum * 100 / period;
>>  }
>>
>>  /* Add the load generated by se into cfs_rq's child load-average */
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index 7a19792..ac1e107 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -350,6 +350,9 @@ extern struct root_domain def_root_domain;
>>
>>  #endif /* CONFIG_SMP */
>>
>> +/* the percentage of full cpu utilization */
>> +#define FULL_UTIL	100
>
> There's generally a better value than 100 when using computers.. seeing
> how 100 is 64+32+4.

I didn't find a good example of this, and I have no idea what you are
suggesting. Would you like to explain a bit more?

>
>> +
>>  /*
>>   * This is the main, per-CPU runqueue data structure.
>>   *
>> @@ -481,6 +484,7 @@ struct rq {
>>  #endif
>>
>>  	struct sched_avg avg;
>> +	unsigned int util;
>>  };
>>
>>  static inline int cpu_of(struct rq *rq)
>
> You don't actually compute the rq utilization, you only compute the
> utilization as per the fair class, so if there's significant RT activity
> it'll think the cpu is under-utilized, which I think will result in the
> wrong thing.

Yes. It is a bit complicated to resolve this. Any suggestions on this, guys?

>

-- 
Thanks
    Alex