From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752909AbbIZNOu (ORCPT );
	Sat, 26 Sep 2015 09:14:50 -0400
Received: from mail-wi0-f176.google.com ([209.85.212.176]:33237 "EHLO
	mail-wi0-f176.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752776AbbIZNOt (ORCPT );
	Sat, 26 Sep 2015 09:14:49 -0400
Date: Sat, 26 Sep 2015 15:14:45 +0200
From: Frederic Weisbecker
To: byungchul.park@lge.com
Cc: mingo@kernel.org, peterz@infradead.org, linux-kernel@vger.kernel.org,
	tglx@linutronix.de
Subject: Re: [RESEND PATCH] sched: consider missed ticks when updating global
	cpu load
Message-ID: <20150926131444.GA5507@lerouge>
References: <1443171157-23384-1-git-send-email-byungchul.park@lge.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1443171157-23384-1-git-send-email-byungchul.park@lge.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 25, 2015 at 05:52:37PM +0900, byungchul.park@lge.com wrote:
> From: Byungchul Park
>
> Hello,
>
> I already sent this patch about a month ago
> (see https://lkml.org/lkml/2015/8/13/160).
>
> Now I am resending the same patch with some additional commit
> message added.
>
> Thank you,
> Byungchul
>
> ----->8-----
> From 8ece9a0482e74a39cd2e9165bf8eec1d04665fa9 Mon Sep 17 00:00:00 2001
> From: Byungchul Park
> Date: Fri, 25 Sep 2015 17:10:10 +0900
> Subject: [RESEND PATCH] sched: consider missed ticks when updating global cpu
>  load
>
> In hrtimer_interrupt(), the first tick_program_event() can fail
> because the next timer could already be expired due to (see the
> comment in hrtimer_interrupt()):
>
> - tracing
> - long lasting callbacks
> - being scheduled away when running in a VM
>
> In the case that the first tick_program_event() fails, the second
> tick_program_event() sets the expiry time to more than one tick later.
> Then the next tick can happen after more than one tick, even though
> the tick is not stopped by e.g. NOHZ.
>
> When the next tick occurs, update_process_times() -> scheduler_tick()
> -> update_cpu_load_active() is performed, assuming the distance between
> the last tick and the current tick is 1 tick! That is wrong in this
> case. Thus, this abnormal case should be considered in
> update_cpu_load_active().
>
> Signed-off-by: Byungchul Park
> ---
>  kernel/sched/fair.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4d5f97b..829282f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4356,12 +4356,15 @@ void update_cpu_load_nohz(void)
>   */
>  void update_cpu_load_active(struct rq *this_rq)
>  {
> +	unsigned long curr_jiffies = READ_ONCE(jiffies);
> +	unsigned long pending_updates;
>  	unsigned long load = weighted_cpuload(cpu_of(this_rq));
>  	/*
>  	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
>  	 */
> -	this_rq->last_load_update_tick = jiffies;
> -	__update_cpu_load(this_rq, load, 1);
> +	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
> +	this_rq->last_load_update_tick = curr_jiffies;
> +	__update_cpu_load(this_rq, load, pending_updates);
>  }

That's right, but __update_cpu_load() doesn't correctly handle pending
updates with non-zero loads. Currently, pending updates are wheeled
through decay_load_missed(), which assumes they are all about idle load.
But in the cases you've enumerated, as well as in the nohz full case,
missed pending updates can be about busy loads.

I think we need to fix update_cpu_load() to handle that first, or your
fix is going to make things worse.