Date: Mon, 20 Jun 2016 06:55:52 +0800
From: Yuyang Du
To: Peter Zijlstra
Cc: Vincent Guittot, Ingo Molnar, linux-kernel, Mike Galbraith,
	Benjamin Segall, Paul Turner, Morten Rasmussen, Dietmar Eggemann,
	Matt Fleming
Subject: Re: [PATCH 4/4] sched,fair: Fix PELT integrity for new tasks
Message-ID: <20160619225552.GC19934@intel.com>
References: <20160617120136.064100812@infradead.org>
	<20160617120454.150630859@infradead.org>
	<20160617142814.GT30154@twins.programming.kicks-ass.net>
	<20160617160239.GL30927@twins.programming.kicks-ass.net>
	<20160617161831.GM30927@twins.programming.kicks-ass.net>
In-Reply-To: <20160617161831.GM30927@twins.programming.kicks-ass.net>

On Fri, Jun 17, 2016 at 06:18:31PM +0200, Peter Zijlstra wrote:
> On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
> > So yes, ho-humm, how to go about doing that bestest. Lemme have a play.
> 
> This is what I came up with, not entirely pretty, but I suppose it'll
> have to do.

FWIW, I don't think the entire fix is pretty, so I will post my amended
fix later.
> ---
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -724,6 +724,7 @@ void post_init_entity_util_avg(struct sc
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  	struct sched_avg *sa = &se->avg;
>  	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> +	u64 now = cfs_rq_clock_task(cfs_rq);
>  
>  	if (cap > 0) {
>  		if (cfs_rq->avg.util_avg != 0) {
> @@ -738,7 +739,20 @@ void post_init_entity_util_avg(struct sc
>  		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
>  	}
>  
> -	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
> +	if (entity_is_task(se)) {
> +		struct task_struct *p = task_of(se);
> +		if (p->sched_class != &fair_sched_class) {
> +			/*
> +			 * For !fair tasks do attach_entity_load_avg()
> +			 * followed by detach_entity_load_avg() as per
> +			 * switched_from_fair().
> +			 */
> +			se->avg.last_update_time = now;
> +			return;
> +		}
> +	}
> +
> +	update_cfs_rq_load_avg(now, cfs_rq, false);
>  	attach_entity_load_avg(cfs_rq, se);
>  }