Date: Tue, 13 Oct 2015 04:42:25 +0800
From: Yuyang Du
To: Mike Galbraith
Cc: Peter Zijlstra, linux-kernel@vger.kernel.org
Subject: Re: 4.3 group scheduling regression
Message-ID: <20151012204225.GN11102@intel.com>
In-Reply-To: <1444709314.3362.2.camel@gmail.com>

On Tue, Oct 13, 2015 at 06:08:34AM +0200, Mike Galbraith wrote:
> It sounded like you wanted me to run the below alone. If so, it's a nogo.

Yes, thanks. Then the sad fact is that after the migration, once the task's load has been added to removed_load_avg in migrate_task_rq_fair(), we don't get a chance to update the tg quickly enough, so at the destination mplayer is not yet weighted according to the group's share.
> -----------------------------------------------------------------------------------------------------------------
>  Task                  |   Runtime ms  | Switches | Average delay ms | Maximum delay ms | Maximum delay at      |
> -----------------------------------------------------------------------------------------------------------------
>  oink:(8)              | 787001.236 ms |    21641 | avg:    0.377 ms | max:   21.991 ms | max at:     51.504005 s
>  mplayer:(25)          |   4256.224 ms |     7264 | avg:   19.698 ms | max: 2087.489 ms | max at:    115.294922 s
>  Xorg:1011             |   1507.958 ms |     4081 | avg:    8.349 ms | max: 1652.200 ms | max at:    126.908021 s
>  konsole:1752          |    697.806 ms |     1186 | avg:    5.749 ms | max:  160.189 ms | max at:     53.037952 s
>  testo:(9)             |    438.164 ms |     2551 | avg:    6.616 ms | max:  215.527 ms | max at:    117.302455 s
>  plasma-desktop:1716   |    280.418 ms |     1624 | avg:    3.701 ms | max:  574.806 ms | max at:     53.582261 s
>  kwin:1708             |    144.986 ms |     2422 | avg:    3.301 ms | max:  315.707 ms | max at:    116.555721 s
> 
> --
> 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 4df37a4..3dba883 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -2686,12 +2686,13 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
> >  static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
> >  {
> >  	struct sched_avg *sa = &cfs_rq->avg;
> > -	int decayed;
> > +	int decayed, updated = 0;
> >  
> >  	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> >  		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> >  		sa->load_avg = max_t(long, sa->load_avg - r, 0);
> >  		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
> > +		updated = 1;
> >  	}
> >  
> >  	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
> > @@ -2708,7 +2709,7 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
> >  	cfs_rq->load_last_update_time_copy = sa->last_update_time;
> >  #endif
> >  
> > -	return decayed;
> > +	return decayed | updated;

A typo: it should be "decayed || updated", but that shouldn't make any difference.

> >  }
> >  
> >  /* Update task and its cfs_rq load average */
> 