From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752098AbbJJOlb (ORCPT );
	Sat, 10 Oct 2015 10:41:31 -0400
Received: from mail-wi0-f180.google.com ([209.85.212.180]:34658 "EHLO
	mail-wi0-f180.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751595AbbJJOla (ORCPT );
	Sat, 10 Oct 2015 10:41:30 -0400
Message-ID: <1444488086.2804.13.camel@gmail.com>
Subject: Re: [patch] sched: disable task group re-weighting on the desktop
From: Mike Galbraith
To: kbuild test robot
Cc: kbuild-all@01.org, Peter Zijlstra , paul.szabo@sydney.edu.au,
	linux-kernel@vger.kernel.org
Date: Sat, 10 Oct 2015 16:41:26 +0200
In-Reply-To: <201510102230.EuRy5bIi%fengguang.wu@intel.com>
References: <201510102230.EuRy5bIi%fengguang.wu@intel.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.12.11
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, 2015-10-10 at 22:03 +0800, kbuild test robot wrote:
> Hi Mike,

Hi there pin-the-tail-on-the-donkey bot.  Eeee Ahhh :)

sched: disable task group wide utilization based weight on the desktop

Task group wide utilization based weight may work well for servers, but
it is horrible on the desktop.  8 groups of 1 hog demolish
interactivity, 1 group of 8 hogs has noticeable impact, and 2 such
groups are very, very noticeable.  Turn it off if autogroup is enabled,
and add a feature to let people set the definition of fair to what
serves them best.  For the desktop, fixed group weight wins hands down,
no contest....
Signed-off-by: Mike Galbraith
---
 kernel/sched/fair.c     |  5 +++++
 kernel/sched/features.h | 14 ++++++++++++++
 2 files changed, 19 insertions(+)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2372,6 +2372,8 @@ static long calc_cfs_shares(struct cfs_r
 {
 	long tg_weight, load, shares;
 
+	if (!sched_feat(SMP_FAIR_GROUPS))
+		return tg->shares;
 	tg_weight = calc_tg_weight(tg, cfs_rq);
 	load = cfs_rq_load_avg(cfs_rq);
 
@@ -2423,6 +2425,9 @@ static void update_cfs_shares(struct cfs
 #ifndef CONFIG_SMP
 	if (likely(se->load.weight == tg->shares))
 		return;
+#else
+	if (!sched_feat(SMP_FAIR_GROUPS) && se->load.weight == tg->shares)
+		return;
 #endif
 	shares = calc_cfs_shares(cfs_rq, tg);
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -88,3 +88,17 @@ SCHED_FEAT(LB_MIN, false)
  */
 SCHED_FEAT(NUMA, true)
 #endif
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+/*
+ * With SMP_FAIR_GROUPS set, group wide activity determines shares for
+ * all group members.  This does very bad things to interactivity when
+ * a desktop box is heavily loaded.  Default to off when autogroup is
+ * enabled, and let all users set it to what works best for them.
+ */
+#if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
+SCHED_FEAT(SMP_FAIR_GROUPS, true)
+#else
+SCHED_FEAT(SMP_FAIR_GROUPS, false)
+#endif
+#endif
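For anyone wanting to experiment: SCHED_FEAT() flags are togglable at
runtime through the sched_features debugfs file when CONFIG_SCHED_DEBUG
is set, so with this patch applied something like the below should work
(a sketch, assuming debugfs is mounted at the usual /sys/kernel/debug):

```shell
# Writing a feature name sets it; prefixing NO_ clears it.
# Disable group wide utilization based weighting:
echo NO_SMP_FAIR_GROUPS > /sys/kernel/debug/sched_features

# Re-enable it:
echo SMP_FAIR_GROUPS > /sys/kernel/debug/sched_features

# Show current feature flags:
cat /sys/kernel/debug/sched_features
```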