From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751917AbbGMI0W (ORCPT );
	Mon, 13 Jul 2015 04:26:22 -0400
Received: from bombadil.infradead.org ([198.137.202.9]:53378 "EHLO bombadil.infradead.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751584AbbGMI0V (ORCPT );
	Mon, 13 Jul 2015 04:26:21 -0400
Date: Mon, 13 Jul 2015 10:26:09 +0200
From: Peter Zijlstra
To: byungchul.park@lge.com
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, pjt@google.com
Subject: Re: [PATCH v2] sched: let __sched_period() use rq's nr_running
Message-ID: <20150713082609.GU19282@twins.programming.kicks-ass.net>
References: <1436515890-10792-1-git-send-email-byungchul.park@lge.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1436515890-10792-1-git-send-email-byungchul.park@lge.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jul 10, 2015 at 05:11:30PM +0900, byungchul.park@lge.com wrote:
> From: Byungchul Park
>
> __sched_period() returns the period a rq can have. The period has to be
> stretched by the number of tasks *the rq has* when nr_running > nr_latency;
> otherwise, a task's slice can be much smaller than sysctl_sched_min_granularity,
> depending on its position in the tg hierarchy when CONFIG_FAIR_GROUP_SCHED
> is enabled.
>
> Signed-off-by: Byungchul Park
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 09456fc..8ae7aeb 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -635,7 +635,7 @@ static u64 __sched_period(unsigned long nr_running)
>   */
>  static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> -	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> +	u64 slice = __sched_period(rq_of(cfs_rq)->nr_running + !se->on_rq);
>
>  	for_each_sched_entity(se) {
>  		struct load_weight *load;

This really doesn't make sense; look at what that for_each_sched_entity()
loop below this does.

I agree that sched_slice() is a difficult proposition in the face of
cgroups, but everything is; cgroups suck arse, they make everything hard.