From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 9 Sep 2011 18:56:37 +0530
From: Srivatsa Vaddagiri
To: Peter Zijlstra
Cc: Paul Turner, Kamalesh Babulal, Vladimir Davydov,
	linux-kernel@vger.kernel.org, Bharata B Rao, Dhaval Giani,
	Vaidyanathan Srinivasan, Ingo Molnar, Pavel Emelianov
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
Message-ID: <20110909132637.GA8724@linux.vnet.ibm.com>
In-Reply-To: <1315571462.26517.9.camel@twins>
References: <20110608163234.GA23031@linux.vnet.ibm.com>
	<20110610181719.GA30330@linux.vnet.ibm.com>
	<20110615053716.GA390@linux.vnet.ibm.com>
	<20110907152009.GA3868@linux.vnet.ibm.com>
	<1315423342.11101.25.camel@twins>
	<20110908151433.GB6587@linux.vnet.ibm.com>
	<1315571462.26517.9.camel@twins>
X-Mailing-List: linux-kernel@vger.kernel.org

* Peter Zijlstra [2011-09-09 14:31:02]:

> > We have set up cgroups and their hard limits so that in theory they
> > should consume the entire capacity available on the machine, leading
> > to 0% idle time. That's not what we see. A more detailed description
> > of the setup and the problem is here:
> >
> > https://lkml.org/lkml/2011/6/7/352
>
> That's frigging irrelevant isn't it? A patch should contain its own
> justification.

Agreed, my bad. I was (wrongly) trying to set the problem context by
posting this in response to Paul's email, where the problem was
discussed.
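For reference, a hard-limit setup of the kind described above is built
from the cgroup-v1 CFS bandwidth knobs. A minimal sketch (the mount
point and group names are illustrative, not from the original test):

```shell
# Illustrative cgroup-v1 CFS bandwidth setup (group names assumed).
# On a 4-CPU machine, give each of two groups a quota of 2 CPUs worth
# of runtime per period; together the quotas equal machine capacity,
# so in theory idle time should be 0%.
mount -t cgroup -o cpu none /sys/fs/cgroup/cpu   # if not already mounted

mkdir /sys/fs/cgroup/cpu/grp1 /sys/fs/cgroup/cpu/grp2

# 100ms enforcement period; 200ms of quota per period = 2 CPUs each.
echo 100000 > /sys/fs/cgroup/cpu/grp1/cpu.cfs_period_us
echo 200000 > /sys/fs/cgroup/cpu/grp1/cpu.cfs_quota_us
echo 100000 > /sys/fs/cgroup/cpu/grp2/cpu.cfs_period_us
echo 200000 > /sys/fs/cgroup/cpu/grp2/cpu.cfs_quota_us
```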
> > One possibility is to make the idle load balancer more aggressive in
> > pulling tasks across sched-domain boundaries, i.e. when a CPU becomes
> > idle (after a task got throttled) and invokes the idle load balancer,
> > it should try "harder" at pulling a task from far-off cpus (across
> > package/node boundaries)?
>
> How about we just live with it?

I think we will, unless the load balancer can be improved (which seems
unlikely to me :-()

- vatsa
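The "try harder" idea in the quoted proposal can be caricatured outside
the kernel. A toy model (the domain hierarchy, runqueue counts, and
function names are all invented for illustration; this is not kernel
code): a conservative idle balancer that stops at the nearest domain
level finds no work, while an aggressive one that widens its search
across package/node boundaries pulls a task and avoids going idle.

```python
# Toy model of an idle CPU pulling work across widening sched-domain
# levels (e.g. same package -> same node -> other node).

# Domain levels for the idle CPU, innermost first: which CPUs it may
# consider stealing from at each level (illustrative topology).
DOMAINS = [
    [1],           # same package
    [2, 3],        # same node, other package
    [4, 5, 6, 7],  # other node
]

def idle_pull(runqueues, domains, max_level):
    """Try to pull one task for the idle CPU, searching at most
    max_level domain levels outward. Returns the CPU stolen from,
    or None if nothing was found."""
    for level, cpus in enumerate(domains):
        if level >= max_level:
            break  # a conservative balancer refuses to look further out
        for cpu in cpus:
            if runqueues.get(cpu, 0) > 1:  # victim has surplus tasks
                runqueues[cpu] -= 1
                return cpu
    return None

# Only a far-off CPU (7) has surplus work queued.
rq = {1: 1, 2: 1, 3: 0, 4: 1, 5: 1, 6: 1, 7: 3}

# Conservative: search only the first level -> nothing found, stay idle.
print(idle_pull(dict(rq), DOMAINS, max_level=1))   # None

# Aggressive: search all levels -> pulls from CPU 7 across the node
# boundary, so the idle CPU gets work despite the throttled local group.
print(idle_pull(dict(rq), DOMAINS, max_level=3))   # 7
```

The trade-off the thread is weighing is visible even here: the
aggressive search finds work, but in the kernel each extra level
crossed costs cache/NUMA locality, which is why making the balancer
"try harder" is not obviously a win.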