From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [patch 6/8] pull RT tasks
From: Peter Zijlstra
To: Steven Rostedt
Cc: LKML, RT, Linus Torvalds, Andrew Morton, Ingo Molnar, Thomas Gleixner, Gregory Haskins, Paul Jackson
Date: Fri, 19 Oct 2007 21:35:30 +0200
Message-Id: <1192822530.9471.22.camel@lappy>
In-Reply-To: <1192821893.9471.17.camel@lappy>
References: <20071019184254.456160632@goodmis.org> <20071019184336.983272715@goodmis.org> <1192821893.9471.17.camel@lappy>

On Fri, 2007-10-19 at 21:24 +0200, Peter Zijlstra wrote:
> On Fri, 2007-10-19 at 14:43 -0400, Steven Rostedt wrote:
> > plain text document attachment (rt-balance-pull-tasks.patch)
> >
> > +static int pull_rt_task(struct rq *this_rq)
> > +{
> > +	struct task_struct *next;
> > +	struct task_struct *p;
> > +	struct rq *src_rq;
> > +	int this_cpu = this_rq->cpu;
> > +	int cpu;
> > +	int ret = 0;
> > +
> > +	assert_spin_locked(&this_rq->lock);
> > +
> > +	if (likely(!atomic_read(&rt_overload)))
> > +		return 0;
>
> This seems to be the only usage of rt_overload. I'm not sure it's worth
> keeping it around for this.

Ingo just brought up a good point. With large SMP (where large is >64)
this will all suck chunks. rt_overload will bounce around the system,
and the rto_cpumask updates might already hurt.
The idea would be to do this per cpuset; these naturally limit the
migration possibilities of tasks and would thus be the natural locality
at which to break up this data structure.

> > +	next = pick_next_task_rt(this_rq);
> > +
> > +	for_each_cpu_mask(cpu, rto_cpumask) {
> > +		if (this_cpu == cpu)
> > +			continue;
>
> ...
>
> > +	}
> > +
> > +	return ret;
> > +}
>