From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S2992818AbXDRVsn (ORCPT ); Wed, 18 Apr 2007 17:48:43 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S2992796AbXDRVsn (ORCPT ); Wed, 18 Apr 2007 17:48:43 -0400
Received: from mx2.mail.elte.hu ([157.181.151.9]:46761 "EHLO mx2.mail.elte.hu"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S2992818AbXDRVsm (ORCPT ); Wed, 18 Apr 2007 17:48:42 -0400
Date: Wed, 18 Apr 2007 23:48:16 +0200
From: Ingo Molnar
To: Davide Libenzi
Cc: Linus Torvalds, Matt Mackall, Nick Piggin, William Lee Irwin III,
	Peter Williams, Mike Galbraith, Con Kolivas, ck list, Bill Huey,
	Linux Kernel Mailing List, Andrew Morton, Arjan van de Ven,
	Thomas Gleixner
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
Message-ID: <20070418214816.GA10902@elte.hu>
References: <20070418043831.GR11115@waste.org>
	<20070418050024.GF18452@wotan.suse.de>
	<20070418055525.GS11115@waste.org>
	<20070418152355.GU11115@waste.org>
	<20070418174945.GA7930@elte.hu>
	<20070418175936.GA11980@elte.hu>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
User-Agent: Mutt/1.4.2.2i
X-ELTE-VirusStatus: clean
X-ELTE-SpamScore: -2.0
X-ELTE-SpamLevel:
X-ELTE-SpamCheck: no
X-ELTE-SpamVersion: ELTE 2.0
X-ELTE-SpamCheck-Details: score=-2.0 required=5.9 tests=BAYES_00 autolearn=no
	SpamAssassin version=3.0.3
	-2.0 BAYES_00 BODY: Bayesian spam probability is 0 to 1%
	[score: 0.0000]
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

* Davide Libenzi wrote:

> I think Ingo's idea of a new sched_group to contain the generic
> parameters needed for the "key" calculation, works better than adding
> more fields to existing structures (that would, of course, host
> pointers to it).
> Otherwise I can already see the struct_signal being
> the target for other unrelated fields :)

yeah. Another detail is that for global containers like uids, the 
statistics will have to be percpu_alloc()-ed, both for correctness 
(runqueues are per CPU) and for performance. That's one reason why i 
dont think it's necessarily a good idea to group-schedule threads: we 
dont really want to do a per-thread-group percpu_alloc().

In fact for threads the _reverse_ problem exists: threaded apps tend to 
_strive_ for more performance - hence their desperation in using the 
threaded programming model to begin with ;) (just think of media 
playback apps, which are typically multithreaded)

I dont think threads are all that different. Also, the 
resource-conserving act of using CLONE_VM to share the VM (and to use a 
different programming environment like Java) should not be 'punished' by 
forcing the thread group to be accounted as a single, shared entity 
against other 'fat' tasks.

so my current impression is that we want per-UID accounting to solve the 
X problem, the kernel threads problem and the many-users problem, but 
i'd not want to do it for threads just yet, because for them there's not 
really any apparent problem to be solved.

	Ingo