Date: Mon, 23 Apr 2007 09:10:50 +0200
From: Ingo Molnar
To: Nick Piggin
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Andrew Morton, Con Kolivas, Mike Galbraith, Arjan van de Ven, Peter Williams, Thomas Gleixner, caglar@pardus.org.tr, Willy Tarreau, Gene Heskett, Mark Lord, Ulrich Drepper
Subject: Re: [patch] CFS scheduler, -v5
Message-ID: <20070423071050.GD4518@elte.hu>
In-Reply-To: <20070423040600.GD25162@wotan.suse.de>
References: <20070420140457.GA14017@elte.hu> <20070423011229.GA20367@elte.hu> <20070423012509.GA25162@wotan.suse.de> <20070423025553.GA10407@elte.hu> <20070423032215.GC25162@wotan.suse.de> <20070423034310.GA19845@elte.hu> <20070423040600.GD25162@wotan.suse.de>

* Nick Piggin wrote:

> > yeah - but they'll all be quad core, so the SMP timeslice
> > multiplicator should do the trick. Most of the CFS testers use
> > single-CPU systems.
> But desktop users could have quad thread and even 8 thread CPUs soon,
> so if the number doesn't work for both then you're in trouble. It
> just smells like a hack to scale with CPU numbers.

hm, i still like Con's approach in this case because it makes
independent sense: in essence we calculate the "human visible"
effective latency of a physical resource: more CPUs/threads means more
parallelism and less visible choppiness of whatever basic chunking of
workloads there might be, hence larger-size chunking can be done.

> > it doesn't in any test i do, but again, i'm erring on the side of
> > it being more interactive.
>
> I'd start by erring on the side of trying to ensure no obvious
> performance regressions like this because that's the easy part.
> Suppose everybody finds your scheduler wonderfully interactive, but
> you can't make it so with a larger timeslice?

look at CFS's design and you'll see that it can easily take larger
timeslices :) I really don't need any reinforcement on that part. But
i do need reinforcement and test results on the basic part: _can_ this
design be interactive enough on the desktop? So far the feedback has
been affirmative, but more testing is needed.

server scheduling, while obviously of prime importance to us, is
really 'easy' in comparison technically, because it involves a lot
fewer human factors and is thus a much more deterministic task.

> For _real_ desktop systems, sure, erring on the side of being more
> interactive is fine. For RFC patches for testing, I really think you
> could be taking advantage of the fact that people will give you
> feedback on the issue.

90% of the testers are using CFS on desktops. 80% of the scheduler
complaints concern the human (latency/behavior/consistency) aspect of
the upstream scheduler.
(Sure, we don't want to turn that around into '80% of the complaints
come due to performance' - so i increased the granularity, based on
your kbuild feedback, to near that of SD's, to show that
mini-timeslices are not a necessity in CFS - but i really think that
server scheduling is the easier part.)

	Ingo