public inbox for linux-kernel@vger.kernel.org
From: Mike Fedyk <mfedyk@matchmail.com>
To: Davide Libenzi <davidel@xmailserver.org>
Cc: Ingo Molnar <mingo@elte.hu>, lkml <linux-kernel@vger.kernel.org>
Subject: Re: [patch] scheduler cache affinity improvement for 2.4 kernels
Date: Thu, 8 Nov 2001 17:34:58 -0800	[thread overview]
Message-ID: <20011108173458.C14468@mikef-linux.matchmail.com> (raw)
In-Reply-To: <20011108170740.B14468@mikef-linux.matchmail.com> <Pine.LNX.4.40.0111081718570.1501-100000@blue1.dev.mcafeelabs.com>
In-Reply-To: <Pine.LNX.4.40.0111081718570.1501-100000@blue1.dev.mcafeelabs.com>

On Thu, Nov 08, 2001 at 05:29:25PM -0800, Davide Libenzi wrote:
> On Thu, 8 Nov 2001, Mike Fedyk wrote:
> 
> > [cc trimmed]
> >
> > On Thu, Nov 08, 2001 at 04:37:46PM -0800, Davide Libenzi wrote:
> > > On Thu, 8 Nov 2001, Mike Fedyk wrote:
> > >
> > > > Ingo's patch in effect lowers the number of reschedules per second (by
> > > > letting each task run for several jiffies before it is preempted).
> > > >
> > > > Davide's patch can take the default scheduler (even Ingo's enhanced
> > > > scheduler) and make it per processor, with his extra layer of scheduling
> > > > between individual processors.
> > >
> > > Don't mix things :)
> > > We're talking only about the CpuHistory token of the scheduler proposed here:
> > >
> > > http://www.xmailserver.org/linux-patches/mss.html
> > >
> > > This is a bigger ( and not yet complete ) change on the SMP scheduler
> > > behavior, while it keeps the scheduler that runs on each CPU the same.
> > > I'm currently working on different balancing methods to keep the proposed
> > > scheduler fairly well balanced without spinning tasks "too much"(tm).
> > >
> > I've given your patch a try, and so far it looks promising.
> >
> > Running one niced copy of cpuhog on a 2x366 MHz Celeron box did pretty well.
> > Instead of switching several times in one second, it only switched a few
> > times per minute.
> >
> > I was also able to merge it with just about everything else I was testing
> > (ext3, freeswan, elevator updates, -ac) except for the preempt patch.  Well,
> > I was able to merge that one manually, but CPU affinity broke: the second
> > processor wasn't used for anything except interrupt processing...
> >
> > I haven't tried any of the other scheduler patches though.  MQ looks
> > interesting... :)
> >
> > All in all, I think xsched will have much more impact on performance.
> > Simply because it tackles the problem of CPU affinity...
> >
> > Even comparing Ingo's patch to your CPU History patch isn't fair, because
> > they attack different problems.  Yours of CPU affinity, Ingo's of time spent
> > on individual tasks within a single processor.

Looking at that again, it could've been "Ingo's of intra-CPU time slice
length"... ;)

> xsched is not complete yet; it's a draft ( a working draft :) ) that I'm

A pretty good draft, I'd say!

> using to study heavier CPU task isolation on SMP systems.
> I think that this is the way to go for a more scalable SMP scheduler.
> I'm currently sampling the proposed scheduler with LatSched, which gives a
> very good picture of 1) process migration and 2) the _real_ cycle cost of
> scheduler latency.

:)

> The MQ scheduler has the same roots as the proposed one, but has a longer
> fast path due to trying to make global scheduling decisions at every
> schedule.

Ahh, so that's why it hasn't been adopted...

> I'm in contact ( close contact, since we're both in Beaverton :) ) with the
> IBM guys to have the two schedulers tested on bigger machines, if the
> proposed scheduler bears fruit.
>

From what I've seen, it probably will...

I hope something like this will go into 2.5...

What do other Unixes do in this case?  Are there any commercial Unixes that
have loose affinity like Linux currently does?  What about NT?


Thread overview: 24+ messages
2001-11-08 14:30 [patch] scheduler cache affinity improvement for 2.4 kernels Ingo Molnar
2001-11-08 15:22 ` M. Edward Borasky
2001-11-08 16:33   ` Ingo Molnar
2001-11-08 17:15 ` Davide Libenzi
2001-11-08 18:27   ` Ingo Molnar
2001-11-08 18:03     ` Davide Libenzi
2001-11-08 19:40       ` Ingo Molnar
2001-11-08 19:13         ` Davide Libenzi
2001-11-08 23:37           ` Mike Fedyk
2001-11-09  0:37             ` Davide Libenzi
2001-11-09  1:07               ` Mike Fedyk
2001-11-09  1:29                 ` Davide Libenzi
2001-11-09  1:34                   ` Mike Fedyk [this message]
2001-11-09  2:09                     ` Davide Libenzi
2001-11-09  2:08                       ` Mike Fedyk
2001-11-19 18:34               ` bill davidsen
2001-11-09  8:28             ` Ingo Molnar
2001-11-09  8:05               ` Mike Fedyk
2001-11-11 21:18               ` Davide Libenzi
2001-11-11 22:31                 ` Davide Libenzi
2001-11-08 23:46 ` Andrea Arcangeli
2001-11-09  0:31 ` Davide Libenzi
2001-11-14  4:56 ` Mike Kravetz
2001-11-14 18:08   ` Davide Libenzi
