public inbox for linux-kernel@vger.kernel.org
From: Larry McVoy <lm@bitmover.com>
To: Davide Libenzi <davidel@xmailserver.org>
Cc: Mike Kravetz <mkravetz@sequent.com>,
	lse-tech@lists.sourceforge.net, Andi Kleen <ak@suse.de>,
	linux-kernel@vger.kernel.org
Subject: Re: CPU affinity & IPI latency
Date: Thu, 12 Jul 2001 17:36:41 -0700	[thread overview]
Message-ID: <20010712173641.C11719@work.bitmover.com> (raw)
In-Reply-To: <20010712164017.C1150@w-mikek2.des.beaverton.ibm.com> <XFMail.20010712172255.davidel@xmailserver.org>
In-Reply-To: <XFMail.20010712172255.davidel@xmailserver.org>; from davidel@xmailserver.org on Thu, Jul 12, 2001 at 05:22:55PM -0700

Be careful tuning for LMbench (says the author :-)

Especially this benchmark.  It's certainly possible to get dramatically better
SMP numbers by pinning all the lat_ctx processes to a single CPU, because 
the benchmark is single threaded.  In other words, if we have 5 processes,
call them A, B, C, D, and E, then the benchmark is passing a token from
A to B to C to D to E and around again.  

If the amount of data/instructions needed by all 5 processes fits in the 
cache and you pin all the processes to the same CPU you'll get much 
better performance than simply letting them float.
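For reference, the token-passing structure is roughly the following: a minimal two-process sketch using a pair of pipes. This is illustrative only, not the actual LMbench lat_ctx source, and the function name is made up.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Bounce a one-byte token between parent and child through two pipes,
 * forcing a context switch on every hand-off. Returns the number of
 * completed round trips. */
int pass_token(int rounds)
{
    int p2c[2], c2p[2];          /* parent-to-child, child-to-parent */
    char tok = 'x';

    if (pipe(p2c) < 0 || pipe(c2p) < 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {              /* child: echo the token back */
        for (int i = 0; i < rounds; i++) {
            if (read(p2c[0], &tok, 1) != 1)
                _exit(1);
            write(c2p[1], &tok, 1);
        }
        _exit(0);
    }

    int done = 0;
    for (int i = 0; i < rounds; i++) {  /* parent: send, await echo */
        write(p2c[1], &tok, 1);
        if (read(c2p[0], &tok, 1) != 1)
            break;
        done++;
    }
    wait(NULL);                  /* reap the child */
    return done;
}
```

With all the processes on one CPU the token never leaves a warm cache; let them float across CPUs and every hand-off can add a cache refill plus IPI latency.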

But making the system do that naively is a bad idea.

This is a really hard area to get right but you can take a page from all
the failed process migration efforts.  In general, moving stuff is a bad
idea, it's much better to leave it where it is.  Everything scales better
if there is a process queue per CPU and the default is that you leave the
processes on the queue on which they last ran.  However, if the load average
for a queue starts going up and there is another queue with a substantially
lower load average, then and ONLY then, should you move the process.
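The migration rule above can be sketched as a single predicate. The margin constant and function name here are illustrative, not from any real scheduler; "substantially lower" would need tuning in practice.

```c
#include <stdbool.h>

#define MIGRATE_MARGIN 2   /* "substantially lower" threshold; arbitrary */

/* Default policy: leave the process on the queue where it last ran.
 * Move it only when this queue's load average exceeds another queue's
 * by more than the margin. */
bool should_migrate(int load_here, int load_there)
{
    return load_here - load_there > MIGRATE_MARGIN;
}
```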

I think if you experiment with that you'll see that lat_ctx does well and
so do a lot of other things.

An optimization on that requires hardware support.  If you knew the number
of cache misses associated with each time slice, you could factor that in
and start moving processes that have a "too high" cache miss rate, with the
idea being that we want to keep all processes on the same CPU if we can
but if that is causing an excessive cache miss rate, it's time to move.
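Assuming such a per-slice miss counter existed, the decision might factor it in like this. Everything here — the names, the margin, and the limit — is a hypothetical sketch, not real hardware or kernel API.

```c
#include <stdbool.h>

/* Stay put for cache warmth, unless either the load imbalance is too
 * large or the (hypothetical) hardware miss counter says the cache is
 * being thrashed anyway, in which case affinity is buying nothing. */
bool should_migrate_on_misses(int load_here, int load_there,
                              long misses_per_slice, long miss_limit)
{
    int margin = 2;   /* load-imbalance margin; arbitrary */

    return (load_here - load_there > margin) ||
           (misses_per_slice > miss_limit);
}
```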

Another optimization is to always schedule an exec-ed process (as opposed
to a forked process) on a different CPU than its parent.  In general, when
you exec you have a clear boundary and it's good to spread those out.
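A sketch of that exec-time placement, with an illustrative round-robin neighbor choice (the function is hypothetical, not any kernel's actual interface):

```c
/* exec() discards the old address space, so there is no warm cache
 * worth preserving; spread the new image to a different CPU than its
 * parent. Falls back to the parent's CPU on a uniprocessor. */
int pick_cpu_for_exec(int parent_cpu, int ncpus)
{
    if (ncpus < 2)
        return parent_cpu;
    return (parent_cpu + 1) % ncpus;
}
```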

All of this is based on my somewhat dated performance efforts that led to
LMbench.  I don't know of any fundamental changes that invalidate these
opinions but I could be wrong.

This is an area in which I've done a pile of work and I'd be interested
in keeping a finger in any efforts to fix up the scheduler.
-- 
---
Larry McVoy            	 lm at bitmover.com           http://www.bitmover.com/lm 


Thread overview: 30+ messages
2001-07-12 23:40 CPU affinity & IPI latency Mike Kravetz
2001-07-13  0:22 ` Davide Libenzi
2001-07-13  0:36   ` Larry McVoy [this message]
2001-07-13  2:06     ` Mark Hahn
2001-07-13 16:41     ` Davide Libenzi
2001-07-13 17:31       ` Mike Kravetz
2001-07-13 19:17         ` Davide Libenzi
2001-07-13 19:39           ` [Lse-tech] " Gerrit Huizenga
2001-07-13 20:05             ` Davide Libenzi
2001-07-13 17:05     ` Mike Kravetz
2001-07-13 19:51       ` David Lang
2001-07-13 22:43         ` Mike Kravetz
2001-07-15 20:02           ` Davide Libenzi
2001-07-15 20:10             ` [Lse-tech] " Andi Kleen
2001-07-15 20:15           ` Andi Kleen
2001-07-15 20:31             ` Davide Libenzi
2001-07-16 15:46             ` [Lse-tech] " Mike Kravetz
2001-07-13 19:54       ` Chris Wedgwood
2001-07-15  7:42 ` Troy Benjegerdes
2001-07-15  9:05   ` [Lse-tech] " Andi Kleen
2001-07-15 17:00     ` Troy Benjegerdes
2001-07-16  0:58       ` Mike Kravetz
  -- strict thread matches above, loose matches on Subject: below --
2001-07-14  3:25 Hubertus Franke
2001-07-16 16:14 ` Mike Kravetz
2001-07-16 21:25   ` Davide Libenzi
2001-07-16 10:10 Hubertus Franke
2001-07-16 16:16 ` Davide Libenzi
2001-07-16 18:26 Hubertus Franke
2001-07-16 21:45 Hubertus Franke
2001-07-16 22:56 ` Davide Libenzi
