From: Larry McVoy <lm@bitmover.com>
To: Linus Torvalds <torvalds@transmeta.com>
Cc: Hubertus Franke <frankeh@us.ibm.com>,
Mike Kravetz <mkravetz@beaverton.ibm.com>,
linux-kernel@vger.kernel.org, wscott@bitmover.com
Subject: Re: [RFC][PATCH] Scalable Scheduling
Date: Wed, 8 Aug 2001 11:18:44 -0700
Message-ID: <20010808111844.S23718@work.bitmover.com>
In-Reply-To: <Pine.LNX.4.33.0108081058420.8103-100000@penguin.transmeta.com>
On Wed, Aug 08, 2001 at 11:00:50AM -0700, Linus Torvalds wrote:
> Oh, and as I didn't actually run it, I have no idea about what performance
> is really like. I assume you've done lmbench runs across wide variety (ie
> UP to SMP) of machines with and without this?
I'd really, really, really like to see before/after cache miss counters for
lat_ctx runs. LMbench is not fine-grained enough to catch the addition of a
few cache misses unless the call path is so short that a 200ns cache miss
dominates it. Very few paths are.
Somebody really ought to take the time to write a cache miss counter program
that works like /bin/time, so I could do
$ cachemiss lat_ctx 2
10123 instruction misses, 22345 data misses, 50432 TLB flushes
Has anyone done that? If so, then what would be cool is if each of these
wonderful new features that people propose came with cache miss results for
the relevant part of LMbench or some other benchmark.
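In modern terms such a wrapper can be sketched with Linux's perf_event_open(2), which did not exist when this was written. The sketch below is an illustration, not any real cachemiss tool: the function name run_counted, the choice of events, and the graceful fallback when counters are unavailable are all assumptions. It forks, attaches instruction and cache-miss counters to the stopped child, releases it through a pipe, and prints the totals /bin/time-style.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Open one hardware counter on process "pid", initially disabled. */
static int perf_open(unsigned type, unsigned long long config, pid_t pid)
{
    struct perf_event_attr attr;

    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = type;
    attr.config = config;
    attr.disabled = 1;          /* enabled explicitly via ioctl below */
    attr.inherit = 1;           /* follow the child's threads */
    attr.exclude_kernel = 1;    /* user-space counts need less privilege */
    return (int)syscall(SYS_perf_event_open, &attr, pid, -1, -1, 0);
}

/* Run argv[] as a child, count its instructions and cache misses,
 * print the totals, and return the child's exit status. */
int run_counted(char *const argv[])
{
    int gate[2], status = 1;
    long long n;

    if (pipe(gate) != 0)
        return 1;
    pid_t pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0) {
        char c;
        close(gate[1]);
        read(gate[0], &c, 1);   /* block until counters are attached */
        close(gate[0]);
        execvp(argv[0], argv);
        _exit(127);             /* exec failed */
    }
    close(gate[0]);
    int insn = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_INSTRUCTIONS, pid);
    int miss = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_CACHE_MISSES, pid);
    if (insn >= 0) ioctl(insn, PERF_EVENT_IOC_ENABLE, 0);
    if (miss >= 0) ioctl(miss, PERF_EVENT_IOC_ENABLE, 0);
    write(gate[1], "g", 1);     /* release the child to exec */
    close(gate[1]);
    waitpid(pid, &status, 0);
    if (insn >= 0 && read(insn, &n, sizeof n) == (ssize_t)sizeof n)
        printf("%lld instructions\n", n);
    if (miss >= 0 && read(miss, &n, sizeof n) == (ssize_t)sizeof n)
        printf("%lld cache misses\n", n);
    if (insn < 0 || miss < 0)
        fprintf(stderr, "hardware counters unavailable here\n");
    if (insn >= 0) close(insn);
    if (miss >= 0) close(miss);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```

The counts are rough: nothing here subtracts the exec itself, and on locked-down systems (perf_event_paranoid) the counters simply report as unavailable while the command still runs.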
Then we need to get smart about looking at the results. It's quite easy to
convince yourself that all is well when running a microbenchmark (LMbench
is mostly microbenchmarks), because if the benchmark uses less than 100% of
the cache, you can keep adding cache footprint up to 100% of the cache and
still see really great cache miss results.
The lat_ctx benchmark tries to address this. For scheduler changes, I'd
want to see cache miss results for runs with different numbers and sizes
of processes. The lat_ctx benchmark can grow the per-process cache
footprint in powers of 2, i.e., it touches a power-of-2-sized chunk of
memory before context switching.
I don't remember whether it writes or merely reads the data; we should
certainly have a variant that just reads, to get around the write-through
cache problem.
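The footprint-touching idea is easy to picture. Here is a sketch of both variants; the 64-byte cache line size and the function names are illustrative assumptions, not lat_ctx's actual code. Each loop visits one word per cache line across a power-of-2-sized buffer, so the write version dirties every line while the read version only loads them:

```c
#include <stddef.h>

/* Write-touch: dirty one int per cache line across "size" bytes,
 * the way a benchmark can grow its cache footprint before each
 * context switch. Assumes 64-byte cache lines. */
void touch_write(int *buf, size_t size)
{
    size_t i, n = size / sizeof(int);

    for (i = 0; i < n; i += 64 / sizeof(int))
        buf[i]++;               /* forces the line into a dirty state */
}

/* Read-only variant: load each line without dirtying it, so nothing
 * has to be written back (or written through) on eviction. Returns a
 * sum so the compiler can't discard the loads. */
int touch_read(const int *buf, size_t size)
{
    size_t i, n = size / sizeof(int);
    int sum = 0;

    for (i = 0; i < n; i += 64 / sizeof(int))
        sum += buf[i];
    return sum;
}
```

Measuring both with the same miss counters would show directly how much of the context-switch cost is refilling clean lines versus flushing dirty ones.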
Does all this make sense to the performance people out there? It's good
to have lots of people understand the points here, argue them out, get
on the same page about them, and then police the various performance
changes that people want to make. I know Alan likes to call me "the man
who says no", but my voice is only one tiny voice, and I think we all
really rely on Linus to make these calls. Let's give him a little help.
Try to develop a mental picture of Linus leaning back in a nice rocking
chair, smoking a stogie, nodding sagely at the collective good judgement
of the list.
--
Larry McVoy lm at bitmover.com http://www.bitmover.com/lm