From: Dan Malek <dan@embeddededge.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: linuxppc-embedded@lists.linuxppc.org, Paul Mackerras <paulus@samba.org>
Subject: Re: LMBench and CONFIG_PIN_TLB
Date: Wed, 29 May 2002 10:40:02 -0400
Message-ID: <3CF4E842.3070207@embeddededge.com>
In-Reply-To: <20020529030838.GZ16537@zax>
David Gibson wrote:
> I did some LMBench runs to observe the effect of CONFIG_PIN_TLB.
I implemented the TLB pinning for two reasons: one, politics, since
everyone "just knows it is significantly better"; and two, to alleviate
the exception return path problem of taking a TLB miss after loading
SRR0/SRR1 (a miss in that window overwrites SRR0/SRR1, destroying the
saved return context before the rfi can use it).
> .... the difference varies from
> nothing (lost in the noise) to around 15% (fork proc). The only
> measurement where no pinned entries might be argued to win is
> LMbench's main memory latency measurement. The difference is < 0.1%
> and may just be chance fluctuation.
It has been my experience over the last 20 years that, in general,
applications that show high TLB miss activity are making inefficient
use of all system resources and aren't likely to be doing any useful
work. Why aren't we measuring cache efficiency? Why aren't we profiling
the kernel to see where code changes will really make a difference?
Why aren't we measuring TLB performance on all processors? If you want
to improve TLB performance, get a processor with larger TLBs or better
hardware support.
Pinning TLB entries simply reduces the entries available to everything
else. When I'm running a real application, doing real work in a real
product, I don't want these resources reserved for something that is
seldom used. There are lots of other TLB management implementations
that could really improve performance; they just don't fit well into
the current Linux/PowerPC design.
I have seen exactly one application where TLB pinning actually
improved the performance of the system. It was a real-time system,
based on Linux on an MPC8xx, where the maximum event response latency
had to be guaranteed. With the proper locking of pages and TLB pinning,
this could be done. It didn't improve the performance of the application,
but it did ensure the system operated properly.
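
For what it's worth, the application side of that setup is ordinary
POSIX page locking; a minimal userspace sketch (illustrative only, not
code from that product, and it needs root privilege to lock pages)
looks like this, with the TLB pinning being the kernel-side complement:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Lock all current and future pages into RAM so the
             * time-critical path can never take a page fault. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                    perror("mlockall");
                    return EXIT_FAILURE;
            }

            /* ... bounded-latency event handling runs here ... */

            munlockall();
            return EXIT_SUCCESS;
    }

The point is that pinning bought determinism, not throughput.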
> The difference between 1 and 2 pinned entries is very small.
> There are a few cases where 1 might be better (but it might just be
> random noise) and a very few where 2 might be better than one. On the
> basis of that there seems little point in pinning 2 entries.
What kind of scientific analysis is this? Run controlled tests, post
the results, explain the variances, and make them repeatable by
others. Is there any consistency to the results?
> ..... Unless someone can come up with a
> real life workload which works poorly with pinned TLBs, I see little
> point in keeping the option - pinned TLBs should always be on (pinning
> 1 entry).
Where is your data that supports this? Where is your "real life workload"
that actually supports what you want to do?
From my perspective, your data shows we shouldn't do it. A "real life
workload" is not a fork proc test but rather the main memory latency
test, where your results showed it was better not to pin entries, yet
you can't explain the "fluctuation." I contend the difference is due to
the fact that you have reduced the TLB resources, increasing the number
of TLB misses for an application that is trying to do real work.
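
For reference, a main memory latency test of the lat_mem_rd variety is
essentially a dependent pointer chase. Here is a minimal sketch (my
illustration, not LMBench itself); every TLB miss taken while chasing
the chain lands directly in the measured latency, which is exactly
where lost TLB entries would show up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NODES (1 << 20)           /* 4 MB of pointers on 32-bit PPC */
    #define ITERS (10 * 1000 * 1000)

    int main(void)
    {
            void **chain = malloc(NODES * sizeof(void *));
            void **p;
            long i;
            clock_t t0, t1;

            if (chain == NULL)
                    return 1;

            /* Link every slot to the one before it; a serious
             * benchmark uses a random permutation and varies the
             * working-set size and stride. */
            for (i = 1; i < NODES; i++)
                    chain[i] = &chain[i - 1];
            chain[0] = &chain[NODES - 1];

            p = &chain[0];
            t0 = clock();
            for (i = 0; i < ITERS; i++)
                    p = *p;           /* each load depends on the last */
            t1 = clock();

            /* Print p so the compiler cannot discard the loop. */
            printf("%p: %.1f ns per dependent load\n", (void *)p,
                   (t1 - t0) * 1e9 / CLOCKS_PER_SEC / ITERS);
            free(chain);
            return 0;
    }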
I suggest you heed the quote you always attach to your messages. This
isn't a simple solution that is suitable for all applications. It's one
option among many that needs to be tuned to meet the requirements of
an application.
Thanks.
-- Dan
Thread overview: 13+ messages
2002-05-29 3:08 LMBench and CONFIG_PIN_TLB David Gibson
2002-05-29 14:40 ` Dan Malek [this message]
2002-05-29 23:04 ` Paul Mackerras
2002-05-29 23:16 ` Tom Rini
2002-05-30 1:34 ` Dan Malek
2002-05-30 5:14 ` David Gibson
2002-05-30 16:09 ` Matthew Locke
2002-05-30 23:50 ` Paul Mackerras
2002-05-30 23:01 ` Matthew Locke
2002-05-31 2:39 ` David Gibson
2002-05-31 0:10 ` Tom Rini
2002-05-31 14:48 ` Tom Rini
2002-05-30 5:05 ` David Gibson