From: Andi Kleen <andi@firstfloor.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Andi Kleen <ak@linux.intel.com>, Andi Kleen <andi@firstfloor.org>,
x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] x86: Move msr accesses out of line
Date: Wed, 25 Feb 2015 19:20:32 +0100 [thread overview]
Message-ID: <20150225182032.GD823@two.firstfloor.org> (raw)
In-Reply-To: <20150225122701.GK5029@twins.programming.kicks-ass.net>
> Still, I wondered, so I ran me a little test. Note that I used a
> serializing instruction (LOCK XCHG) because WRMSR is too.
Unlike LOCK XCHG, WRMSR decodes into a lot of uops internally, so I
expect the out-of-line call overhead will mostly overlap with its
execution. I'll run some benchmarks on this today.
Also, we do quite a few RDMSRs, which are not necessarily
serializing.
> I see a ~14 cycle difference between the inline and noinline version.
>
> If I substitute the LOCK XCHG with XADD, I get to 1.5 cycles in
> difference, so clearly there is some magic happening, but serializing
> instructions wreck it.
>
> Anybody can explain how such RSP deps get magiced away?
On Intel Core CPUs (since Yonah), the front end has a dedicated stack
pointer tracker that eliminates these dependencies. See section
2.3.2.5 in the Intel optimization manual.
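The dependencies in question come from implicit RSP updates: every
push/pop adjusts RSP, and without the tracker each stack access would
wait on the RSP value from the previous one. The illustrative snippet
below (a hypothetical example, not from the patch) shows such a
sequence of implicit and explicit RSP-relative accesses; the front-end
tracker resolves the offsets so the pops need not serialize on RSP.

```c
#include <stdint.h>

#if defined(__x86_64__)
/* Each push/pop implicitly adjusts RSP; on Core the front-end stack
 * pointer tracker computes the offsets at decode time, so these stack
 * operations do not chain on an RSP value from the out-of-order core. */
static uint64_t push_pop_demo(uint64_t a, uint64_t b)
{
	uint64_t out;

	__asm__ volatile(
		"push %1\n\t"		/* implicit: rsp -= 8, slot = a   */
		"push %2\n\t"		/* implicit: rsp -= 8, slot = b   */
		"pop  %0\n\t"		/* implicit: out = b, rsp += 8    */
		"add  (%%rsp), %0\n\t"	/* rsp-relative load: out += a    */
		"add  $8, %%rsp\n\t"	/* explicit: balance the push     */
		: "=&r"(out)
		: "r"(a), "r"(b)
		: "memory");
	return out;
}
#else
/* Portable fallback so the example builds on non-x86 hosts. */
static uint64_t push_pop_demo(uint64_t a, uint64_t b)
{
	return a + b;
}
#endif
```

A serializing instruction such as LOCK XCHG in the middle of such a
sequence drains the pipeline, which is presumably why it wrecks the
effect in the measurement above.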
Also, BTW, just from tracing MSR accesses there is a lot of
optimization potential. Will send some patches later.
-Andi
Thread overview:
2015-02-21 1:38 [PATCH 1/3] x86: Move msr accesses out of line Andi Kleen
2015-02-21 1:38 ` [PATCH 2/3] x86: Add trace point for MSR accesses Andi Kleen
2015-02-21 1:38 ` [PATCH 3/3] perf, x86: Remove old MSR perf tracing code Andi Kleen
2015-02-23 17:04 ` [PATCH 1/3] x86: Move msr accesses out of line Peter Zijlstra
2015-02-23 17:43 ` Andi Kleen
2015-02-25 12:27 ` Peter Zijlstra
2015-02-25 18:20 ` Andi Kleen [this message]
2015-02-25 18:34 ` Borislav Petkov
2015-02-26 11:43 ` [RFC][PATCH] module: Optimize __module_address() using a latched RB-tree Peter Zijlstra
2015-02-26 12:00 ` Ingo Molnar
2015-02-26 14:12 ` Peter Zijlstra
2015-02-27 11:51 ` Rusty Russell
2015-02-26 16:02 ` Mathieu Desnoyers
2015-02-26 16:43 ` Peter Zijlstra
2015-02-26 16:55 ` Mathieu Desnoyers
2015-02-26 17:16 ` Peter Zijlstra
2015-02-26 17:22 ` Peter Zijlstra
2015-02-26 18:28 ` Paul E. McKenney
2015-02-26 19:06 ` Mathieu Desnoyers
2015-02-26 19:13 ` Peter Zijlstra
2015-02-26 19:41 ` Paul E. McKenney
2015-02-26 19:45 ` Peter Zijlstra
2015-02-26 22:32 ` Peter Zijlstra
2015-02-26 20:52 ` Andi Kleen
2015-02-26 22:36 ` Peter Zijlstra
2015-02-27 10:01 ` Peter Zijlstra
2015-02-28 23:30 ` Paul E. McKenney
2015-02-28 16:41 ` Peter Zijlstra
2015-02-28 16:56 ` Peter Zijlstra
2015-02-28 23:32 ` Paul E. McKenney
2015-03-02 9:24 ` Peter Zijlstra
2015-03-02 16:58 ` Paul E. McKenney
2015-02-27 12:02 ` Rusty Russell
2015-02-27 14:30 ` Peter Zijlstra