From: Don Zickus <dzickus@redhat.com>
To: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
Linus Torvalds <torvalds@linux-foundation.org>,
Andi Kleen <ak@linux.intel.com>,
dave.hansen@linux.intel.com,
Stephane Eranian <eranian@google.com>,
jmario@redhat.com,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Arnaldo Carvalho de Melo <acme@infradead.org>
Subject: Re: [PATCH] perf, x86: Optimize intel_pmu_pebs_fixup_ip()
Date: Mon, 28 Oct 2013 09:19:46 -0400
Message-ID: <20131028131946.GG108330@redhat.com>
In-Reply-To: <20131026103651.GA21294@gmail.com>

On Sat, Oct 26, 2013 at 12:36:52PM +0200, Ingo Molnar wrote:
>
> * Don Zickus <dzickus@redhat.com> wrote:
>
> > On Thu, Oct 24, 2013 at 12:52:06PM +0200, Peter Zijlstra wrote:
> > > On Wed, Oct 23, 2013 at 10:48:38PM +0200, Peter Zijlstra wrote:
> > > > I'll also make sure to test we actually hit the fault path
> > > > by concurrently running something like:
> > > >
> > > > while :; do echo 1 > /proc/sys/vm/drop_caches; done
> > > >
> > > > while doing perf top or so..
> > >
> > > So the below appears to work; I've run:
> > >
> > > while :; do echo 1 > /proc/sys/vm/drop_caches; sleep 1; done &
> > > while :; do make O=defconfig-build/ clean; perf record -a -g fp -e cycles:pp make O=defconfig-build/ -s -j64; done
> > >
> > > And verified that the if (in_nmi()) trace_printk() output showed up
> > > in the trace, confirming we indeed took the fault from NMI context.
> > >
> > > I've had this running for ~30 minutes and the machine is still
> > > healthy.
> > >
> > > Don, can you give this stuff a spin on your system?
> >
> > Hi Peter,
> >
> > I finally had a chance to run this on my machine. From my testing it
> > looks good: the performance numbers are better. I think my longest
> > latency went from 300K cycles down to 150K cycles, and very few
> > samples are that long now (most are under 100K cycles).
>
> Btw., do we know where those ~100k-150k cycles are spent
> specifically? 100k cycles is still an awful lot of time to spend in
> NMI context ...

I agree, there is still a bunch of latency in the NMI path; I believe it
is still in the PEBS code. I share the machine with a colleague right
now, so I haven't been able to isolate it yet.

But going from a few hundred samples over a million cycles to a couple
dozen over 100K was a big step. :-)
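
To chase down the rest, my plan is to start from the nmi:nmi_handler
duration tracepoint, which logs how long each registered NMI handler
ran. A rough session would look like this (assuming debugfs is mounted
at /sys/kernel/debug; the grep pattern is my guess at the perf
handler's symbol name):

  cd /sys/kernel/debug/tracing
  # each nmi_handler event records the handler symbol, its runtime
  # in ns, and whether it claimed the NMI
  echo 1 > events/nmi/nmi_handler/enable
  grep perf_event_nmi_handler trace_pipe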

I still see perf throttling, and people here are complaining about it,
so I plan to keep investigating. It is just taking me a while; sorry
about that.
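
For anyone reproducing this: the throttling shows up in dmesg as
"perf samples too long ... lowering kernel.perf_event_max_sample_rate"
warnings (exact wording may vary by kernel version), and the kernel's
auto-adjusted limits can be inspected and, for testing, loosened via
sysctl:

  # current auto-throttle state; the kernel lowers the sample rate
  # when its NMI handlers run too long
  sysctl kernel.perf_event_max_sample_rate
  sysctl kernel.perf_cpu_time_max_percent
  # testing only: let perf use up to 75% of CPU time before throttling
  sysctl -w kernel.perf_cpu_time_max_percent=75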
Cheers,
Don

Thread overview: 47+ messages
2013-10-14 20:35 x86, perf: throttling issues with long nmi latencies Don Zickus
2013-10-14 21:28 ` Andi Kleen
2013-10-15 10:14 ` Peter Zijlstra
2013-10-15 13:02 ` Peter Zijlstra
2013-10-15 14:32 ` Peter Zijlstra
2013-10-15 15:07 ` Peter Zijlstra
2013-10-15 15:41 ` Don Zickus
2013-10-16 10:57 ` [PATCH] perf, x86: Optimize intel_pmu_pebs_fixup_ip() Peter Zijlstra
2013-10-16 12:46 ` Don Zickus
2013-10-16 13:31 ` Peter Zijlstra
2013-10-16 13:54 ` Don Zickus
2013-10-17 11:21 ` Peter Zijlstra
2013-10-17 13:33 ` Peter Zijlstra
2013-10-29 14:07 ` [tip:perf/urgent] perf/x86: Fix NMI measurements tip-bot for Peter Zijlstra
2013-10-16 20:52 ` [PATCH] perf, x86: Optimize intel_pmu_pebs_fixup_ip() Andi Kleen
2013-10-16 21:03 ` Peter Zijlstra
2013-10-16 23:07 ` Peter Zijlstra
2013-10-17 9:41 ` Peter Zijlstra
2013-10-17 16:00 ` Don Zickus
2013-10-17 16:04 ` Don Zickus
2013-10-17 16:30 ` Peter Zijlstra
2013-10-17 18:26 ` Linus Torvalds
2013-10-17 21:08 ` Peter Zijlstra
2013-10-17 21:11 ` Peter Zijlstra
2013-10-17 22:01 ` Peter Zijlstra
2013-10-17 22:27 ` Linus Torvalds
2013-10-22 21:12 ` Peter Zijlstra
2013-10-23 7:09 ` Linus Torvalds
2013-10-23 20:48 ` Peter Zijlstra
2013-10-24 10:52 ` Peter Zijlstra
2013-10-24 13:47 ` Don Zickus
2013-10-24 14:06 ` Peter Zijlstra
2013-10-25 16:33 ` Don Zickus
2013-10-25 17:03 ` Peter Zijlstra
2013-10-26 10:36 ` Ingo Molnar
2013-10-28 13:19 ` Don Zickus [this message]
2013-10-29 14:08 ` [tip:perf/core] perf/x86: Further optimize copy_from_user_nmi() tip-bot for Peter Zijlstra
2013-10-23 7:44 ` [PATCH] perf, x86: Optimize intel_pmu_pebs_fixup_ip() Ingo Molnar
2013-10-17 14:49 ` Don Zickus
2013-10-17 14:51 ` Peter Zijlstra
2013-10-17 15:03 ` Don Zickus
2013-10-17 15:09 ` Peter Zijlstra
2013-10-17 15:11 ` Peter Zijlstra
2013-10-17 16:50 ` [tip:perf/core] perf/x86: " tip-bot for Peter Zijlstra
2013-10-15 16:22 ` x86, perf: throttling issues with long nmi latencies Don Zickus
2013-10-15 14:36 ` Don Zickus
2013-10-15 14:39 ` Peter Zijlstra