From: Andi Kleen <andi@firstfloor.org>
To: Vince Weaver <vince@deater.net>
Cc: Andi Kleen <andi@firstfloor.org>, Ingo Molnar <mingo@elte.hu>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Paul Mackerras <paulus@samba.org>,
linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>
Subject: Re: [numbers] perfmon/pfmon overhead of 17%-94%
Date: Sat, 4 Jul 2009 01:40:23 +0200
Message-ID: <20090703234023.GM2041@one.firstfloor.org>
In-Reply-To: <Pine.LNX.4.64.0907031719530.17372@pianoman.cluster.toy>

On Fri, Jul 03, 2009 at 05:25:32PM -0400, Vince Weaver wrote:
> >Vince Weaver <vince@deater.net> writes:
> >>
> >>as I said in a previous post, on most x86 chips the instructions_retired
> >>counter also includes any hardware interrupts that occur during the
> >>process runtime.
> >
> >On the other hand afaik near all chips have interrupt performance counter
> >events.
>
> I guess by "near all" you mean "only AMD"? The AMD event also has some

Intel CPUs typically have a HW_INT.RX event; AMD has a similar event.

> well, it's basically at least HZ extra instructions per however many
> seconds your benchmark runs, and unfortunately it's non-deterministic
> because it depends on keyboard/network/usb/etc interrupts too that may by
> chance happen while your program is running.
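The arithmetic behind "at least HZ extra instructions" can be sketched in a few lines of Python. All numbers here (HZ, runtime, workload size) are illustrative assumptions, not measurements from the thread:

```python
# Rough lower bound on interrupt-induced inflation of an
# instructions_retired count, per the discussion above.
# All concrete numbers are illustrative assumptions.

HZ = 1000                          # assumed kernel timer frequency
runtime_seconds = 10               # assumed benchmark runtime
true_instructions = 5_000_000_000  # hypothetical workload size

# On the affected x86 chips, each hardware interrupt that lands while
# the task is running adds at least one spuriously counted instruction.
timer_ticks = HZ * runtime_seconds
min_overcount = timer_ticks        # >= 1 extra count per timer tick

measured = true_instructions + min_overcount
relative_error = min_overcount / true_instructions

print(f"timer interrupts during run: {timer_ticks}")
print(f"minimum overcount: {min_overcount} instructions")
print(f"relative error >= {relative_error:.2e}")
```

The timer contribution is at least predictable from HZ and runtime; the keyboard/network/USB interrupts mentioned above add a run-to-run varying component on top of this floor, which is where the non-determinism comes from.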
>
> For me, it's the determinism that matters. Not overhead, not runtime not

To be honest, I don't think you'll ever be fully deterministic. Modern
computers and operating systems are just too complex, with too
many (often unpredictable) things going on in the background. In my own
experience even simulators (which are much more stable than
real hardware) are not fully deterministic. You'll always run
into problems.
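One way to see the distinction is that a purely software-maintained count of a fixed workload is exactly repeatable, while any measurement coupled to the hardware and OS (even simple wall-clock timing) is not. A minimal Python sketch, with a made-up workload:

```python
import time

def workload():
    """A fixed, input-free computation: identical work every run."""
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# A software "event count": the computed result. Fully deterministic.
results = [workload() for _ in range(3)]

# A hardware-coupled measurement: elapsed wall-clock time. Perturbed
# by interrupts, scheduling, frequency scaling, cache state, etc.
timings = []
for _ in range(3):
    t0 = time.perf_counter()
    workload()
    timings.append(time.perf_counter() - t0)

print("results:", results)    # identical on every run
print("timings:", timings)    # almost never identical run to run
```

Hardware retired-instruction counters sit between these two extremes: mostly repeatable, but contaminated by exactly the background activity this thread describes.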

If you need 100% determinism, use a simple microcontroller.
-Andi
--
ak@linux.intel.com -- Speaking for myself only.
Thread overview: 27+ messages
2009-06-24 13:59 performance counter 20% error finding retired instruction count Vince Weaver
2009-06-24 15:10 ` Ingo Molnar
2009-06-25 2:12 ` Vince Weaver
2009-06-25 6:50 ` Peter Zijlstra
2009-06-25 9:13 ` Ingo Molnar
2009-06-26 18:22 ` Vince Weaver
2009-06-26 19:12 ` Peter Zijlstra
2009-06-27 5:32 ` Ingo Molnar
2009-06-26 19:23 ` Vince Weaver
2009-06-27 6:04 ` performance counter ~0.4% " Ingo Molnar
2009-06-27 6:44 ` [numbers] perfmon/pfmon overhead of 17%-94% Ingo Molnar
2009-06-29 18:25 ` Vince Weaver
2009-06-29 21:02 ` Ingo Molnar
2009-07-02 21:07 ` Vince Weaver
2009-07-03 7:58 ` Ingo Molnar
2009-07-03 21:43 ` Vince Weaver
2009-07-03 18:31 ` Andi Kleen
2009-07-03 21:25 ` Vince Weaver
2009-07-03 23:40 ` Andi Kleen [this message]
2009-06-29 23:46 ` [patch] perf_counter: Add enable-on-exec attribute Ingo Molnar
2009-06-29 23:55 ` [numbers] perfmon/pfmon overhead of 17%-94% Ingo Molnar
2009-06-30 0:05 ` Ingo Molnar
2009-06-27 6:48 ` performance counter ~0.4% error finding retired instruction count Paul Mackerras
2009-06-27 17:28 ` Ingo Molnar
2009-06-29 2:12 ` Paul Mackerras
2009-06-29 2:13 ` Paul Mackerras
2009-06-29 3:48 ` Ingo Molnar