linux-kernel.vger.kernel.org archive mirror
From: David Ahern <dsahern@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>, Gleb Natapov <gleb@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>, KVM <kvm@vger.kernel.org>,
	yoshihiro.yunomae.ez@hitachi.com
Subject: Re: RFC: paravirtualizing perf_clock
Date: Mon, 28 Oct 2013 20:58:08 -0600	[thread overview]
Message-ID: <526F2440.9030607@gmail.com> (raw)
In-Reply-To: <20131028131556.GN19466@laptop.lan>

On 10/28/13 7:15 AM, Peter Zijlstra wrote:
>> Any suggestions on how to do this without impacting performance? I
>> noticed the MSR path seems to take about twice as long as the current
>> implementation (which I believe results in rdtsc in the VM for x86 with
>> a stable TSC).
>
> So assuming all the TSCs are in fact stable; you could implement this by
> syncing up the guest TSC to the host TSC on guest boot. I don't think
> anything _should_ rely on the absolute TSC value.
>
> Of course you then also need to make sure the host and guest tsc
> multipliers (cyc2ns) are identical, you can play games with
> cyc2ns_offset if you're brave.
>

This and the method Gleb mentioned are both going to be complex and 
fragile -- based on assumptions about how the perf_clock timestamps are 
generated. For example, 489223e assumes you have the tracepoint enabled 
at VM start with some means of capturing the data (e.g., an active 
perf session). In both cases the end result requires piecing together 
and re-generating the VM's timestamps on the events. For perf this 
means either modifying the tool to take parameters and an algorithm for 
rewriting the timestamps, or a homegrown tool to regenerate the file 
with updated timestamps.

To back out a bit, my end goal is to be able to create and merge 
perf events from any context on a KVM-based host -- guest userspace, 
guest kernel space, host userspace and host kernel space (userspace 
events with a perf_clock timestamp are another topic ;-)). Generating 
the events with the proper timestamp in the first place is simpler 
than trying to collect various tidbits of data, massage the timestamps 
(while hoping the clock source hasn't changed) and then merge the 
events.

And then, for the cherry on top, a design that works across 
architectures (e.g., x86 now, but arm later).

David


Thread overview: 11+ messages
2013-10-28  1:27 RFC: paravirtualizing perf_clock David Ahern
2013-10-28 13:00 ` Gleb Natapov
2013-10-28 13:15 ` Peter Zijlstra
2013-10-29  2:58   ` David Ahern [this message]
2013-10-29 13:23     ` Peter Zijlstra
2013-10-30  5:59     ` Masami Hiramatsu
2013-10-30 14:03       ` David Ahern
2013-10-31  8:09         ` Masami Hiramatsu
2013-10-31 16:45           ` David Ahern
2013-10-30 14:20     ` Gleb Natapov
2013-10-30 14:31       ` David Ahern
