From: George Dunlap <george.dunlap@citrix.com>
To: Paul Sujkov <psujkov@gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
xen-devel@lists.xen.org
Subject: Re: xentrace, xenalyze
Date: Wed, 24 Feb 2016 15:58:54 +0000
Message-ID: <56CDD33E.2090409@citrix.com>
In-Reply-To: <CA+KUHvxqX_G5AQH3zwSJ24=8WtAhHkh3QzSyodDjXaEN2M=6qw@mail.gmail.com>
On 24/02/16 15:24, Paul Sujkov wrote:
>> I think actually the first thing you might need to do is to get the xentrace
>> infrastructure working on ARM
>
> Already done that. It requires some patches to memory manager, timer and
> policies. I guess I should upstream them, though.
>
>> After that, the next thing would be to add the equivalent of VMEXIT and VMENTRY
>> traces in the hypervisor on ARM guest exit and entry
>
> It seems that this is already covered as well. At least, I get a pretty
> decent (and correct, if I supply the timer frequency instead of the CPU
> frequency to xenalyze - this is where it differs from x86) trace info summary.
You mean, you have local patches you haven't upstreamed? Or they're
already upstream? (If the latter, I don't see the trace definitions in
xen/include/public/trace.h...)
If I could see those traces I could give you better advice about how to
integrate them into xenalyze (and possibly how to change them so they
fit better into what xenalyze does).
>
>> add in extra tracing information
>> add support for analyzing that data to xenalyze
>
> And, well, these are exactly the steps I can really use some help with :)
> are there any examples of parsing some additional custom trace with
> xenalyze?
So at the basic level, xenalyze has a "dump" mode, which just attempts
to print out the trace records it sees in the file, in a human-readable
format, in the order in which they originally happened (even across
physical cores / processors).
To get *that* working, you just need to add it to the "triage" in
xenalyze.c:process_record().
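Just to illustrate, the shape of that change would be something like the
following (a sketch only -- TRC_ARM_MAIN and arm_process() are names I'm
making up here, and the exact field and variable names in the triage may
differ):

    /* Hypothetical: route a new ARM trace class in the process_record()
     * triage.  TRC_ARM_MAIN and arm_process() are invented names. */
    switch ( ri->evt.main )
    {
        /* ... existing cases (TRC_SCHED, TRC_HVM, ...) ... */
    case TRC_ARM_MAIN:
        arm_process(ri);      /* dump and/or summarize the ARM record */
        break;
    default:
        process_generic(ri);  /* fall back to a raw dump of the record */
        break;
    }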
But the real power of xenalyze is to aggregate information about how
many vmexits of a particular type happened, and how long we spent (in
cycles) doing each one.
The basic data structure for this is struct event_cycles_summary. You
keep one such struct for every distinct type of event whose cycle cost
you want to be able to classify. As you go through the trace file,
whenever that event happens, you call update_summary() with a pointer
to the event's summary struct and the number of cycles.
Then, when you're done processing the whole file, you call
PRINT_SUMMARY() with a pointer to the summary struct, along with the
printf-style text you want printed before the summary information.
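To make that concrete, the pattern is basically the one below (a sketch;
"wfi_exit" is an invented example name, and the exact PRINT_SUMMARY()
argument form is from memory):

    /* One summary per event type whose cycle cost you want to track. */
    struct event_cycles_summary wfi_exit_summary;

    /* While walking the trace: charge 'cycles' to this event type. */
    static void wfi_exit_account(unsigned long long cycles)
    {
        update_summary(&wfi_exit_summary, cycles);
    }

    /* After the whole file has been processed: print it, prefixed by
     * whatever printf-style label you like. */
    static void wfi_exit_report(void)
    {
        PRINT_SUMMARY(&wfi_exit_summary, " wfi exits:\n");
    }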
So the next step, after getting the ARM equivalent of TRC_HVM_VMEXIT and
TRC_HVM_VMENTRY set up, would be to add the equivalent of
hvm_vmexit_process() and hvm_vmentry_process() (and hvm_close_vmexit()).
You'd probably want to start by creating a new structure, arm_data, and
adding it to the vcpu_data struct beside hvm_data and pv_data (also
making a new VCPU_DATA_ARM enumeration value, of course).
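Roughly along these lines (just a sketch -- the field names, the
ARM_EXIT_MAX bound, and exactly what per-exit state you'll need are all
guesses at this point):

    /* New enumeration value alongside the existing ones. */
    enum { VCPU_DATA_NONE=0, VCPU_DATA_HVM, VCPU_DATA_PV, VCPU_DATA_ARM };

    /* Per-vcpu ARM state, parallel to hvm_data / pv_data.  All fields
     * here are invented for illustration. */
    struct arm_data {
        int init;                      /* seen the first exit yet?      */
        unsigned long long exit_tsc;   /* tsc of the last guest exit    */
        unsigned int exit_reason;      /* reason code from the trace    */
        struct event_cycles_summary summary[ARM_EXIT_MAX]; /* per reason */
    };

    struct vcpu_data {
        /* ... existing fields, including hvm_data / pv_data ... */
        struct arm_data arm;           /* new member                    */
    };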
The basic processing cycle goes like this:
* vmexit: Store the information about the vmexit in v->hvm_data
* Other HVM traces: add more information about what happened in v->hvm_data
* vmentry: Calculate the length of this event (vmentry.tsc -
vmexit.tsc), figure out all the different summaries that correspond to
this event, and call update_summary() on each of them (see the sketch
just below this list).
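In code that cycle might look something like this (sketch only: the
record layout, where the exit reason lives, and the helper names all
follow the invented arm_data above):

    /* vmexit equivalent: remember when and why we left the guest. */
    void arm_exit_process(struct record_info *ri, struct vcpu_data *v)
    {
        v->arm.exit_tsc = ri->tsc;       /* when we left the guest        */
        v->arm.exit_reason = ri->d[0];   /* assumed: reason in first word */
    }

    /* vmentry equivalent: charge the elapsed cycles to that exit reason. */
    void arm_entry_process(struct record_info *ri, struct vcpu_data *v)
    {
        if ( !v->arm.exit_tsc )
            return;                      /* never saw the matching exit   */

        update_summary(&v->arm.summary[v->arm.exit_reason],
                       ri->tsc - v->arm.exit_tsc);
        v->arm.exit_tsc = 0;
    }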
One subtlety to introduce here: it's not uncommon to enter Xen due
to a vmexit, do something on behalf of a guest, and then get scheduled
out to run some other vcpu. The simplistic "vmexit -> vmentry"
calculation would account that time spent waiting for the cpu as time
spent processing the event -- which is not what you want. So xenalyze
has a concept of "closing" a vmexit, which happens when the vmexit is
logically finished. hvm_close_vmexit() is called either from
hvm_vmentry_process(), or from process_runstate_change() when it detects
a vcpu switching to the "runnable" state.
OK, hopefully that gives you enough to start with. :-)
-George