From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH RFC] x86/HVM: meet xentrace's expectations on emulation event data
Date: Thu, 9 Aug 2018 09:09:07 +0000 [thread overview]
Message-ID: <9f8ef8a6534f4e1ea12d89792f8ce695@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <5B6BF4DB02000078001DC531@prv1-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 09 August 2018 09:02
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>; George Dunlap <George.Dunlap@citrix.com>
> Subject: [PATCH RFC] x86/HVM: meet xentrace's expectations on emulation
> event data
>
> According to the logic in hvm_mmio_assist_process(), 64 bits of data are
> expected with 64-bit addresses, and 32 bits of data with 32-bit ones. I
> don't think this is very reasonable, but I'm also not going to touch the
> consumer side, especially since it is in any case not very helpful for the
> code here to only ever supply 32 bits of data (despite the field being 64
> bits wide, and having been so even in the 32-bit days of Xen).
I suspect the data field was made 64 bits wide so that it could hold addresses, and no-one thought about 64 bits of immediate data (which of course you can only get with MMIO), whereas you can have 64-bit addresses even with PIO. Anyway, the change LGTM...
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> RFC: Untested; solely based on the observation of "(no data)" in a trace
> where it was entirely unclear why no data would have been available.
> It just so happened that the guest had more than 4GiB of memory, and
> hence addresses were not representable as 32-bit values.
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -30,7 +30,7 @@ struct hvmemul_cache
> static void hvmtrace_io_assist(const ioreq_t *p)
> {
> unsigned int size, event;
> - unsigned char buffer[12];
> + unsigned char buffer[16];
>
> if ( likely(!tb_init_done) )
> return;
> @@ -47,8 +47,11 @@ static void hvmtrace_io_assist(const ior
>
> if ( !p->data_is_ptr )
> {
> - *(uint32_t *)&buffer[size] = p->data;
> - size += 4;
> + if ( size == 4 )
> + *(uint32_t *)&buffer[size] = p->data;
> + else
> + *(uint64_t *)&buffer[size] = p->data;
> + size *= 2;
> }
>
> trace_var(event, 0/*!cycles*/, size, buffer);
>
>
>
Thread overview: 5+ messages
2018-08-09 8:01 [PATCH RFC] x86/HVM: meet xentrace's expectations on emulation event data Jan Beulich
2018-08-09 9:09 ` Paul Durrant [this message]
2018-08-29 7:10 ` Ping: " Jan Beulich
2018-08-29 14:22 ` Andrew Cooper
2018-08-29 14:39 ` Jan Beulich