From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Olaf Hering <olaf@aepfle.de>
Subject: Re: [PATCH v2 2/2] x86/HVM: split page straddling emulated accesses in more cases
Date: Mon, 3 Sep 2018 16:15:04 +0000
Message-ID: <131b993a872142deaa6a5c62487b9e03@AMSPEX02CL03.citrite.net>
In-Reply-To: <5B8D56E602000078001E4C0B@prv1-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 03 September 2018 16:45
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Olaf Hering <olaf@aepfle.de>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>
> Subject: [PATCH v2 2/2] x86/HVM: split page straddling emulated accesses in
> more cases
>
> Assuming consecutive linear addresses map to all RAM or all MMIO is not
> correct. Nor is assuming that a page straddling MMIO access will access
> the same emulating component for both parts of the access. If a guest
> RAM read fails with HVMTRANS_bad_gfn_to_mfn and if the access straddles
> a page boundary, issue accesses separately for both parts.
>
> The extra call to known_glfn() from hvmemul_write() is just to preserve
> original behavior; we should consider dropping this (also to make sure
> the intended effect of 8cbd4fb0b7 ["x86/hvm: implement hvmemul_write()
> using real mappings"] is achieved in all cases where it can be achieved
> without further rework), or alternatively we perhaps ought to mirror
> this in hvmemul_rmw().
>
> Note that the correctness of this depends on the MMIO caching used
> elsewhere in the emulation code.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Also handle hvmemul_{write,rmw}().
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1041,6 +1041,110 @@ static inline int hvmemul_linear_mmio_wr
>                                        pfec, hvmemul_ctxt, translate);
>  }
>
> +static bool known_glfn(unsigned long addr, unsigned int bytes, uint32_t pfec)
> +{
> +    const struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
> +
> +    if ( pfec & PFEC_write_access )
> +    {
> +        if ( !vio->mmio_access.write_access )
> +            return false;
> +    }
> +    else if ( pfec & PFEC_insn_fetch )
> +    {
> +        if ( !vio->mmio_access.insn_fetch )
> +            return false;
> +    }
> +    else if ( !vio->mmio_access.read_access )
> +        return false;
> +
> +    return (vio->mmio_gla == (addr & PAGE_MASK) &&
> +            (addr & ~PAGE_MASK) + bytes <= PAGE_SIZE);
> +}
> +
Would it be possible to split the introduction of the above function into a separate patch? AFAICT it is not intrinsically involved in handling page straddling, and it was certainly not there in your RFC patch.
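If it were a separate prep patch, the open-coded mmio_access/mmio_gla checks that you remove from __hvmemul_read(), hvmemul_write() and hvmemul_rmw() would then, AFAICT, simply collapse to calls of the new helper, e.g. for the read path something like (sketch only, and ignoring the extra '(addr & ~PAGE_MASK) + bytes <= PAGE_SIZE' length check that the helper adds):

    if ( known_glfn(addr, bytes, pfec) )
        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
                                        hvmemul_ctxt, 1);
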
Paul
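
P.S. Just to spell out what the new split does, a worked example (the 8-byte read at linear address 0x10ffc and the 4k page size are hypothetical values; the calls are the ones from linear_read() below):

    offset = addr & ~PAGE_MASK;       /* 0xffc */
    /* offset + bytes == 0x1004 > PAGE_SIZE, so the access straddles */
    part1 = PAGE_SIZE - offset;       /* 4 bytes up to the page boundary */
    rc = linear_read(0x10ffc, 4, p_data, pfec, hvmemul_ctxt);
    if ( rc == X86EMUL_OKAY )         /* ... then the remaining 4 bytes */
        rc = linear_read(0x11000, 4, p_data + 4, pfec, hvmemul_ctxt);
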
> +static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
> +                       uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> +    pagefault_info_t pfinfo;
> +    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> +
> +    switch ( rc )
> +    {
> +        unsigned int offset, part1;
> +
> +    case HVMTRANS_okay:
> +        return X86EMUL_OKAY;
> +
> +    case HVMTRANS_bad_linear_to_gfn:
> +        x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> +        return X86EMUL_EXCEPTION;
> +
> +    case HVMTRANS_bad_gfn_to_mfn:
> +        if ( pfec & PFEC_insn_fetch )
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        offset = addr & ~PAGE_MASK;
> +        if ( offset + bytes <= PAGE_SIZE )
> +            return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
> +                                            hvmemul_ctxt,
> +                                            known_glfn(addr, bytes, pfec));
> +
> +        /* Split the access at the page boundary. */
> +        part1 = PAGE_SIZE - offset;
> +        rc = linear_read(addr, part1, p_data, pfec, hvmemul_ctxt);
> +        if ( rc == X86EMUL_OKAY )
> +            rc = linear_read(addr + part1, bytes - part1, p_data + part1,
> +                             pfec, hvmemul_ctxt);
> +        return rc;
> +
> +    case HVMTRANS_gfn_paged_out:
> +    case HVMTRANS_gfn_shared:
> +        return X86EMUL_RETRY;
> +    }
> +
> +    return X86EMUL_UNHANDLEABLE;
> +}
> +
> +static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
> +                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> +    pagefault_info_t pfinfo;
> +    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> +
> +    switch ( rc )
> +    {
> +        unsigned int offset, part1;
> +
> +    case HVMTRANS_okay:
> +        return X86EMUL_OKAY;
> +
> +    case HVMTRANS_bad_linear_to_gfn:
> +        x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> +        return X86EMUL_EXCEPTION;
> +
> +    case HVMTRANS_bad_gfn_to_mfn:
> +        offset = addr & ~PAGE_MASK;
> +        if ( offset + bytes <= PAGE_SIZE )
> +            return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
> +                                             hvmemul_ctxt,
> +                                             known_glfn(addr, bytes, pfec));
> +
> +        /* Split the access at the page boundary. */
> +        part1 = PAGE_SIZE - offset;
> +        rc = linear_write(addr, part1, p_data, pfec, hvmemul_ctxt);
> +        if ( rc == X86EMUL_OKAY )
> +            rc = linear_write(addr + part1, bytes - part1, p_data + part1,
> +                              pfec, hvmemul_ctxt);
> +        return rc;
> +
> +    case HVMTRANS_gfn_paged_out:
> +    case HVMTRANS_gfn_shared:
> +        return X86EMUL_RETRY;
> +    }
> +
> +    return X86EMUL_UNHANDLEABLE;
> +}
> +
>  static int __hvmemul_read(
>      enum x86_segment seg,
>      unsigned long offset,
> @@ -1049,11 +1153,8 @@ static int __hvmemul_read(
>      enum hvm_access_type access_type,
>      struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
> -    struct vcpu *curr = current;
> -    pagefault_info_t pfinfo;
>      unsigned long addr, reps = 1;
>      uint32_t pfec = PFEC_page_present;
> -    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
>      int rc;
>
>      if ( is_x86_system_segment(seg) )
> @@ -1067,34 +1168,8 @@ static int __hvmemul_read(
>          seg, offset, bytes, &reps, access_type, hvmemul_ctxt, &addr);
>      if ( rc != X86EMUL_OKAY || !bytes )
>          return rc;
> -    if ( ((access_type != hvm_access_insn_fetch
> -           ? vio->mmio_access.read_access
> -           : vio->mmio_access.insn_fetch)) &&
> -         (vio->mmio_gla == (addr & PAGE_MASK)) )
> -        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec, hvmemul_ctxt, 1);
> -
> -    rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> -
> -    switch ( rc )
> -    {
> -    case HVMTRANS_okay:
> -        break;
> -    case HVMTRANS_bad_linear_to_gfn:
> -        x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> -        return X86EMUL_EXCEPTION;
> -    case HVMTRANS_bad_gfn_to_mfn:
> -        if ( access_type == hvm_access_insn_fetch )
> -            return X86EMUL_UNHANDLEABLE;
> -
> -        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec, hvmemul_ctxt, 0);
> -    case HVMTRANS_gfn_paged_out:
> -    case HVMTRANS_gfn_shared:
> -        return X86EMUL_RETRY;
> -    default:
> -        return X86EMUL_UNHANDLEABLE;
> -    }
>
> -    return X86EMUL_OKAY;
> +    return linear_read(addr, bytes, p_data, pfec, hvmemul_ctxt);
>  }
>
>  static int hvmemul_read(
> @@ -1171,12 +1246,10 @@ static int hvmemul_write(
>  {
>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
> -    struct vcpu *curr = current;
>      unsigned long addr, reps = 1;
>      uint32_t pfec = PFEC_page_present | PFEC_write_access;
> -    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
>      int rc;
> -    void *mapping;
> +    void *mapping = NULL;
>
>      if ( is_x86_system_segment(seg) )
>          pfec |= PFEC_implicit;
> @@ -1188,16 +1261,15 @@ static int hvmemul_write(
>      if ( rc != X86EMUL_OKAY || !bytes )
>          return rc;
>
> -    if ( vio->mmio_access.write_access &&
> -         (vio->mmio_gla == (addr & PAGE_MASK)) )
> -        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmemul_ctxt, 1);
> -
> -    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> -    if ( IS_ERR(mapping) )
> -        return ~PTR_ERR(mapping);
> +    if ( !known_glfn(addr, bytes, pfec) )
> +    {
> +        mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> +        if ( IS_ERR(mapping) )
> +            return ~PTR_ERR(mapping);
> +    }
>
>      if ( !mapping )
> -        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmemul_ctxt, 0);
> +        return linear_write(addr, bytes, p_data, pfec, hvmemul_ctxt);
>
>      memcpy(mapping, p_data, bytes);
>
> @@ -1218,7 +1290,6 @@ static int hvmemul_rmw(
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>      unsigned long addr, reps = 1;
>      uint32_t pfec = PFEC_page_present | PFEC_write_access;
> -    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
>      int rc;
>      void *mapping;
>
> @@ -1244,18 +1315,14 @@ static int hvmemul_rmw(
>      else
>      {
>          unsigned long data = 0;
> -        bool known_gpfn = vio->mmio_access.write_access &&
> -                          vio->mmio_gla == (addr & PAGE_MASK);
>
>          if ( bytes > sizeof(data) )
>              return X86EMUL_UNHANDLEABLE;
> -        rc = hvmemul_linear_mmio_read(addr, bytes, &data, pfec, hvmemul_ctxt,
> -                                      known_gpfn);
> +        rc = linear_read(addr, bytes, &data, pfec, hvmemul_ctxt);
>          if ( rc == X86EMUL_OKAY )
>              rc = x86_emul_rmw(&data, bytes, eflags, state, ctxt);
>          if ( rc == X86EMUL_OKAY )
> -            rc = hvmemul_linear_mmio_write(addr, bytes, &data, pfec,
> -                                           hvmemul_ctxt, known_gpfn);
> +            rc = linear_write(addr, bytes, &data, pfec, hvmemul_ctxt);
>      }
>
>      return rc;
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel