From: Paul Durrant <Paul.Durrant@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: "Andrew Cooper" <Andrew.Cooper3@citrix.com>,
"Mihai Donțu" <mdontu@bitdefender.com>,
"Razvan Cojocaru" <rcojocaru@bitdefender.com>,
"Jan Beulich" <JBeulich@suse.com>
Subject: Re: [PATCH 6/6] x86/hvm: Implement hvmemul_write() using real mappings
Date: Wed, 21 Jun 2017 16:19:00 +0000 [thread overview]
Message-ID: <29dda16cf3474229b0f12ad0e93f9fec@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <1498057952-13556-7-git-send-email-andrew.cooper3@citrix.com>
> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 21 June 2017 16:13
> To: Xen-devel <xen-devel@lists.xen.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
> <JBeulich@suse.com>; Paul Durrant <Paul.Durrant@citrix.com>; Razvan
> Cojocaru <rcojocaru@bitdefender.com>; Mihai Donțu
> <mdontu@bitdefender.com>
> Subject: [PATCH 6/6] x86/hvm: Implement hvmemul_write() using real
> mappings
>
> An access which crosses a page boundary is performed atomically by x86
> hardware, albeit with a severe performance penalty. An important corner
> case is when a straddled access hits two pages which differ in whether a
> translation exists, or in net access rights.
>
> The use of hvm_copy*() in hvmemul_write() is problematic, because it
> performs a translation then completes the partial write, before moving
> onto the next translation.
>
> If an individual emulated write straddles two pages, the first of which is
> writable, and the second of which is not, the first half of the write will
> complete before #PF is raised from the second half.
>
> This results in guest state corruption as a side effect of emulation, which
> has been observed to cause Windows to crash while under introspection.
>
> Introduce the hvmemul_{,un}map_linear_addr() helpers, which translate an
> entire contents of a linear access, and vmap() the underlying frames to
> provide a contiguous virtual mapping for the emulator to use. This is the
> same mechanism as used by the shadow emulation code.
>
> This will catch any translation issues and abort the emulation before any
> modifications occur.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Paul Durrant <paul.durrant@citrix.com>
> CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
> CC: Mihai Donțu <mdontu@bitdefender.com>
>
> While the maximum size of linear mapping is capped at 1 page, the logic
> in the helpers is written to work properly as hvmemul_ctxt->mfn[] gets
> longer, specifically with XSAVE instruction emulation in mind.
>
> This has only had light testing so far.
> ---
> xen/arch/x86/hvm/emulate.c | 179 ++++++++++++++++++++++++++++++++++----
> xen/include/asm-x86/hvm/emulate.h | 7 ++
> 2 files changed, 169 insertions(+), 17 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 384ad0b..804bea5 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -498,6 +498,159 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
> }
>
> /*
> + * Map the frame(s) covering an individual linear access, for writeable
> + * access. May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other
> + * errors, including ERR_PTR(~X86EMUL_OKAY) for write-discard mappings.
> + *
> + * In debug builds, map() checks that each slot in hvmemul_ctxt->mfn[] is
> + * clean before use, and poisons unused slots with INVALID_MFN.
> + */
> +static void *hvmemul_map_linear_addr(
> + unsigned long linear, unsigned int bytes, uint32_t pfec,
> + struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> + struct vcpu *curr = current;
> + void *err, *mapping;
> +
> + /* First and final gfns which need mapping. */
> + unsigned long frame = linear >> PAGE_SHIFT, first = frame;
> + unsigned long final = (linear + bytes - !!bytes) >> PAGE_SHIFT;
Do we need to worry about linear + bytes overflowing here?
Also, is this ever legitimately called with bytes == 0?
> +
> + /*
> + * mfn points to the next free slot. All used slots have a page reference
> + * held on them.
> + */
> + mfn_t *mfn = &hvmemul_ctxt->mfn[0];
> +
> + /*
> + * The caller has no legitimate reason for trying a zero-byte write, but
> + * final is calculated to fail safe in release builds.
> + *
> + * The maximum write size depends on the number of adjacent mfns[] which
> + * can be vmap()'d, accounting for possible misalignment within the region.
> + * The higher level emulation callers are responsible for ensuring that
> + * mfns[] is large enough for the requested write size.
> + */
> + if ( bytes == 0 ||
> + final - first > ARRAY_SIZE(hvmemul_ctxt->mfn) - 1 )
> + {
I guess not, so why the weird-looking calculation for final? Its value will not be used when bytes == 0.
> + ASSERT_UNREACHABLE();
> + goto unhandleable;
> + }
> +
> + do {
> + enum hvm_translation_result res;
> + struct page_info *page;
> + pagefault_info_t pfinfo;
> + p2m_type_t p2mt;
> +
> + res = hvm_translate_get_page(curr, frame << PAGE_SHIFT, true, pfec,
> + &pfinfo, &page, NULL, &p2mt);
> +
> + switch ( res )
> + {
> + case HVMTRANS_okay:
> + break;
> +
> + case HVMTRANS_bad_linear_to_gfn:
> + x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> + err = ERR_PTR(~(long)X86EMUL_EXCEPTION);
> + goto out;
> +
> + case HVMTRANS_bad_gfn_to_mfn:
> + err = NULL;
> + goto out;
> +
> + case HVMTRANS_gfn_paged_out:
> + case HVMTRANS_gfn_shared:
> + err = ERR_PTR(~(long)X86EMUL_RETRY);
> + goto out;
> +
> + default:
> + goto unhandleable;
> + }
> +
> + /* Error checking. Confirm that the current slot is clean. */
> + ASSERT(mfn_x(*mfn) == 0);
> +
> + *mfn++ = _mfn(page_to_mfn(page));
> + frame++;
> +
> + if ( p2m_is_discard_write(p2mt) )
> + {
> + err = ERR_PTR(~(long)X86EMUL_OKAY);
> + goto out;
> + }
> +
> + } while ( frame <= final );
> +
> + /* Entire access within a single frame? */
> + if ( first == final )
> + mapping = map_domain_page(hvmemul_ctxt->mfn[0]) + (linear & ~PAGE_MASK);
> + /* Multiple frames? Need to vmap(). */
> + else if ( (mapping = vmap(hvmemul_ctxt->mfn,
> + mfn - hvmemul_ctxt->mfn)) == NULL )
> + goto unhandleable;
> +
> +#ifndef NDEBUG /* Poison unused mfn[]s with INVALID_MFN. */
> + while ( mfn < hvmemul_ctxt->mfn + ARRAY_SIZE(hvmemul_ctxt->mfn) )
> + {
> + ASSERT(mfn_x(*mfn) == 0);
> + *mfn++ = INVALID_MFN;
> + }
> +#endif
> +
> + return mapping;
> +
> + unhandleable:
> + err = ERR_PTR(~(long)X86EMUL_UNHANDLEABLE);
> +
> + out:
> + /* Drop all held references. */
> + while ( mfn-- > hvmemul_ctxt->mfn )
> + put_page(mfn_to_page(mfn_x(*mfn)));
> +
> + return err;
> +}
> +
> +static void hvmemul_unmap_linear_addr(
> + void *mapping, unsigned long linear, unsigned int bytes,
> + struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> + struct domain *currd = current->domain;
> + unsigned long frame = linear >> PAGE_SHIFT;
> + unsigned long final = (linear + bytes - !!bytes) >> PAGE_SHIFT;
> + mfn_t *mfn = &hvmemul_ctxt->mfn[0];
> +
> + ASSERT(bytes > 0);
Why not return if bytes == 0? I know it's not a legitimate call, but in a non-debug build it would result in unmap_domain_page() being called below.
Paul
> +
> + if ( frame == final )
> + unmap_domain_page(mapping);
> + else
> + vunmap(mapping);
> +
> + do
> + {
> + ASSERT(mfn_valid(*mfn));
> + paging_mark_dirty(currd, *mfn);
> + put_page(mfn_to_page(mfn_x(*mfn)));
> +
> + frame++;
> + *mfn++ = _mfn(0); /* Clean slot for map()'s error checking. */
> +
> + } while ( frame <= final );
> +
> +
> +#ifndef NDEBUG /* Check (and clean) all unused mfns. */
> + while ( mfn < hvmemul_ctxt->mfn + ARRAY_SIZE(hvmemul_ctxt->mfn) )
> + {
> + ASSERT(mfn_eq(*mfn, INVALID_MFN));
> + *mfn++ = _mfn(0);
> + }
> +#endif
> +}
> +
> +/*
> * Convert addr from linear to physical form, valid over the range
> * [addr, addr + *reps * bytes_per_rep]. *reps is adjusted according to
> * the valid computed range. It is always >0 when X86EMUL_OKAY is returned.
> @@ -987,11 +1140,11 @@ static int hvmemul_write(
> struct hvm_emulate_ctxt *hvmemul_ctxt =
> container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
> struct vcpu *curr = current;
> - pagefault_info_t pfinfo;
> unsigned long addr, reps = 1;
> uint32_t pfec = PFEC_page_present | PFEC_write_access;
> struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
> int rc;
> + void *mapping;
>
> if ( is_x86_system_segment(seg) )
> pfec |= PFEC_implicit;
> @@ -1007,23 +1160,15 @@ static int hvmemul_write(
> (vio->mmio_gla == (addr & PAGE_MASK)) )
> return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmemul_ctxt, 1);
>
> - rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> -
> - switch ( rc )
> - {
> - case HVMTRANS_okay:
> - break;
> - case HVMTRANS_bad_linear_to_gfn:
> - x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> - return X86EMUL_EXCEPTION;
> - case HVMTRANS_bad_gfn_to_mfn:
> + mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> + if ( IS_ERR(mapping) )
> + return ~PTR_ERR(mapping);
> + else if ( !mapping )
> return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmemul_ctxt, 0);
> - case HVMTRANS_gfn_paged_out:
> - case HVMTRANS_gfn_shared:
> - return X86EMUL_RETRY;
> - default:
> - return X86EMUL_UNHANDLEABLE;
> - }
> +
> + memcpy(mapping, p_data, bytes);
> +
> + hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);
>
> return X86EMUL_OKAY;
> }
> diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
> index 8864775..65efd4e 100644
> --- a/xen/include/asm-x86/hvm/emulate.h
> +++ b/xen/include/asm-x86/hvm/emulate.h
> @@ -37,6 +37,13 @@ struct hvm_emulate_ctxt {
> unsigned long seg_reg_accessed;
> unsigned long seg_reg_dirty;
>
> + /*
> + * MFNs behind temporary mappings in the write callback. The length is
> + * arbitrary, and can be increased if writes longer than PAGE_SIZE are
> + * needed.
> + */
> + mfn_t mfn[2];
> +
> uint32_t intr_shadow;
>
> bool_t set_context;
> --
> 2.1.4
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel