From: Andrew Cooper <andrew.cooper3@citrix.com>
To: xen-devel@lists.xen.org, Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH 5/5] x86/HVM: cache emulated instruction for retry processing
Date: Thu, 10 Oct 2013 12:35:31 +0100
Message-ID: <52569103.60308@citrix.com>
In-Reply-To: <524991D902000078000F8087@nat28.tlf.novell.com>


On 30/09/13 13:59, Jan Beulich wrote:
> Rather than re-reading the instruction bytes upon retry processing,
> stash away and re-use what we already read. That way we can be certain
> that the retry won't do something different from what originally
> requested the retry, getting once again closer to real hardware
> behavior (where what we use retries for is simply a bus operation,
> not involving redundant decoding of instructions).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

It occurs to me that this caching is also needed in the shadow emulation
path, but that is probably better done in a separate patch/series than
by polluting this HVM series. A sketch of the pattern follows below.
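
To make the flow concrete, here is a minimal self-contained sketch of
the pattern the insn-fetch hunk below implements (all names and the toy
fetch routine are illustrative, not the real Xen interfaces): the first
pass through the fetch path fills a cache alongside the read, and a
retry replays bytes from that cache instead of re-reading guest memory.

#include <assert.h>
#include <string.h>
#include <stdio.h>

#define INSN_BUF_SIZE 16

struct insn_cache {
    unsigned int bytes;                /* 0 => nothing cached yet */
    unsigned char buf[INSN_BUF_SIZE];
};

/* Stand-in for reading instruction bytes from guest memory. */
static void fetch_from_guest(unsigned char *dst, unsigned int len)
{
    memset(dst, 0x90, len);            /* pretend the guest code is NOPs */
}

static void insn_fetch(struct insn_cache *c, unsigned char *dst,
                       unsigned int off, unsigned int len)
{
    if ( off + len > c->bytes )        /* miss: go to guest memory ... */
    {
        fetch_from_guest(dst, len);
        assert(off + len <= sizeof(c->buf));
        memcpy(&c->buf[off], dst, len);     /* ... and grow the cache */
        c->bytes = off + len;
        return;
    }
    memcpy(dst, &c->buf[off], len);    /* hit: replay identical bytes */
}

int main(void)
{
    struct insn_cache c = { 0 };
    unsigned char b[4];

    insn_fetch(&c, b, 0, 4);           /* first pass: fetches and caches */
    insn_fetch(&c, b, 0, 4);           /* retry: served from the cache */
    printf("cached %u bytes\n", c.bytes);
    return 0;
}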

>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -569,9 +569,19 @@ static int hvmemul_insn_fetch(
>  
>      /* Fall back if requested bytes are not in the prefetch cache. */
>      if ( unlikely((insn_off + bytes) > hvmemul_ctxt->insn_buf_bytes) )
> -        return __hvmemul_read(
> -            seg, offset, p_data, bytes,
> -            hvm_access_insn_fetch, hvmemul_ctxt);
> +    {
> +        int rc = __hvmemul_read(seg, offset, p_data, bytes,
> +                                hvm_access_insn_fetch, hvmemul_ctxt);
> +
> +        if ( rc == X86EMUL_OKAY )
> +        {
> +            ASSERT(insn_off + bytes <= sizeof(hvmemul_ctxt->insn_buf));
> +            memcpy(&hvmemul_ctxt->insn_buf[insn_off], p_data, bytes);
> +            hvmemul_ctxt->insn_buf_bytes = insn_off + bytes;
> +        }
> +
> +        return rc;
> +    }
>  
>      /* Hit the cache. Simple memcpy. */
>      memcpy(p_data, &hvmemul_ctxt->insn_buf[insn_off], bytes);
> @@ -1148,17 +1158,27 @@ int hvm_emulate_one(
>          pfec |= PFEC_user_mode;
>  
>      hvmemul_ctxt->insn_buf_eip = regs->eip;
> -    hvmemul_ctxt->insn_buf_bytes =
> -        hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf)
> -        ? :
> -        (hvm_virtual_to_linear_addr(
> -            x86_seg_cs, &hvmemul_ctxt->seg_reg[x86_seg_cs],
> -            regs->eip, sizeof(hvmemul_ctxt->insn_buf),
> -            hvm_access_insn_fetch, hvmemul_ctxt->ctxt.addr_size, &addr) &&
> -         !hvm_fetch_from_guest_virt_nofault(
> -             hvmemul_ctxt->insn_buf, addr,
> -             sizeof(hvmemul_ctxt->insn_buf), pfec))
> -        ? sizeof(hvmemul_ctxt->insn_buf) : 0;
> +    if ( !vio->mmio_insn_bytes )
> +    {
> +        hvmemul_ctxt->insn_buf_bytes =
> +            hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
> +            (hvm_virtual_to_linear_addr(x86_seg_cs,
> +                                        &hvmemul_ctxt->seg_reg[x86_seg_cs],
> +                                        regs->eip,
> +                                        sizeof(hvmemul_ctxt->insn_buf),
> +                                        hvm_access_insn_fetch,
> +                                        hvmemul_ctxt->ctxt.addr_size,
> +                                        &addr) &&
> +             hvm_fetch_from_guest_virt_nofault(hvmemul_ctxt->insn_buf, addr,
> +                                               sizeof(hvmemul_ctxt->insn_buf),
> +                                               pfec) == HVMCOPY_okay) ?
> +            sizeof(hvmemul_ctxt->insn_buf) : 0;
> +    }
> +    else
> +    {
> +        hvmemul_ctxt->insn_buf_bytes = vio->mmio_insn_bytes;
> +        memcpy(hvmemul_ctxt->insn_buf, vio->mmio_insn, vio->mmio_insn_bytes);
> +    }
>  
>      hvmemul_ctxt->exn_pending = 0;
>      vio->mmio_retrying = vio->mmio_retry;
> @@ -1169,7 +1189,16 @@ int hvm_emulate_one(
>      if ( rc == X86EMUL_OKAY && vio->mmio_retry )
>          rc = X86EMUL_RETRY;
>      if ( rc != X86EMUL_RETRY )
> +    {
>          vio->mmio_large_read_bytes = vio->mmio_large_write_bytes = 0;
> +        vio->mmio_insn_bytes = 0;
> +    }
> +    else
> +    {
> +        BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
> +        vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
> +        memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
> +    }
>  
>      if ( rc != X86EMUL_OKAY )
>          return rc;
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -66,6 +66,9 @@ struct hvm_vcpu_io {
>      /* We may write up to m256 as a number of device-model transactions. */
>      unsigned int mmio_large_write_bytes;
>      paddr_t mmio_large_write_pa;
> +    /* For retries we shouldn't re-fetch the instruction. */
> +    unsigned int mmio_insn_bytes;
> +    unsigned char mmio_insn[16];
>      /*
>       * For string instruction emulation we need to be able to signal a
>       * necessary retry through other than function return codes.
>
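
The hand-off across a retry then reduces to the following lifecycle
(again with illustrative names, modelled on the hvm_emulate_one() hunk
above rather than copied from it): a pass ending in X86EMUL_RETRY parks
the fetched bytes in the per-vcpu fields, the next pass seeds its
insn_buf from them, and any non-retry outcome invalidates the cache.
Note that mmio_insn[] is at least as large as insn_buf (that is what
the BUILD_BUG_ON asserts), and its 16 bytes comfortably cover the
architectural 15-byte x86 instruction length limit.

#include <string.h>
#include <stdio.h>

enum rc { OKAY, RETRY };

struct vio   { unsigned int mmio_insn_bytes; unsigned char mmio_insn[16]; };
struct ectxt { unsigned int insn_buf_bytes;  unsigned char insn_buf[16]; };

static enum rc emulate_one(struct vio *vio, enum rc outcome)
{
    struct ectxt c;

    if ( vio->mmio_insn_bytes )        /* seed from a previous RETRY */
    {
        c.insn_buf_bytes = vio->mmio_insn_bytes;
        memcpy(c.insn_buf, vio->mmio_insn, vio->mmio_insn_bytes);
    }
    else                               /* fresh fetch from guest memory */
    {
        c.insn_buf_bytes = 4;
        memset(c.insn_buf, 0x90, 4);
    }

    if ( outcome != RETRY )
        vio->mmio_insn_bytes = 0;      /* done (or failed): drop the cache */
    else
    {
        vio->mmio_insn_bytes = c.insn_buf_bytes;    /* park for the retry */
        memcpy(vio->mmio_insn, c.insn_buf, c.insn_buf_bytes);
    }
    return outcome;
}

int main(void)
{
    struct vio vio = { 0 };

    emulate_one(&vio, RETRY);          /* pass 1: parks 4 bytes */
    printf("parked %u bytes\n", vio.mmio_insn_bytes);
    emulate_one(&vio, OKAY);           /* pass 2: replays, then clears */
    printf("after completion: %u bytes\n", vio.mmio_insn_bytes);
    return 0;
}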


