xen-devel.lists.xenproject.org archive mirror
From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Olaf Hering <olaf@aepfle.de>
Subject: Re: [PATCH v3 3/3] x86/HVM: split page straddling emulated accesses in more cases
Date: Thu, 6 Sep 2018 13:14:30 +0000	[thread overview]
Message-ID: <a32daf95223547959b38263998c20758@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <5B9125D102000078001E5F1D@prv1-mh.provo.novell.com>

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 06 September 2018 14:04
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Olaf Hering <olaf@aepfle.de>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>
> Subject: [PATCH v3 3/3] x86/HVM: split page straddling emulated accesses in
> more cases
> 
> Assuming consecutive linear addresses map to all RAM or all MMIO is not
> correct. Nor is assuming that a page straddling MMIO access will access
> the same emulating component for both parts of the access. If a guest
> RAM read fails with HVMTRANS_bad_gfn_to_mfn and if the access straddles
> a page boundary, issue accesses separately for both parts.
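For anyone following along, the split being added here boils down to the arithmetic below. This is a hypothetical standalone sketch, not Xen code: page_read(), split_read() and the flat guest buffer are invented for illustration, but the offset/part1 computation mirrors the patch.

```c
/*
 * Standalone sketch of the page-boundary split done by linear_read() /
 * linear_write() in the patch.  All names here are invented for
 * illustration -- the same offset/part1 arithmetic, applied to a plain
 * memory buffer instead of guest address space.
 */
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Per-page backend stand-in: refuses any access crossing a page
 * boundary, mimicking the single-page limitation of the real handlers. */
static int page_read(const unsigned char *guest, unsigned long addr,
                     unsigned int bytes, void *p_data)
{
    if ( (addr & ~PAGE_MASK) + bytes > PAGE_SIZE )
        return -1;                 /* straddles a page: caller must split */
    memcpy(p_data, guest + addr, bytes);
    return 0;
}

/* The split from the patch: handle the first part up to the end of the
 * page, then recurse once for the remainder, which starts page-aligned. */
static int split_read(const unsigned char *guest, unsigned long addr,
                      unsigned int bytes, void *p_data)
{
    unsigned int offset = addr & ~PAGE_MASK;
    unsigned int part1;
    int rc;

    if ( offset + bytes <= PAGE_SIZE )
        return page_read(guest, addr, bytes, p_data);

    part1 = PAGE_SIZE - offset;
    rc = split_read(guest, addr, part1, p_data);
    if ( rc == 0 )
        rc = split_read(guest, addr + part1, bytes - part1,
                        (unsigned char *)p_data + part1);
    return rc;
}
```

Note the recursion depth is bounded at one for emulated accesses, since the second part starts page-aligned.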
> 
> The extra call to known_gla() from hvmemul_write() is just to preserve
> original behavior; for consistency the check also gets added to
> hvmemul_rmw() (albeit I remain unsure whether we wouldn't be better off
> dropping both).
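Incidentally, the page-fit condition that known_gla() now enforces is easy to state in isolation. Below is a hypothetical standalone reduction for illustration (gla_match() and its arguments are invented names, not the Xen helper):

```c
/*
 * Hypothetical reduction of the strengthened known_gla() condition:
 * the cached MMIO GLA only matches when the whole access also fits
 * within that single page.  gla_match() is an invented name.
 */
#include <assert.h>
#include <stdbool.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

static bool gla_match(unsigned long cached_gla, unsigned long addr,
                      unsigned int bytes)
{
    return cached_gla == (addr & PAGE_MASK) &&
           (addr & ~PAGE_MASK) + bytes <= PAGE_SIZE;
}
```

Without the second conjunct, a straddling access whose first byte lies in the cached page would wrongly take the fast path for its entire length.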
> 
> Note that the correctness of this depends on the MMIO caching used
> elsewhere in the emulation code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
> v3: Move introduction of known_gla() to a prereq patch. Mirror check
>     using the function into hvmemul_rmw().
> v2: Also handle hvmemul_{write,rmw}().
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1058,7 +1058,91 @@ static bool known_gla(unsigned long addr
>      else if ( !vio->mmio_access.read_access )
>              return false;
> 
> -    return vio->mmio_gla == (addr & PAGE_MASK);
> +    return (vio->mmio_gla == (addr & PAGE_MASK) &&
> +            (addr & ~PAGE_MASK) + bytes <= PAGE_SIZE);
> +}
> +
> +static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
> +                       uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> +    pagefault_info_t pfinfo;
> +    int rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> +
> +    switch ( rc )
> +    {
> +        unsigned int offset, part1;
> +
> +    case HVMTRANS_okay:
> +        return X86EMUL_OKAY;
> +
> +    case HVMTRANS_bad_linear_to_gfn:
> +        x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> +        return X86EMUL_EXCEPTION;
> +
> +    case HVMTRANS_bad_gfn_to_mfn:
> +        if ( pfec & PFEC_insn_fetch )
> +            return X86EMUL_UNHANDLEABLE;
> +
> +        offset = addr & ~PAGE_MASK;
> +        if ( offset + bytes <= PAGE_SIZE )
> +            return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
> +                                            hvmemul_ctxt,
> +                                            known_gla(addr, bytes, pfec));
> +
> +        /* Split the access at the page boundary. */
> +        part1 = PAGE_SIZE - offset;
> +        rc = linear_read(addr, part1, p_data, pfec, hvmemul_ctxt);
> +        if ( rc == X86EMUL_OKAY )
> +            rc = linear_read(addr + part1, bytes - part1, p_data + part1,
> +                             pfec, hvmemul_ctxt);
> +        return rc;
> +
> +    case HVMTRANS_gfn_paged_out:
> +    case HVMTRANS_gfn_shared:
> +        return X86EMUL_RETRY;
> +    }
> +
> +    return X86EMUL_UNHANDLEABLE;
> +}
> +
> +static int linear_write(unsigned long addr, unsigned int bytes, void *p_data,
> +                        uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> +    pagefault_info_t pfinfo;
> +    int rc = hvm_copy_to_guest_linear(addr, p_data, bytes, pfec, &pfinfo);
> +
> +    switch ( rc )
> +    {
> +        unsigned int offset, part1;
> +
> +    case HVMTRANS_okay:
> +        return X86EMUL_OKAY;
> +
> +    case HVMTRANS_bad_linear_to_gfn:
> +        x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> +        return X86EMUL_EXCEPTION;
> +
> +    case HVMTRANS_bad_gfn_to_mfn:
> +        offset = addr & ~PAGE_MASK;
> +        if ( offset + bytes <= PAGE_SIZE )
> +            return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
> +                                             hvmemul_ctxt,
> +                                             known_gla(addr, bytes, pfec));
> +
> +        /* Split the access at the page boundary. */
> +        part1 = PAGE_SIZE - offset;
> +        rc = linear_write(addr, part1, p_data, pfec, hvmemul_ctxt);
> +        if ( rc == X86EMUL_OKAY )
> +            rc = linear_write(addr + part1, bytes - part1, p_data + part1,
> +                              pfec, hvmemul_ctxt);
> +        return rc;
> +
> +    case HVMTRANS_gfn_paged_out:
> +    case HVMTRANS_gfn_shared:
> +        return X86EMUL_RETRY;
> +    }
> +
> +    return X86EMUL_UNHANDLEABLE;
>  }
> 
>  static int __hvmemul_read(
> @@ -1069,7 +1153,6 @@ static int __hvmemul_read(
>      enum hvm_access_type access_type,
>      struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
> -    pagefault_info_t pfinfo;
>      unsigned long addr, reps = 1;
>      uint32_t pfec = PFEC_page_present;
>      int rc;
> @@ -1085,31 +1168,8 @@ static int __hvmemul_read(
>          seg, offset, bytes, &reps, access_type, hvmemul_ctxt, &addr);
>      if ( rc != X86EMUL_OKAY || !bytes )
>          return rc;
> -    if ( known_gla(addr, bytes, pfec) )
> -        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec, hvmemul_ctxt, 1);
> 
> -    rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> -
> -    switch ( rc )
> -    {
> -    case HVMTRANS_okay:
> -        break;
> -    case HVMTRANS_bad_linear_to_gfn:
> -        x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> -        return X86EMUL_EXCEPTION;
> -    case HVMTRANS_bad_gfn_to_mfn:
> -        if ( access_type == hvm_access_insn_fetch )
> -            return X86EMUL_UNHANDLEABLE;
> -
> -        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec, hvmemul_ctxt, 0);
> -    case HVMTRANS_gfn_paged_out:
> -    case HVMTRANS_gfn_shared:
> -        return X86EMUL_RETRY;
> -    default:
> -        return X86EMUL_UNHANDLEABLE;
> -    }
> -
> -    return X86EMUL_OKAY;
> +    return linear_read(addr, bytes, p_data, pfec, hvmemul_ctxt);
>  }
> 
>  static int hvmemul_read(
> @@ -1189,7 +1249,7 @@ static int hvmemul_write(
>      unsigned long addr, reps = 1;
>      uint32_t pfec = PFEC_page_present | PFEC_write_access;
>      int rc;
> -    void *mapping;
> +    void *mapping = NULL;
> 
>      if ( is_x86_system_segment(seg) )
>          pfec |= PFEC_implicit;
> @@ -1201,15 +1261,15 @@ static int hvmemul_write(
>      if ( rc != X86EMUL_OKAY || !bytes )
>          return rc;
> 
> -    if ( known_gla(addr, bytes, pfec) )
> -        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmemul_ctxt, 1);
> -
> -    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> -    if ( IS_ERR(mapping) )
> -        return ~PTR_ERR(mapping);
> +    if ( !known_gla(addr, bytes, pfec) )
> +    {
> +        mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> +        if ( IS_ERR(mapping) )
> +            return ~PTR_ERR(mapping);
> +    }
> 
>      if ( !mapping )
> -        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec, hvmemul_ctxt, 0);
> +        return linear_write(addr, bytes, p_data, pfec, hvmemul_ctxt);
> 
>      memcpy(mapping, p_data, bytes);
> 
> @@ -1231,7 +1291,7 @@ static int hvmemul_rmw(
>      unsigned long addr, reps = 1;
>      uint32_t pfec = PFEC_page_present | PFEC_write_access;
>      int rc;
> -    void *mapping;
> +    void *mapping = NULL;
> 
>      rc = hvmemul_virtual_to_linear(
>          seg, offset, bytes, &reps, hvm_access_write, hvmemul_ctxt, &addr);
> @@ -1243,9 +1303,12 @@ static int hvmemul_rmw(
>      else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
>          pfec |= PFEC_user_mode;
> 
> -    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> -    if ( IS_ERR(mapping) )
> -        return ~PTR_ERR(mapping);
> +    if ( !known_gla(addr, bytes, pfec) )
> +    {
> +        mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
> +        if ( IS_ERR(mapping) )
> +            return ~PTR_ERR(mapping);
> +    }
> 
>      if ( mapping )
>      {
> @@ -1255,17 +1318,14 @@ static int hvmemul_rmw(
>      else
>      {
>          unsigned long data = 0;
> -        bool known_gpfn = known_gla(addr, bytes, pfec);
> 
>          if ( bytes > sizeof(data) )
>              return X86EMUL_UNHANDLEABLE;
> -        rc = hvmemul_linear_mmio_read(addr, bytes, &data, pfec, hvmemul_ctxt,
> -                                      known_gpfn);
> +        rc = linear_read(addr, bytes, &data, pfec, hvmemul_ctxt);
>          if ( rc == X86EMUL_OKAY )
>              rc = x86_emul_rmw(&data, bytes, eflags, state, ctxt);
>          if ( rc == X86EMUL_OKAY )
> -            rc = hvmemul_linear_mmio_write(addr, bytes, &data, pfec,
> -                                           hvmemul_ctxt, known_gpfn);
> +            rc = linear_write(addr, bytes, &data, pfec, hvmemul_ctxt);
>      }
> 
>      return rc;
> 
> 
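One more observation on the reworked hvmemul_write()/hvmemul_rmw() structure: `mapping` now doubles as the path selector. A hypothetical reduction of that control flow, with write_path() and both boolean parameters invented for illustration:

```c
/*
 * Hypothetical reduction of the reworked hvmemul_write() control flow:
 * skip mapping when the GLA is already known to be MMIO, and fall back
 * to the splitting linear_write() path whenever no mapping is obtained.
 * write_path() and its arguments are invented names, not Xen code.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum wpath { PATH_MAPPED, PATH_SPLIT_MMIO };

static enum wpath write_path(bool known_mmio_gla, bool map_succeeds)
{
    static char buf[16];
    void *mapping = NULL;

    if ( !known_mmio_gla && map_succeeds )
        mapping = buf;            /* stands in for hvmemul_map_linear_addr() */

    if ( !mapping )
        return PATH_SPLIT_MMIO;   /* i.e. linear_write() splits & emulates */

    return PATH_MAPPED;           /* direct memcpy() into the mapping */
}
```

The error leg (IS_ERR() on the mapping) is omitted here; the point is only that a NULL mapping, whether because the GLA is known MMIO or because mapping found no RAM, funnels into the same splitting path.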



Thread overview: 28+ messages
2018-08-30 11:05 [PATCH 0/2] x86/HVM: emulation adjustments Jan Beulich
2018-08-30 11:09 ` [PATCH 1/2] x86/HVM: drop hvm_fetch_from_guest_linear() Jan Beulich
2018-08-30 11:18   ` Andrew Cooper
2018-08-30 12:02     ` Jan Beulich
2018-08-30 12:22       ` Andrew Cooper
2018-08-30 12:28         ` Jan Beulich
2018-08-30 11:24   ` Andrew Cooper
2018-08-30 11:09 ` [PATCH RFC 2/2] x86/HVM: split page straddling emulated accesses in more cases Jan Beulich
2018-09-03  9:11   ` Paul Durrant
2018-09-03 15:41 ` [PATCH v2 0/2] x86/HVM: emulation adjustments Jan Beulich
2018-09-03 15:43   ` [PATCH v2 1/2] x86/HVM: drop hvm_fetch_from_guest_linear() Jan Beulich
2018-09-04 10:01     ` Jan Beulich
2018-09-05  7:20     ` Olaf Hering
2018-09-03 15:44   ` [PATCH v2 2/2] x86/HVM: split page straddling emulated accesses in more cases Jan Beulich
2018-09-03 16:15     ` Paul Durrant
2018-09-04  7:43       ` Jan Beulich
2018-09-04  8:15         ` Paul Durrant
2018-09-04  8:46           ` Jan Beulich
2018-09-05  7:22     ` Olaf Hering
2018-09-06 12:58 ` [PATCH v3 0/3] x86/HVM: emulation adjustments Jan Beulich
2018-09-06 13:03   ` [PATCH v3 1/3] x86/HVM: drop hvm_fetch_from_guest_linear() Jan Beulich
2018-09-06 14:51     ` Paul Durrant
2018-09-06 13:03   ` [PATCH v3 2/3] x86/HVM: add known_gla() emulation helper Jan Beulich
2018-09-06 13:12     ` Paul Durrant
2018-09-06 13:22       ` Jan Beulich
2018-09-06 14:50         ` Paul Durrant
2018-09-06 13:04   ` [PATCH v3 3/3] x86/HVM: split page straddling emulated accesses in more cases Jan Beulich
2018-09-06 13:14     ` Paul Durrant [this message]
