From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case
Date: Wed, 12 Sep 2018 11:51:50 +0000
Message-ID: <6ea95d8032c94db79cee98c62dbe1d56@AMSPEX02CL03.citrite.net>
In-Reply-To: <5B98D7CF02000078001E7AEA@prv1-mh.provo.novell.com>

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 12 September 2018 10:10
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul Durrant
> <Paul.Durrant@citrix.com>
> Subject: [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-
> page case
> 
> The function does two translations in one go for a single guest access.
> Any failure of the first translation step (guest linear -> guest
> physical), resulting in #PF, ought to take precedence over any failure
> of the second step (guest physical -> host physical). Bail out of the
> loop early solely when translation produces HVMTRANS_bad_linear_to_gfn,
> and record the most relevant of perhaps multiple different errors
> otherwise. (The choice of ZERO_BLOCK_PTR as sentinel is arbitrary.)

Could we perhaps have a comment saying what the order of relevance of the errors is? The logic in update_map_err() below is a little hard to follow.
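
FWIW, the precedence I infer from the helper is the following (in
increasing order of precedence, i.e. a later entry, once recorded, is
not overwritten by an earlier one). This is just my reading of the code
rather than anything the patch states, so please correct me if I have
it wrong:

    /*
     * ZERO_BLOCK_PTR                - sentinel, no error yet: always
     *                                 replaced by whatever comes next
     * ERR_PTR(~X86EMUL_OKAY)        - discard-write: replaced by any
     *                                 other error
     * ERR_PTR(~X86EMUL_RETRY)       - paged-out/shared: replaced by
     *                                 anything except ~X86EMUL_OKAY
     * NULL / ERR_PTR(~X86EMUL_UNHANDLEABLE)
     *                               - bad gfn -> mfn, or an unexpected
     *                                 translation result: sticky; the
     *                                 first one recorded wins
     */

E.g. for an access spanning two pages where the first page is
discard-write and the second is paged out, the loop now runs to
completion and the access gets X86EMUL_RETRY (so it is retried once the
page is back in) rather than the write simply being discarded. If my
reading is right, a comment along these lines above update_map_err()
would do.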

  Paul

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -531,6 +531,20 @@ static int hvmemul_do_mmio_addr(paddr_t
>      return hvmemul_do_io_addr(1, mmio_gpa, reps, size, dir, df, ram_gpa);
>  }
> 
> +static void *update_map_err(void *err, void *new)
> +{
> +    if ( err == ZERO_BLOCK_PTR || err == ERR_PTR(~X86EMUL_OKAY) )
> +        return new;
> +
> +    if ( new == ERR_PTR(~X86EMUL_OKAY) )
> +        return err;
> +
> +    if ( err == ERR_PTR(~X86EMUL_RETRY) )
> +        return new;
> +
> +    return err;
> +}
> +
>  /*
>   * Map the frame(s) covering an individual linear access, for writeable
>   * access.  May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
> @@ -544,7 +558,7 @@ static void *hvmemul_map_linear_addr(
>      struct hvm_emulate_ctxt *hvmemul_ctxt)
>  {
>      struct vcpu *curr = current;
> -    void *err, *mapping;
> +    void *err = ZERO_BLOCK_PTR, *mapping;
>      unsigned int nr_frames = ((linear + bytes - !!bytes) >> PAGE_SHIFT) -
>          (linear >> PAGE_SHIFT) + 1;
>      unsigned int i;
> @@ -600,27 +614,28 @@ static void *hvmemul_map_linear_addr(
>              goto out;
> 
>          case HVMTRANS_bad_gfn_to_mfn:
> -            err = NULL;
> -            goto out;
> +            err = update_map_err(err, NULL);
> +            continue;
> 
>          case HVMTRANS_gfn_paged_out:
>          case HVMTRANS_gfn_shared:
> -            err = ERR_PTR(~X86EMUL_RETRY);
> -            goto out;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_RETRY));
> +            continue;
> 
>          default:
> -            goto unhandleable;
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_UNHANDLEABLE));
> +            continue;
>          }
> 
>          *mfn++ = page_to_mfn(page);
> 
>          if ( p2m_is_discard_write(p2mt) )
> -        {
> -            err = ERR_PTR(~X86EMUL_OKAY);
> -            goto out;
> -        }
> +            err = update_map_err(err, ERR_PTR(~X86EMUL_OKAY));
>      }
> 
> +    if ( err != ZERO_BLOCK_PTR )
> +        goto out;
> +
>      /* Entire access within a single frame? */
>      if ( nr_frames == 1 )
>          mapping = map_domain_page(hvmemul_ctxt->mfn[0]);
> @@ -639,6 +654,7 @@ static void *hvmemul_map_linear_addr(
>      return mapping + (linear & ~PAGE_MASK);
> 
>   unhandleable:
> +    ASSERT(err == ZERO_BLOCK_PTR);
>      err = ERR_PTR(~X86EMUL_UNHANDLEABLE);
> 
>   out:
> 
> 
> 


Thread overview: 21+ messages
2018-09-12  9:09 [PATCH] x86/HVM: correct hvmemul_map_linear_addr() for multi-page case Jan Beulich
2018-09-12 11:51 ` Paul Durrant [this message]
2018-09-12 12:13   ` Jan Beulich
2018-09-13 10:12 ` [PATCH v2] " Jan Beulich
2018-09-13 11:06   ` Paul Durrant
2018-09-13 11:39     ` Jan Beulich
2018-09-13 11:41       ` Paul Durrant
2018-09-20 12:41   ` Andrew Cooper
2018-09-20 13:39     ` Jan Beulich
2018-09-20 14:13       ` Andrew Cooper
2018-09-20 14:51         ` Jan Beulich
2018-09-25 12:41     ` Jan Beulich
2018-09-25 15:30       ` Andrew Cooper
2018-09-26  9:27         ` Jan Beulich
2018-10-08 11:53         ` Jan Beulich
2019-07-31 11:26   ` [Xen-devel] " Alexandru Stefan ISAILA
2023-08-30 14:30 ` [Xen-devel] [PATCH] " Roger Pau Monné
2023-08-30 18:09   ` Andrew Cooper
2023-08-31  7:03     ` Jan Beulich
2023-08-31  8:59       ` Roger Pau Monné
2023-08-31  7:14   ` [Xen-devel] " Jan Beulich
