xen-devel.lists.xenproject.org archive mirror
From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Olaf Hering <olaf@aepfle.de>, "Tim (Xen.org)" <tim@xen.org>
Subject: Re: [PATCH v3 1/3] x86/HVM: drop hvm_fetch_from_guest_linear()
Date: Thu, 6 Sep 2018 14:51:09 +0000
Message-ID: <8a8b2566526c4b5cb146cbbf8304e5bd@AMSPEX02CL03.citrite.net>
In-Reply-To: <5B91259202000078001E5F17@prv1-mh.provo.novell.com>

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 06 September 2018 14:03
> To: xen-devel <xen-devel@lists.xenproject.org>
> Cc: Olaf Hering <olaf@aepfle.de>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>;
> Tim (Xen.org) <tim@xen.org>
> Subject: [PATCH v3 1/3] x86/HVM: drop hvm_fetch_from_guest_linear()
> 
> It can easily be expressed through hvm_copy_from_guest_linear(), and in
> two cases this even simplifies callers.
> 
> Suggested-by: Paul Durrant <paul.durrant@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Tested-by: Olaf Hering <olaf@aepfle.de>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
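
(For reference, the equivalence the patch relies on is straightforward: the
dropped helper only differed from hvm_copy_from_guest_linear() by forcing
PFEC_insn_fetch into the walk flags. Roughly, as a sketch with an illustrative
name rather than code from the tree:

    /* Hypothetical stand-in for the dropped hvm_fetch_from_guest_linear():
     * just the plain copy routine with the instruction-fetch bit ORed in. */
    static enum hvm_translation_result fetch_equiv(
        void *buf, unsigned long addr, int size, uint32_t pfec,
        pagefault_info_t *pfinfo)
    {
        return hvm_copy_from_guest_linear(buf, addr, size,
                                          pfec | PFEC_insn_fetch, pfinfo);
    }

hence the callers below now OR PFEC_insn_fetch into pfec / the walk mask
themselves.)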

> ---
> v2: Make sure this compiles standalone. Slightly adjust change in
>     hvm_ud_intercept().
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1060,6 +1060,8 @@ static int __hvmemul_read(
>          pfec |= PFEC_implicit;
>      else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
>          pfec |= PFEC_user_mode;
> +    if ( access_type == hvm_access_insn_fetch )
> +        pfec |= PFEC_insn_fetch;
> 
>      rc = hvmemul_virtual_to_linear(
>          seg, offset, bytes, &reps, access_type, hvmemul_ctxt, &addr);
> @@ -1071,9 +1073,7 @@ static int __hvmemul_read(
>           (vio->mmio_gla == (addr & PAGE_MASK)) )
>          return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
>                                          hvmemul_ctxt, 1);
> 
> -    rc = ((access_type == hvm_access_insn_fetch) ?
> -          hvm_fetch_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo) :
> -          hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo));
> +    rc = hvm_copy_from_guest_linear(p_data, addr, bytes, pfec, &pfinfo);
> 
>      switch ( rc )
>      {
> @@ -2512,9 +2512,10 @@ void hvm_emulate_init_per_insn(
>                                          hvm_access_insn_fetch,
>                                          &hvmemul_ctxt->seg_reg[x86_seg_cs],
>                                          &addr) &&
> -             hvm_fetch_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
> -                                         sizeof(hvmemul_ctxt->insn_buf),
> -                                         pfec, NULL) == HVMTRANS_okay) ?
> +             hvm_copy_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
> +                                        sizeof(hvmemul_ctxt->insn_buf),
> +                                        pfec | PFEC_insn_fetch,
> +                                        NULL) == HVMTRANS_okay) ?
>              sizeof(hvmemul_ctxt->insn_buf) : 0;
>      }
>      else
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3286,15 +3286,6 @@ enum hvm_translation_result hvm_copy_fro
>                        PFEC_page_present | pfec, pfinfo);
>  }
> 
> -enum hvm_translation_result hvm_fetch_from_guest_linear(
> -    void *buf, unsigned long addr, int size, uint32_t pfec,
> -    pagefault_info_t *pfinfo)
> -{
> -    return __hvm_copy(buf, addr, size, current,
> -                      HVMCOPY_from_guest | HVMCOPY_linear,
> -                      PFEC_page_present | PFEC_insn_fetch | pfec, pfinfo);
> -}
> -
>  unsigned long copy_to_user_hvm(void *to, const void *from, unsigned int len)
>  {
>      int rc;
> @@ -3740,16 +3731,16 @@ void hvm_ud_intercept(struct cpu_user_re
>      if ( opt_hvm_fep )
>      {
>          const struct segment_register *cs = &ctxt.seg_reg[x86_seg_cs];
> -        uint32_t walk = (ctxt.seg_reg[x86_seg_ss].dpl == 3)
> -            ? PFEC_user_mode : 0;
> +        uint32_t walk = ((ctxt.seg_reg[x86_seg_ss].dpl == 3)
> +                         ? PFEC_user_mode : 0) | PFEC_insn_fetch;
>          unsigned long addr;
>          char sig[5]; /* ud2; .ascii "xen" */
> 
>          if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
>                                          sizeof(sig), hvm_access_insn_fetch,
>                                          cs, &addr) &&
> -             (hvm_fetch_from_guest_linear(sig, addr, sizeof(sig),
> -                                          walk, NULL) == HVMTRANS_okay) &&
> +             (hvm_copy_from_guest_linear(sig, addr, sizeof(sig),
> +                                         walk, NULL) == HVMTRANS_okay) &&
>               (memcmp(sig, "\xf\xbxen", sizeof(sig)) == 0) )
>          {
>              regs->rip += sizeof(sig);
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -164,8 +164,9 @@ const struct x86_emulate_ops *shadow_ini
>          (!hvm_translate_virtual_addr(
>              x86_seg_cs, regs->rip, sizeof(sh_ctxt->insn_buf),
>              hvm_access_insn_fetch, sh_ctxt, &addr) &&
> -         !hvm_fetch_from_guest_linear(
> -             sh_ctxt->insn_buf, addr, sizeof(sh_ctxt->insn_buf), 0, NULL))
> +         !hvm_copy_from_guest_linear(
> +             sh_ctxt->insn_buf, addr, sizeof(sh_ctxt->insn_buf),
> +             PFEC_insn_fetch, NULL))
>          ? sizeof(sh_ctxt->insn_buf) : 0;
> 
>      return &hvm_shadow_emulator_ops;
> @@ -198,8 +199,9 @@ void shadow_continue_emulation(struct sh
>              (!hvm_translate_virtual_addr(
>                  x86_seg_cs, regs->rip, sizeof(sh_ctxt->insn_buf),
>                  hvm_access_insn_fetch, sh_ctxt, &addr) &&
> -             !hvm_fetch_from_guest_linear(
> -                 sh_ctxt->insn_buf, addr, sizeof(sh_ctxt->insn_buf), 0, NULL))
> +             !hvm_copy_from_guest_linear(
> +                 sh_ctxt->insn_buf, addr, sizeof(sh_ctxt->insn_buf),
> +                 PFEC_insn_fetch, NULL))
>              ? sizeof(sh_ctxt->insn_buf) : 0;
>          sh_ctxt->insn_buf_eip = regs->rip;
>      }
> --- a/xen/arch/x86/mm/shadow/hvm.c
> +++ b/xen/arch/x86/mm/shadow/hvm.c
> @@ -122,10 +122,10 @@ hvm_read(enum x86_segment seg,
>      if ( rc || !bytes )
>          return rc;
> 
> -    if ( access_type == hvm_access_insn_fetch )
> -        rc = hvm_fetch_from_guest_linear(p_data, addr, bytes, 0, &pfinfo);
> -    else
> -        rc = hvm_copy_from_guest_linear(p_data, addr, bytes, 0, &pfinfo);
> +    rc = hvm_copy_from_guest_linear(p_data, addr, bytes,
> +                                    (access_type == hvm_access_insn_fetch
> +                                     ? PFEC_insn_fetch : 0),
> +                                    &pfinfo);
> 
>      switch ( rc )
>      {
> --- a/xen/include/asm-x86/hvm/support.h
> +++ b/xen/include/asm-x86/hvm/support.h
> @@ -100,9 +100,6 @@ enum hvm_translation_result hvm_copy_to_
>  enum hvm_translation_result hvm_copy_from_guest_linear(
>      void *buf, unsigned long addr, int size, uint32_t pfec,
>      pagefault_info_t *pfinfo);
> -enum hvm_translation_result hvm_fetch_from_guest_linear(
> -    void *buf, unsigned long addr, int size, uint32_t pfec,
> -    pagefault_info_t *pfinfo);
> 
>  /*
>   * Get a reference on the page under an HVM physical or linear address.  If
> 
> 


