From: Paul Durrant <Paul.Durrant@citrix.com>
To: Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Keir (Xen.org)" <keir@xen.org>,
	Chang Jianzhong <changjzh@gmail.com>
Subject: Re: [PATCH v2] x86/HVM: fix forwarding of internally cached requests
Date: Tue, 29 Mar 2016 10:43:57 +0000
Message-ID: <a9c0727005b3481e86f4efb84cf220ab@AMSPEX02CL03.citrite.net>
In-Reply-To: <56FA779702000078000E0D0B@prv-mh.provo.novell.com>

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 29 March 2016 11:40
> To: xen-devel
> Cc: Andrew Cooper; Paul Durrant; Chang Jianzhong; Keir (Xen.org)
> Subject: [PATCH v2] x86/HVM: fix forwarding of internally cached requests
> 
> Forwarding an entire batch to the device model when an individual
> iteration of it gets rejected by an internal device emulation handler
> with X86EMUL_UNHANDLEABLE is wrong: The device model would then handle
> all iterations, without the internal handler getting to see any past
> the one it returned failure for. This causes misbehavior in at least
> the MSI-X and VGA code, which want to see all such requests for
> internal tracking/caching purposes. Note that this does not apply to
> buffered I/O requests.
> 
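A minimal toy of the dispatch rule described above (plain C; none of
the names below are the actual Xen symbols, and the handler's cut-off
at three iterations is invented for illustration):

    #include <stdio.h>

    enum { X86EMUL_OKAY, X86EMUL_UNHANDLEABLE };

    /* Stand-in for an internal handler (MSI-X, VGA, ...) that can only
     * deal with the first few iterations of a batch. */
    static int internal_handler(unsigned int i)
    {
        return i < 3 ? X86EMUL_OKAY : X86EMUL_UNHANDLEABLE;
    }

    int main(void)
    {
        unsigned int i, count = 8;

        for ( i = 0; i < count; i++ )
            if ( internal_handler(i) != X86EMUL_OKAY )
                break;

        if ( i )
            printf("OKAY, clip count to %u; the emulator retries the rest\n", i);
        else
            printf("forward one iteration (count = 1) to the device model\n");

        return 0;
    }

Forwarding the whole batch at the point of failure would instead hand
iterations 3..7 to the device model behind the internal handler's back.
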
> This in turn means that the condition in hvm_process_io_intercept() for
> when to crash the domain was wrong: Since X86EMUL_UNHANDLEABLE can
> validly be returned by the individual device handlers, we mustn't
> blindly crash the domain when that value shows up on an iteration other
> than the first. Instead we need to distinguish hvm_copy_*_guest_phys()
> failures from device-specific ones; the former always need to be fatal
> to the domain (i.e. also on the first iteration), since otherwise we
> would again end up forwarding to qemu a request which the internal
> handler didn't get to see.
> 
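To make that distinction concrete, a hedged sketch of the resulting
policy as a plain-C toy (all names below are made up, not the Xen
declarations):

    #include <stdbool.h>

    enum outcome { HANDLED, DEV_UNHANDLEABLE, COPY_FAILURE };

    /* A copy failure is always fatal; a device handler's rejection
     * merely means "this handler is done with the batch". */
    static bool must_crash(enum outcome o, unsigned int iteration)
    {
        (void)iteration;          /* deliberately unused ... */
        return o == COPY_FAILURE; /* ... fatal even when iteration == 0 */
    }

    int main(void)
    {
        return must_crash(COPY_FAILURE, 0) ? 0 : 1;
    }

Before the fix the decision was effectively "iteration != 0 && not
handled", which both crashed the domain on legitimate device rejections
and let first-iteration copy failures leak through to qemu.
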
> The adjustment should be okay even for stdvga's MMIO handling (a toy
> sketch of the accept-gating follows the list):
> - if it is not caching then the accept function would have failed, so
>   we won't get into hvm_process_io_intercept() at all,
> - if it issued the buffered ioreq then we only get to the p->count
>   reduction if hvm_send_ioreq() actually encountered an error (in which
>   case we don't care about the request getting split up).
> 
> Also commit 4faffc41d ("x86/hvm: limit reps to avoid the need to handle
> retry") went too far in removing code from hvm_process_io_intercept():
> When there were successfully handled iterations, the function should
> continue to return success with a clipped repeat count.
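
As a worked example of the restored behavior (plain C; clip_reps() is a
made-up helper mirroring the tail of the patch, not a Xen function):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Report only the reps actually handled, and flag whether the
     * emulator must come back for the remainder. */
    static void clip_reps(unsigned long *reps, unsigned long done, bool *retry)
    {
        assert(done && done <= *reps);
        *retry = done < *reps;
        *reps = done;
    }

    int main(void)
    {
        unsigned long reps = 8;
        bool retry;

        clip_reps(&reps, 3, &retry);        /* 3 of 8 handled internally */
        printf("reps=%lu retry=%d\n", reps, retry); /* reps=3 retry=1 */
        return 0;
    }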
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> Cc: Chang Jianzhong <changjzh@gmail.com>
> ---
> v2: Make hvm_process_io_intercept() always return a valid X86EMUL_*
>     code by moving the domain_crash() invocations up. Add a stdvga
>     related paragraph to the commit message.
> ---
> I assume this also addresses the issue which
> http://lists.xenproject.org/archives/html/xen-devel/2016-01/msg03189.html
> attempted to deal with in a not really acceptable way.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -95,7 +95,7 @@ static const struct hvm_io_handler null_
>  };
> 
>  static int hvmemul_do_io(
> -    bool_t is_mmio, paddr_t addr, unsigned long reps, unsigned int size,
> +    bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
>      uint8_t dir, bool_t df, bool_t data_is_addr, uintptr_t data)
>  {
>      struct vcpu *curr = current;
> @@ -104,7 +104,7 @@ static int hvmemul_do_io(
>          .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
>          .addr = addr,
>          .size = size,
> -        .count = reps,
> +        .count = *reps,
>          .dir = dir,
>          .df = df,
>          .data = data,
> @@ -136,7 +136,7 @@ static int hvmemul_do_io(
>          if ( (p.type != is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO) ||
>               (p.addr != addr) ||
>               (p.size != size) ||
> -             (p.count != reps) ||
> +             (p.count != *reps) ||
>               (p.dir != dir) ||
>               (p.df != df) ||
>               (p.data_is_ptr != data_is_addr) )
> @@ -214,7 +214,7 @@ static int hvmemul_do_io_buffer(
> 
>      BUG_ON(buffer == NULL);
> 
> -    rc = hvmemul_do_io(is_mmio, addr, *reps, size, dir, df, 0,
> +    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 0,
>                         (uintptr_t)buffer);
>      if ( rc == X86EMUL_UNHANDLEABLE && dir == IOREQ_READ )
>          memset(buffer, 0xff, size);
> @@ -305,13 +305,13 @@ static int hvmemul_do_io_addr(
>          count = 1;
>      }
> 
> -    rc = hvmemul_do_io(is_mmio, addr, count, size, dir, df, 1,
> +    rc = hvmemul_do_io(is_mmio, addr, &count, size, dir, df, 1,
>                         ram_gpa);
> +
>      if ( rc == X86EMUL_OKAY )
> -    {
>          v->arch.hvm_vcpu.hvm_io.mmio_retry = (count < *reps);
> -        *reps = count;
> -    }
> +
> +    *reps = count;
> 
>   out:
>      while ( nr_pages )
> --- a/xen/arch/x86/hvm/intercept.c
> +++ b/xen/arch/x86/hvm/intercept.c
> @@ -148,8 +148,8 @@ int hvm_process_io_intercept(const struc
>                      ASSERT_UNREACHABLE();
>                      /* fall through */
>                  default:
> -                    rc = X86EMUL_UNHANDLEABLE;
> -                    break;
> +                    domain_crash(current->domain);
> +                    return X86EMUL_UNHANDLEABLE;
>                  }
>                  if ( rc != X86EMUL_OKAY )
>                      break;
> @@ -178,8 +178,8 @@ int hvm_process_io_intercept(const struc
>                      ASSERT_UNREACHABLE();
>                      /* fall through */
>                  default:
> -                    rc = X86EMUL_UNHANDLEABLE;
> -                    break;
> +                    domain_crash(current->domain);
> +                    return X86EMUL_UNHANDLEABLE;
>                  }
>                  if ( rc != X86EMUL_OKAY )
>                      break;
> @@ -196,8 +196,20 @@ int hvm_process_io_intercept(const struc
>          }
>      }
> 
> -    if ( i != 0 && rc == X86EMUL_UNHANDLEABLE )
> -        domain_crash(current->domain);
> +    if ( i )
> +    {
> +        p->count = i;
> +        rc = X86EMUL_OKAY;
> +    }
> +    else if ( rc == X86EMUL_UNHANDLEABLE )
> +    {
> +        /*
> +         * Don't forward entire batches to the device model: This would
> +         * prevent the internal handlers from seeing subsequent
> +         * iterations of the request.
> +         */
> +        p->count = 1;
> +    }
> 
>      return rc;
>  }
> 


