xen-devel.lists.xenproject.org archive mirror
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v3 3/3] x86: Clean up the Xen MSR infrastructure
Date: Wed, 12 Sep 2018 09:46:31 +0000	[thread overview]
Message-ID: <ea9eb527a924f2b42f7d84e607be8ac41fc988ac.camel@citrix.com> (raw)
In-Reply-To: <bdd460f3-0487-0ab7-cff6-db1c313571ee@citrix.com>

On Wed, 2018-09-12 at 10:12 +0100, Andrew Cooper wrote:
> On 12/09/18 09:29, Sergey Dyasli wrote:
> > On Tue, 2018-09-11 at 19:56 +0100, Andrew Cooper wrote:
> > > Rename them to guest_{rd,wr}msr_xen() for consistency, and because the _regs
> > > suffix isn't very appropriate.
> > > 
> > > Update them to take a vcpu pointer rather than presuming that they act on
> > > current, and switch to using X86EMUL_* return values.
> > > 
> > > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > > ---
> > > CC: Jan Beulich <JBeulich@suse.com>
> > > CC: Wei Liu <wei.liu2@citrix.com>
> > > CC: Roger Pau Monné <roger.pau@citrix.com>
> > > CC: Sergey Dyasli <sergey.dyasli@citrix.com>
> > > 
> > > v3:
> > >  * Clean up after splitting the series.
> > > ---
> > >  xen/arch/x86/msr.c              |  6 ++----
> > >  xen/arch/x86/traps.c            | 29 +++++++++++++----------------
> > >  xen/include/asm-x86/processor.h |  4 ++--
> > >  3 files changed, 17 insertions(+), 22 deletions(-)
> > > 
> > > diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> > > index cf0dc27..8f02a89 100644
> > > --- a/xen/arch/x86/msr.c
> > > +++ b/xen/arch/x86/msr.c
> > > @@ -156,8 +156,7 @@ int guest_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> > >  
> > >          /* Fallthrough. */
> > >      case 0x40000200 ... 0x400002ff:
> > > -        ret = (rdmsr_hypervisor_regs(msr, val)
> > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > +        ret = guest_rdmsr_xen(v, msr, val);
> > >          break;
> > >  
> > >      default:
> > > @@ -277,8 +276,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
> > >  
> > >          /* Fallthrough. */
> > >      case 0x40000200 ... 0x400002ff:
> > > -        ret = (wrmsr_hypervisor_regs(msr, val) == 1
> > > -               ? X86EMUL_OKAY : X86EMUL_EXCEPTION);
> > > +        ret = guest_wrmsr_xen(v, msr, val);
> > >          break;
> > >  
> > >      default:
> > > diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> > > index 7c17806..3988753 100644
> > > --- a/xen/arch/x86/traps.c
> > > +++ b/xen/arch/x86/traps.c
> > > @@ -768,29 +768,25 @@ static void do_trap(struct cpu_user_regs *regs)
> > >            trapnr, trapstr(trapnr), regs->error_code);
> > >  }
> > >  
> > > -/* Returns 0 if not handled, and non-0 for success. */
> > > -int rdmsr_hypervisor_regs(uint32_t idx, uint64_t *val)
> > > +int guest_rdmsr_xen(const struct vcpu *v, uint32_t idx, uint64_t *val)
> > >  {
> > > -    struct domain *d = current->domain;
> > > +    const struct domain *d = v->domain;
> > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > >  
> > >      switch ( idx - base )
> > >      {
> > >      case 0: /* Write hypercall page MSR.  Read as zero. */
> > > -    {
> > >          *val = 0;
> > > -        return 1;
> > > -    }
> > > +        return X86EMUL_OKAY;
> > >      }
> > >  
> > > -    return 0;
> > > +    return X86EMUL_EXCEPTION;
> > >  }
> > >  
> > > -/* Returns 1 if handled, 0 if not and -Exx for error. */
> > > -int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > > +int guest_wrmsr_xen(struct vcpu *v, uint32_t idx, uint64_t val)
> > >  {
> > > -    struct domain *d = current->domain;
> > > +    struct domain *d = v->domain;
> > >      /* Optionally shift out of the way of Viridian architectural MSRs. */
> > >      uint32_t base = is_viridian_domain(d) ? 0x40000200 : 0x40000000;
> > >  
> > > @@ -809,7 +805,7 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > >              gdprintk(XENLOG_WARNING,
> > >                       "wrmsr hypercall page index %#x unsupported\n",
> > >                       page_index);
> > > -            return 0;
> > > +            return X86EMUL_EXCEPTION;
> > >          }
> > >  
> > >          page = get_page_from_gfn(d, gmfn, &t, P2M_ALLOC);
> > > @@ -822,13 +818,13 @@ int wrmsr_hypervisor_regs(uint32_t idx, uint64_t val)
> > >              if ( p2m_is_paging(t) )
> > >              {
> > >                  p2m_mem_paging_populate(d, gmfn);
> > > -                return -ERESTART;
> > > +                return X86EMUL_RETRY;
> > 
> > Previously -ERESTART would've been converted to X86EMUL_EXCEPTION. But
> > with this patch, X86EMUL_RETRY will actually be returned. I don't think
> > that callers can handle this situation.
> > 
> > E.g. the code from vmx_vmexit_handler():
> > 
> >     case EXIT_REASON_MSR_WRITE:
> >         switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
> >         {
> >         case X86EMUL_OKAY:
> >             update_guest_eip(); /* Safe: WRMSR */
> >             break;
> > 
> >         case X86EMUL_EXCEPTION:
> >             hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >             break;
> >         }
> >         break;
> 
> Hmm lovely, so it was broken before, but should be correct now.
> 
> RETRY has caused an entry to go onto the paging ring, which will pause
> the vcpu until a reply occurs, after which we will re-enter the guest
> without having moved RIP forwards, re-execute the wrmsr instruction, and
> this time succeed because the frame has been paged in.

Actually, the current VMX/SVM (but not PV) code does:

        switch ( wrmsr_hypervisor_regs(msr, msr_content) )
        {
        case -ERESTART:
            return X86EMUL_RETRY;

This code is removed by patch 1/3, but I wasn't CCed on that one.
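
Purely for illustration (a sketch pieced together from the vmx_vmexit_handler()
snippet quoted above and your description of the RETRY path, not what any tree
actually contains), handling X86EMUL_RETRY at the exit handler would amount to
simply not advancing RIP, so the vcpu re-executes the WRMSR once the frame has
been paged in:

    case EXIT_REASON_MSR_WRITE:
        switch ( hvm_msr_write_intercept(regs->ecx, msr_fold(regs), 1) )
        {
        case X86EMUL_OKAY:
            update_guest_eip(); /* Safe: WRMSR */
            break;

        case X86EMUL_RETRY:
            /*
             * Paging work has been queued and the vcpu will be paused;
             * leave RIP untouched so the WRMSR is re-executed after the
             * frame is populated.  (Hypothetical case label, added here
             * only to illustrate the point.)
             */
            break;

        case X86EMUL_EXCEPTION:
            hvm_inject_hw_exception(TRAP_gp_fault, 0);
            break;
        }
        break;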

-- 
Thanks,
Sergey


Thread overview: 19+ messages
2018-09-11 18:56 [PATCH v3 0/3] x86: Cleanup of MSR handling for Xen and Viridian ranges Andrew Cooper
2018-09-11 18:56 ` [PATCH v3 1/3] x86/msr: Dispatch Xen and Viridian MSRs from guest_{wr, rd}msr() Andrew Cooper
2018-09-12 12:00   ` [PATCH v4 " Andrew Cooper
2018-09-13  7:54     ` Sergey Dyasli
2018-09-13 12:25     ` Jan Beulich
2018-09-11 18:56 ` [PATCH v3 2/3] x86/viridan: Clean up Viridian MSR infrastructure Andrew Cooper
2018-09-12  8:14   ` Sergey Dyasli
2018-09-13 12:28   ` Jan Beulich
2018-09-11 18:56 ` [PATCH v3 3/3] x86: Clean up the Xen " Andrew Cooper
2018-09-12  8:29   ` Sergey Dyasli
2018-09-12  9:12     ` Andrew Cooper
2018-09-12  9:17       ` Jan Beulich
2018-09-12  9:24         ` Andrew Cooper
2018-09-12 10:05           ` Jan Beulich
2018-09-12 12:02             ` Andrew Cooper
2018-09-12  9:46       ` Sergey Dyasli [this message]
2018-09-12 10:23         ` Andrew Cooper
2018-09-13  7:57           ` Sergey Dyasli
2018-09-13 12:30             ` Jan Beulich
