From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH] x86/domctl: Adjust size calculations for XEN_DOMCTL_get{_ext_vcpucontext, vcpuextstate}
Date: Mon, 28 Apr 2014 11:59:52 +0100
Message-ID: <535E34A8.7050105@citrix.com>
In-Reply-To: <535E4AE3020000780000CD7B@nat28.tlf.novell.com>

On 28/04/14 11:34, Jan Beulich wrote:
>>>> On 28.04.14 at 11:43, <andrew.cooper3@citrix.com> wrote:
>> XEN_DOMCTL_get_ext_vcpucontext suffers from the same issue but while trying to
>> fix that in similar way, I discovered that it had a genuine bug when returning
>> the count of MSRs to the toolstack.  When running the hypercall on an active
>> vcpu, the vcpu can arbitrarily alter the count returned to the toolstack by
>> clearing and setting relevant MSRs.
> Did you perhaps overlook the vcpu_pause() there?

There is a vcpu pause in the hypercall, so for the duration of the
hypercall the returned value will be consistent.
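For reference, that path is roughly shaped like this (a simplified
sketch, not the exact source; count_in_use_msrs() is a made-up
stand-in for the real counting logic):

    vcpu_pause(v);          /* target vcpu cannot run and change its MSRs */
    evc->msr_count = count_in_use_msrs(v);  /* hypothetical helper */
    /* ... copy out whichever MSRs are in use ... */
    vcpu_unpause(v);

So within a single invocation the count and the data come from a
stable snapshot.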

However, without the toolstack pausing the domain, issuing this
hypercall twice (first to get the size, then to get the data) might
still result in -ENOBUFS if the vcpu writes non-zero values to the
relevant MSRs in between the two calls.
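In other words, an unpaused domain forces the caller into a retry loop
along these lines (illustrative only; the helper name and types are
invented, not the real libxc interface, and the helper is assumed to
return 0 on success or a negative errno value):

    struct msr_entry *msrs = NULL;   /* invented type for the sketch */
    unsigned int count = 0;
    int rc;

    do {
        /* First pass: NULL buffer, Xen reports the maximum count needed. */
        rc = get_ext_vcpucontext(domid, vcpu, NULL, &count);
        if ( rc )
            break;

        free(msrs);
        msrs = count ? calloc(count, sizeof(*msrs)) : NULL;

        /* Second pass: may still fail with -ENOBUFS if the vcpu has since
         * started using more MSRs than we allocated for. */
        rc = get_ext_vcpucontext(domid, vcpu, msrs, &count);
    } while ( rc == -ENOBUFS );

Pausing the domain first removes the need for the loop.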

>
>> @@ -865,41 +867,54 @@ long arch_do_domctl(
>>              evc->vmce.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
>>              evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
>>  
>> -            i = ret = 0;
>> +            /* Count maximum number of optional msrs */
>>              if ( boot_cpu_has(X86_FEATURE_DBEXT) )
>> +                nr_msrs += 4;
>> +
>> +            if ( guest_handle_is_null(evc->msrs) ||
>> +                 (evc->msr_count < nr_msrs) )
>> +            {
>> +                evc->msr_count = nr_msrs;
>> +                ret = guest_handle_is_null(evc->msrs) ? 0 : -ENOBUFS;
>> +            }
> Won't this, with the migration part still not implemented in libxc,
> result in guests using any of these getting migrated with a non-
> zero MSR count, rather than the migration failing on the sender
> side?

Hmm.  As the missing MSR handling in migration is only transitory, I
could patch xc_domain_save() to fail explicitly if it finds
msr_count != 0.
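Something along these lines early in the save path would do it (a
rough sketch of the idea, not actual libxc code; the surrounding
variables and error reporting are assumed):

    /* MSR state cannot be carried across migration yet: refuse to start
     * a save if this vcpu actually has in-use MSRs to transfer. */
    if ( evc.msr_count != 0 )
    {
        ERROR("VCPU%u has %u in-use MSRs but MSR migration is not implemented",
              i, evc.msr_count);
        return -1;
    }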

The current state of play with migration v2 is that we have PV and HVM
migration working and I am currently collating the work into a series
for submission.

>
> I'm also not really in favor of forcing the tools to allocate memory
> for the array if in fact no MSRs are being used by the guest.

If there are no msrs to receive, then passing a NULL guest handle is
still fine.

>
>> @@ -1177,7 +1192,8 @@ long arch_do_domctl(
>>          {
>>              unsigned int size = PV_XSAVE_SIZE(v->arch.xcr0_accum);
>>  
>> -            if ( !evc->size && !evc->xfeature_mask )
>> +            if ( (evc->size == 0 && evc->xfeature_mask == 0) ||
>> +                 guest_handle_is_null(domctl->u.vcpuextstate.buffer) )
> Any strong reason not to keep the shorter ! operators?

I personally find it clearer than using !, but can change back.

>
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -594,11 +594,13 @@ struct xen_domctl_ext_vcpucontext {
>>      uint8_t          syscall32_disables_events;
>>      uint8_t          sysenter_disables_events;
>>      /*
>> -     * When, for the "get" version, msr_count is too small to cover all MSRs
>> -     * the hypervisor needs to be saved, the call will return -ENOBUFS and
>> -     * set msr_count to the required (minimum) value. Furthermore, for both
>> -     * "get" and "set", that field as well as the msrs one only get looked at
>> -     * if the size field above covers the structure up to the entire msrs one.
>> +     * Number of msr entries pointed to by the 'msrs' guest handle.
>> +     *
>> +     * For the "get" subop, if the guest handle is NULL, Xen shall write back
>> +     * the maximum number of msrs it might save.  If msr_count is fewer than
> I think there's "If the handle is non-NULL" missing at the beginning of
> the second sentence.
>
> Jan

Ok - that is more clear.

~Andrew

>
>> +     * the maximum, Xen shall return -ENOBUFS.  When Xen actually writes into
>> +     * the buffer, this field shall reflect the actual number of msrs written
>> +     * which might be fewer than the maximum, if some MSRs are not in use.
>
>

Thread overview: 7+ messages
2014-04-28  9:43 [PATCH] x86/domctl: Adjust size calculations for XEN_DOMCTL_get{_ext_vcpucontext, vcpuextstate} Andrew Cooper
2014-04-28 10:34 ` Jan Beulich
2014-04-28 10:59   ` Andrew Cooper [this message]
2014-04-28 11:37     ` Jan Beulich
2014-04-28 12:26       ` Andrew Cooper
2014-04-28 13:45         ` Jan Beulich
2014-04-28 13:53           ` Andrew Cooper
